Standard Climate Models Radiation Codes Underestimate Black Carbon Radiative Forcing
Radiative forcing (RF) of black carbon (BC) in the atmosphere is estimated using radiative transfer codes of various complexities. Here we show that the two-stream radiative transfer codes used most in climate models give too strong forward scattering, leading to enhanced absorption at the surface and too weak absorption by BC in the atmosphere. Such calculations are found to underestimate the positive RF of BC by 10 % for global mean, all sky conditions, relative to the more sophisticated multi-stream models. The underestimation occurs primarily for low surface albedo, even though BC is more efficient for absorption of solar radiation over high surface albedo.
When estimating BC RF, the radiative transfer code is a crucial component. Accurate results can be achieved by using multi-stream line-by-line codes. However, these calculations are computationally demanding and are usually not applied for global scale simulations. In present climate models, simplified radiation schemes of various complexity are therefore used and compared against line-by-line results, and each other, as consistency checks.
Several radiation intercomparison exercises have taken place (Boucher et al., 1998; Collins et al., 2006; Ellingson et al., 1991; Forster et al., 2005, 2011; Myhre et al., 2009b; Randles et al., 2013), yielding important suggestions for improvement to the radiative transfer codes. Randles et al. (2013) found that many of the presently used radiative transfer codes underestimate the radiative effect of absorbing aerosols, relative to benchmark multi-stream line-by-line codes. Further, one of the radiative transfer codes was run both as a multi-stream code resembling the benchmark codes, and as a two-stream code resembling the simpler codes used in climate models. These two codes were denoted as numbers 3 and 4, respectively, in Randles et al. (2013) and are used in the current work. The results indicated that the number of streams in the radiative transfer calculation, i.e. the number of angles through which radiation is allowed to scatter, is crucial for the differences found between the radiation codes. On average, the simpler codes underestimated the radiative effect of BC by the order of 10-15 % relative to the benchmark line-by-line codes. In the present study we further investigate this potential underestimation of BC RF in many of the global climate models, and develop a physical understanding of why it occurs.
Models and methods
Simulations in the present paper were performed with a radiative transfer code using the discrete ordinate method (Stamnes et al., 1988). This model has previously been run in idealized experiments with prescribed vertical profiles of aerosol extinction (Randles et al., 2013) and used for global climate simulations (Myhre et al., 2009a). The radiative transfer code was run either in a multi-stream mode (eight streams) or with two streams and the Delta-M method (Wiscombe, 1977). In the global simulations we used meteorological data from ECMWF, and specified aerosol optical properties (Myhre et al., 2009a) and aerosol distribution from the OsloCTM2 chemical transport model (Skeie et al., 2011).
To study the impact of the radiation code on global mean RF of BC, input fields and results for several aerosol components from the OsloCTM2 part of AeroCom Phase II were used. Here, aerosol BC abundances were specified for 1850 and 2000, and anthropogenic RF was defined as the difference in outgoing top-of-atmosphere shortwave radiative flux between these 2 years (Myhre et al., 2013).
Global distribution of underestimated BC RF in models
Figure 1a shows the global mean, clear sky direct effect RF of BC, for a two-stream simulation relative to a simulation with eight streams. As in Randles et al. (2013) we find that the two-stream calculation tends to give lower RF than the eight-stream one. The underestimation in the two-stream simulation is shown here to be largest over ocean, with low surface albedo, whereas over regions with high surface albedo the two-stream more closely reproduces the eight-stream simulation. Under clear sky conditions, the global, annual mean underestimation is 15 % (0.158 vs. 0.187 W m−2) in the two-stream relative to eight-stream simulation (RF (two-stream) divided by RF (eight-stream)).
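As a quick check of the percentage quoted above, the underestimation follows directly from the two stated global, annual mean fluxes; the short Python sketch below simply forms the two-stream to eight-stream ratio using the clear-sky values from the text.

```python
# Clear-sky, global annual mean BC RF values quoted in the text (W m^-2).
rf_two_stream = 0.158
rf_eight_stream = 0.187

# Ratio of two-stream to eight-stream RF, and the implied underestimation.
ratio = rf_two_stream / rf_eight_stream
underestimation = (1.0 - ratio) * 100.0  # per cent

print(f"two-/eight-stream ratio: {ratio:.3f}")      # → 0.845
print(f"underestimation: {underestimation:.1f} %")  # → 15.5 %, quoted as 15 %
```

The same arithmetic applied to the all-sky values given later (0.254 vs. 0.283 W m−2) reproduces the quoted 10 % figure.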
The albedo of clouds is also affected by the number of streams adopted in the radiative transfer simulations. This makes the top-of-atmosphere reflected solar radiation increase in two-stream calculations, relative to eight-stream simulations. For all sky conditions, the global mean underestimation of RF in the two-stream simulation amounts to 7 %. However, modifying the cloud scattering to get a top-of-atmosphere solar flux similar to that in the eight-stream simulation, and close to measured fluxes, leads to a 10 % underestimation in the two-stream simulation relative to the eight-stream simulation (0.254 vs. 0.283 W m−2). The largest underestimation is over ocean, and over regions with small cloud cover, as shown in Fig. 1b.
Underestimation of BC RF as a function of altitude
Global mean RF of BC, as a function of BC located at various altitudes, is shown in Fig. 2. The figure shows results for both two-stream and eight-stream simulations. A similar curve has previously been presented in Samset and Myhre (2011) for eight-stream simulations. The present curve is slightly modified, due to updated ozone and cloud fields. The same approach as in Sect. 3.1, to keep cloud scattering and therefore top-of-atmosphere radiative flux for the two-stream simulation equal to the eight-stream simulation, has been applied.
Figure 2a clearly shows the increasing normalized RF (RF exerted per unit aerosol burden) by BC as a function of altitude, due to the enhanced effect of absorbing material above scattering components. The underestimation in the two-stream simulation is similar in magnitude for clear sky and all sky conditions, but is, in relative terms, larger for clear sky due to smaller absolute values (Fig. 2b).
For the all sky simulation the underestimation by the two-stream vs. the eight-stream simulation is close to 10 % for BC at all altitudes, except below 900 hPa. Being above scattering components such as clouds increases the absorption by BC, as does the presence of scattering aerosol types and Rayleigh scattering. Absorption by gases such as ozone and water vapour, as well as absorption by other aerosol types, reduces the absorption by BC. For all sky conditions, Fig. 2 shows a large degree of compensation by scattering and absorption by gases and aerosol types other than BC. In a model simulation with only BC in the atmosphere, the normalized RF of BC was found to be 1 % higher in two-stream simulations than in eight-stream simulations, showing the importance of the other atmospheric components for the correct determination of BC RF.
Physical description of the underestimation of BC RF
The radiative forcing due to aerosols is known to be a strong function of surface albedo (Haywood and Shine, 1997). This is illustrated in Fig. 3a, where the radiative effect of aerosols with different single scattering albedo has been calculated as a function of surface albedo. We reproduce the well-known characteristics of largest impact of absorbing aerosols over bright surfaces, and of scattering aerosols over dark surfaces.
Figure 3b shows the difference between two-stream and eight-stream calculations, as a function of surface albedo, and for a range of aerosol single scattering albedos. Two-stream and eight-stream results deviate substantially between surface albedos of 0.05 and 0.2. These are surface albedo values where absorbing aerosols have a relatively weak radiative effect. An increasing single scattering albedo gives increasing underestimations in two-stream results (Fig. 3b) and at the same time a decreasing radiative effect (Fig. 3a).
Our interpretation of the cause of the underestimation in two-stream results relative to multi-stream (containing more than two streams) results is a lack of sufficient multiple scattering in connection with forward scattering and low surface albedo. Under such conditions the scattering is too strong in the forward direction in two-stream approaches. In addition, the low surface albedo, and thus strong surface absorption, hinders further multiple scattering. Multiple scattering in general enhances the radiative effect of absorbing aerosols.
To illustrate the importance of multiple scattering for the abovementioned underestimation, additional simulations show that purely absorbing aerosols in a non-scattering atmosphere give differences between two-stream and multi-stream results of only a few percent (less than 2 %), which is the typical deviation shown in Fig. 3b, except at low surface albedo. The agreement between eight-stream and even higher numbers of streams, such as 16-stream simulations, is generally within 1 %, except for very small absolute RF values. Simulations with four streams are generally close to eight-stream simulations. For purely scattering aerosols, two-stream simulations vary with solar zenith angle (see Randles et al., 2013) and surface albedo compared to eight-stream simulations; on a global mean basis, the negative RF for anthropogenic sulphate aerosols is 5 % stronger. The results shown in Fig. 3 are for a solar zenith angle of 30°, but are generally applicable for other solar zenith angles. However, note that the critical single scattering albedo for transitioning from positive to negative radiative effect decreases with increasing solar zenith angle. The underestimation shown in Randles et al. (2013) can also be seen in Fig. 3b, at around 10 %, for a single scattering albedo of 0.75 (close to the 0.8 used in that paper) and a surface albedo of 0.2.
Conclusions
Two-stream approximations using the Delta-M method, as employed by a majority of present climate models, are found to be relatively accurate for absorbing aerosols. The exception is over areas with low surface albedo. Here, the enhanced forward scattering hinders sufficient multiple scattering, causing an underestimation of the radiative effect of BC. Low albedo occurs in regions with low cloud cover and low surface albedo, such as ocean and snow-free forest. In such cases the underestimation relative to more advanced radiation schemes can be of the order of 20-25 %. The underestimation for BC is largest in the presence of scattering components. This also applies to gases with solar absorption. However, under clear sky conditions, an underestimation of a magnitude similar to that for BC will only be caused by gases with solar absorption in the UV and visible regions, where Rayleigh scattering is strong. Thus ozone in the lower troposphere is the only gas that is substantially influenced by the number of streams in the radiative transfer simulations. For a global increase in water vapour by 20 % in the lowest 1-2 km of the atmosphere, the difference between two-stream and eight-stream simulations is found to be less than 1 %.
On a global scale, we simulate a 10 % underestimation of the RF of BC for all sky conditions, and 15 % for clear sky, for two-stream relative to eight-stream simulations. The clear sky results for selected profiles and solar zenith angles in Randles et al. (2013) showed an average model underestimation of between 12 and 15 % compared to benchmark model simulations. The implication of the underestimation is that recent estimates of global mean RF due to BC, e.g., in Myhre et al. (2013) and Bond et al. (2013), where the latter is based on radiative transfer calculations in Schulz et al. (2006), could be up to 10 % too weak, as they are primarily based on models with two-stream radiative transfer codes. It must however be noted that other issues related to radiative transfer codes may lead to compensation of this underestimation, or to additional underestimation. In addition, uncertainties in the abundance of BC, and in its optical properties, are much larger than 10 %. The burden of BC and the normalized RF have standard deviations of the order of 50 % relative to mean values for the 15 global aerosol models in AeroCom Phase II (Myhre et al., 2013). Even so, improvements of the radiation schemes in global climate models should be considered to provide more accurate calculations of present and future radiative forcing due to BC.
Figure 1. Geographical distribution of the ratio between annual mean RF of BC from the two-stream simulation relative to the eight-stream simulation for clear sky (upper) and all sky (lower).
Figure 2. (a) BC RF normalized by abundance, as a function of altitude. Solid lines: eight-stream simulations. Dashed lines: two-stream simulations. Colours represent all sky and clear sky conditions, and whether a full atmospheric simulation including Rayleigh scattering, water vapour and background aerosols was performed ("Full sim"), or if BC was the only radiatively active agent ("BC only"); (b) ratio of two-stream to eight-stream simulation results, for the four cases shown in (a).
Figure 3. RF as a function of surface albedo for various single scattering albedos (upper), and relative differences between two-stream and multi-stream simulations (lower). In cases where the sign of the two-stream and multi-stream simulations differs for a particular single scattering albedo, the results are left out of the lower panel.
Decoding the molecular landscape: A novel prognostic signature for uveal melanoma unveiled through programmed cell death-associated genes
Uveal melanoma (UM) is a rare but aggressive malignant ocular tumor with a high metastatic potential and limited therapeutic options, currently lacking accurate prognostic predictors and effective individualized treatment strategies. Public databases were utilized to analyze the prognostic relevance of programmed cell death-related genes (PCDRGs) in UM transcriptomes and survival data. Consensus clustering and Lasso Cox regression analysis were performed for molecular subtyping and risk feature construction. The PCDRG-derived index (PCDI) was evaluated for its association with clinicopathological features, gene expression, drug sensitivity, and immune infiltration. A total of 369 prognostic PCDRGs were identified, which could cluster UM into 2 molecular subtypes with significant differences in prognosis and clinicopathological characteristics. Furthermore, a risk feature PCDI composed of 11 PCDRGs was constructed, capable of indicating prognosis in UM patients. Additionally, PCDI exhibited correlations with the sensitivity to 25 drugs and the infiltration of various immune cells. Enrichment analysis revealed that PCDI was associated with immune regulation-related biological processes and pathways. Finally, a nomogram for prognostic assessment of UM patients was developed based on PCDI and gender, demonstrating excellent performance. This study elucidated the potential value of PCDRGs in prognostic assessment for UM and developed a corresponding risk feature. However, further basic and clinical studies are warranted to validate the functions and mechanisms of PCDRGs in UM.
Introduction
Melanoma is a malignant tumor originating from melanocytes, characterized by high invasiveness and metastatic potential. Based on the site of occurrence, melanoma can be classified into cutaneous melanoma, mucosal melanoma, and uveal melanoma (UM), among others. [1] UM is the most common primary intraocular tumor; although its incidence is relatively low, it has a poor prognosis, with an overall 5-year survival rate of only around 60%. [2] Currently, high-risk UM patients are primarily treated with radical surgeries such as enucleation, but a considerable proportion of patients ultimately develop distant metastases, with a median overall survival of only 1 year. [3] Therefore, accurately assessing the prognostic risk of UM patients and formulating individualized treatment strategies is crucial for improving their quality of life.
Programmed cell death (PCD) refers to the orderly process of cell death through intrinsic molecular mechanisms encoded by genes, playing a pivotal role in maintaining homeostasis, tissue development, and tumor progression. [4] Increasing evidence suggests that aberrant expression of PCD-related genes is closely associated with tumor initiation, progression, and metastasis. [5] For instance, dysregulation of the apoptosis-related BCL2 family members can lead to inappropriate survival of cancer cells [6]; deficiency in the mitochondrial quality control genes PINK1 and Parkin can cause mitochondrial dysfunction, promoting tumor progression [7,8]; and activation of key molecules such as GSDMD and MLKL in the inflammatory PCD pathway can induce immunogenic cell death, eliciting antitumor immune responses. [9,10] Therefore, in-depth exploration of molecular markers associated with PCD and their relationship with UM prognosis may provide new insights for individualized risk assessment and targeted therapy.
Institutional review board approval and informed consent were not required in the current study because research data are publicly available and all patient data are de-identified.
The authors have no conflicts of interest to disclose.
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
There is no need for informed consent in our study since the unidentified data were free from medical ethics review. In this study, we employed systems biology approaches to analyze transcriptomic and clinicopathological data of UM from public databases, aiming to elucidate the potential value of PCD-related genes (PCDRGs) in UM prognosis prediction. Based on the prognostic PCDRGs, we performed molecular subtyping of UM and constructed a risk signature to quantify the risk of poor prognosis. Additionally, we systematically analyzed the potential roles of PCDRGs in drug sensitivity and the tumor immune microenvironment. This study contributes to elucidating the potential roles of PCDRGs in tumor development, targeted drug development, and guiding personalized treatment.
Molecular subtyping
Univariate Cox regression analysis was performed to identify PCDRGs significantly associated with UM prognosis (P < .05), and consensus clustering analysis was conducted using the ConsensusClusterPlus [12] package. The partitioning around medoids clustering method and the "pearson" distance function were employed for the clustering analysis, and differences in prognosis and clinicopathological features among subtypes were analyzed. Principal component analysis (PCA) was performed using the prognostic PCDRGs.
Construction of risk signature
PCDRGs with P < .005 in the univariate Cox regression analysis were selected, and Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression analysis was performed using the glmnet [13] package to construct the PCDRG-derived index (PCDI), which was calculated as follows: PCDI = Σ(βi × expi), where βi is the coefficient of gene i, and expi is the expression level of gene i. Survival analysis and PCA were conducted to evaluate the performance of PCDI in molecular subtyping within the TCGA-UVM cohort.
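The PCDI defined above is simply a coefficient-weighted sum of expression values. The Python sketch below illustrates the calculation; the coefficients and expression values are hypothetical placeholders (the gene symbols are taken from the signature genes discussed later, but the numbers are not the values fitted by the study's LASSO Cox regression).

```python
# Hedged sketch of the PCDI calculation: PCDI = sum_i (beta_i * exp_i).
# Coefficients (beta) and expression levels below are hypothetical,
# not the values fitted in the study.

def pcdi(coefficients, expression):
    """Risk score: sum of coefficient * expression over the signature genes."""
    return sum(beta * expression[gene] for gene, beta in coefficients.items())

coefs = {"TWIST1": 0.42, "MMP9": 0.31, "SIRT3": -0.27}  # hypothetical betas
expr = {"TWIST1": 2.1, "MMP9": 1.4, "SIRT3": 3.0}       # hypothetical expression
score = pcdi(coefs, expr)
print(round(score, 3))  # → 0.506
```

Once such a score is computed for every patient, a cutoff (e.g., the cohort median) splits the cohort into High_PCDI and Low_PCDI groups for the downstream comparisons.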
Somatic mutation analysis
The maftools [14] package was used to analyze the somatic mutation characteristics of UM patients.
Drug sensitivity analysis
The pRRophetic [15] package was employed to analyze the sensitivity to 45 drugs using the pRRopheticPredict() function, and batch correction was performed using the ComBat method. The Wilcoxon test was used to evaluate the differences in drug sensitivity between the High_PCDI and Low_PCDI groups, and the correlation between PCDI and drug sensitivity was analyzed.
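The Wilcoxon (rank-sum) comparison between groups is built on a simple pairwise statistic. As a minimal sketch, the Python code below computes the Mann-Whitney U statistic underlying that test for two invented groups of predicted drug-sensitivity values; it omits the p-value calculation, and all numbers are illustrative only.

```python
def mann_whitney_u(x, y):
    """U statistic: count of pairs where x beats y, with ties counted as 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

high = [4.2, 5.1, 3.8]  # invented predicted IC50 values, High_PCDI group
low = [2.9, 3.1, 3.6]   # invented predicted IC50 values, Low_PCDI group
u = mann_whitney_u(high, low)
print(u)  # → 9.0 (every High_PCDI value exceeds every Low_PCDI value)
```

In practice the p-value would come from a statistics library (e.g., the Wilcoxon test used in the study) rather than from this raw statistic alone.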
Immune infiltration analysis
The IOBR [16] package, which includes algorithms such as CIBERSORT, EPIC, xCell, MCP-counter, ESTIMATE, TIMER, quanTIseq, and immune phenotype scores (IPS), was used to assess tumor immune cell infiltration in the TCGA-UVM cohort.
The Wilcoxon test was employed to evaluate the differences between the High_PCDI and Low_PCDI groups.
Enrichment analysis
The edgeR [17] package was used to identify differentially expressed genes between the High_PCDI and Low_PCDI groups, with P < .05 and |log(fold change)| > 2 as the cutoff criteria. Enrichment analysis, including Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses, was performed using the clusterProfiler [18] package, with a significance threshold of 0.05.
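The cutoff described above amounts to a simple filter on the differential-expression results table. The sketch below shows that filtering logic on a few invented records; the gene symbols and statistics are illustrative placeholders, not results from the study.

```python
# Each record: (gene symbol, log fold change, p-value). Invented examples.
results = [
    ("GZMB",  3.1, 0.001),
    ("CD3E",  1.2, 0.020),   # fails the |log(fold change)| > 2 cutoff
    ("ACTB", -0.1, 0.900),   # fails both cutoffs
    ("PRF1", -2.6, 0.004),
]

# Keep genes with P < .05 and |log(fold change)| > 2, as in the text.
degs = [gene for gene, lfc, p in results if p < 0.05 and abs(lfc) > 2]
print(degs)  # → ['GZMB', 'PRF1']
```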
Construction and evaluation of nomogram
Independent prognostic factors for UM were identified through multivariate Cox regression analysis and used to construct a nomogram.The rms package was employed for nomogram construction and visualization.The performance of the nomogram was evaluated using receiver operating characteristic curve analysis, calibration curve analysis, and decision curve analysis using the rmda package.
Molecular subtyping of UM based on prognostic PCDRGs
Among the 1560 PCDRGs analyzed, 369 genes exhibited a significant association with UM prognosis (Table S2, Supplemental Digital Content, http://links.lww.com/MD/M355). Consensus clustering analysis based on these prognostic PCDRGs identified 2 distinct UM subtypes, designated as cluster 1 and cluster 2 (Fig. 1A-C). Survival analysis revealed a significantly worse prognosis for patients in cluster 1 compared to cluster 2 (P < .0001, Fig. 1D). PCA demonstrated clear separation between the 2 molecular subtypes (Fig. 1E), indicating that these prognostic PCDRGs can effectively stratify UM patients with favorable and unfavorable outcomes. Notably, cluster 1, associated with poorer prognosis, exhibited a higher proportion of stage III and IV UM patients compared to the better-prognosis cluster 2 (Fig. 1F).
Relationship between PCDI and clinicopathological characteristics of UM patients
A heatmap was generated based on the normalized gene expression levels to compare the differences in gene expression and clinicopathological features between the High_PCDI and Low_PCDI groups (Fig. 3A). Notably, the Low_PCDI group had fewer deceased cases and a lower proportion of patients with stage III and IV disease compared to the High_PCDI group. Furthermore, deceased UM patients exhibited higher PCDI values compared to alive UM patients. Additionally, stage IV patients had higher PCDI values than stage II and III patients (Fig. 3B).
Relationship between PCDI and gene expression in UM
To investigate the relationship between PCDI and gene expression in UM, we identified differentially expressed genes (DEGs) between the High_PCDI and Low_PCDI groups, resulting in 867 DEGs (P < .05, |log(fold change)| > 2). Enrichment analysis revealed that these DEGs were associated with pathways related to cytokine signaling, T-cell differentiation, and antigen processing and presentation (Fig. 5A). Furthermore, Gene Ontology enrichment analysis demonstrated that these genes were involved in biological processes such as mononuclear cell differentiation, lymphocyte differentiation, cell killing, and T-cell differentiation (Fig. 5B), suggesting differences in the immune status between the High_PCDI and Low_PCDI groups.
Relationship between PCDI and drug sensitivity
We evaluated the sensitivity of the TCGA-UVM cohort to 45 drugs using pRRophetic, and the results showed significant differences in drug sensitivity between the High_PCDI and Low_PCDI groups for 25 drugs. The High_PCDI group exhibited lower sensitivity to bortezomib, cisplatin, gefitinib, lapatinib, nilotinib, and temsirolimus, but higher sensitivity to the other 19 drugs (Fig. 6A). Figure 6B illustrates the correlation between PCDI, its constituent genes, and drug sensitivity, where PYCARD, ACP5, PRKCD, MYH14, and SIRT3 exhibited an opposite correlation trend compared to the other genes and PCDI.
Relationship between PCDI and tumor immune microenvironment
We analyzed the tumor microenvironment in the TCGA-UVM cohort and found that PCDI was negatively correlated with most tumor-infiltrating immune cells. Compared to the High_PCDI group, the Low_PCDI group had higher infiltration levels of CD4 T cells, activated NK cells, macrophages, activated dendritic cells, activated mast cells, and neutrophils (Fig. 7A). Additionally, the High_PCDI group had higher StromalScore, ImmuneScore, and ESTIMATEScore, and lower tumor purity compared to the Low_PCDI group (Fig. 7B). Figure 7C shows the scatter plot of the correlation between PCDI and IPS, indicating a positive correlation with MHC_IPS, EC_IPS, and AZ_IPS, and a negative correlation with SC_IPS and CP_IPS.
Nomogram for UM patients based on PCDI
In the TCGA-UVM cohort, PCDI, age, and stage were identified as prognostic factors for UM. However, after performing multivariate Cox regression analysis, PCDI and gender emerged as independent prognostic factors (Table 1). Consequently, we constructed a nomogram incorporating PCDI and gender to predict the 1- and 2-year overall survival of UM patients (Fig. 8A).
Receiver operating characteristic curve analysis demonstrated the excellent predictive performance of the nomogram, with area under the curve values of 0.959 and 0.967 for predicting 1- and 2-year overall survival, respectively (Fig. 8B). Figure 8C illustrates the calibration curves of the nomogram for predicting 1- and 2-year overall survival. Notably, compared to other prognostic factors, the nomogram exhibited a higher standardized net benefit in predicting 1-year overall survival for UM patients, indicating its superior prognostic performance over alternative approaches (Fig. 8D).
Discussion
Accumulating evidence suggests that PCDRGs play a crucial role in cancer initiation and progression, and can be utilized for molecular subtyping and prognostic assessment of cancers. However, their value in UM remains unclear. In this study, 1560 PCDRGs were analyzed for their association with UM prognosis, and 369 genes were identified as prognostic PCDRGs. These prognostic PCDRGs were further employed for molecular subtyping and prognostic assessment of UM, ultimately leading to the construction of the PCDI risk signature. These findings highlight the potential value of PCDRGs as prognostic markers and potential therapeutic targets in UM.
The PCDI, comprising 11 PCDRGs, has been validated for its potential utility in prognostic assessment of UM patients, suggesting potential roles of these genes in UM initiation and progression. Among them, TWIST1 encodes a transcription factor that promotes epithelial-mesenchymal transition, tumor invasion, and metastasis when overexpressed. [19-22] Inactivation of SIRT3 can promote tumor metabolic reprogramming and genomic instability. [23] Aberrations in PRKCD and MYH14 can lead to a dysregulated cell cycle and enhanced tumor cell migration. [24,25] On the other hand, MMP9 encodes matrix metalloproteinase 9, which participates in extracellular matrix degradation and plays a crucial role in tumor invasion and metastasis. [26,27] Additionally, ACP5, encoding tartrate-resistant acid phosphatase 5, is involved in bone metabolism, and its dysregulation may be related to bone metastasis. [33] Given the known roles of these genes in cancer, further investigation into their functions and mechanisms in UM is warranted.
Accumulating evidence suggests that the tumor microenvironment plays a pivotal role in cancer initiation and progression, and numerous studies have demonstrated a close association between the infiltration status of immune cells in the tumor microenvironment and patient prognosis. For instance, Clemente et al [34] found that higher levels of CD8+ T-cell and CD20+ B-cell infiltration in melanoma tumor tissues were associated with longer overall survival in patients. Our study results revealed a negative correlation between PCDI and the infiltration of various immune cells, suggesting that PCDI may influence patient prognosis by modulating the tumor immune microenvironment. Moreover, we found that patients in the high PCDI group had higher StromalScore, ImmuneScore, and ESTIMATEScore, but lower tumor purity. [37] Therefore, PCDI may impact the prognosis of UM patients by regulating the stromal and immune cell components within the tumor microenvironment. Notably, PCDI exhibited different correlation patterns with the various immune phenotype scores (IPS), which may reflect its differential roles in regulating distinct immune pathways. In recent years, IPS has been widely employed to assess tumor immune status and predict responses to immunotherapy; for example, Charoentong et al [38] utilized IPS to evaluate the immunogenicity and immune infiltration of different cancers. Thus, investigating the associations between PCDI and different IPS could shed light on its underlying mechanisms in immune regulation.
Lastly, we developed a nomogram for UM based on PCDI to facilitate clinical application. However, this study has several limitations. Firstly, the PCDI, derived from retrospective transcriptomic data analysis, has not been validated in prospective studies, limiting its clinical utility. Secondly, the functions and mechanisms of the genes constituting PCDI in UM are largely unknown, necessitating further in vitro and in vivo experimental validation. Finally, by constructing the prognostic model solely on the basis of gene expression data, without integrating other multi-omics data (e.g., gene mutations, DNA methylation), the heterogeneity of cancer may not be comprehensively captured.
Conclusion
In conclusion, this study provides novel insights into the prognostic relevance of PCDRGs in UM and their potential applications in molecular subtyping, prognostic assessment, and personalized treatment strategies. The constructed PCDI risk signature and the developed nomogram hold promise for improving risk stratification and guiding clinical decision-making for UM patients. However, further research is required to validate the clinical utility of PCDI and elucidate the underlying mechanisms of PCDRGs in UM pathogenesis.
Figure 1. Molecular subtyping of the TCGA-UVM cohort based on prognostic PCDRGs. (A)-(C) Consensus clustering based on 369 prognostic PCDRGs. (D) Kaplan-Meier survival curves and log-rank test between the cluster 1 and cluster 2 subtypes. (E) Principal component analysis based on prognostic PCDRGs. (F) Distribution of clinicopathological features, including status, stage, and age, between the cluster 1 and cluster 2 subtypes. PCDRGs = programmed cell death-related genes, TCGA = The Cancer Genome Atlas.
Figure 2. Construction of the PCDI risk signature based on prognostic PCDRGs. (A) and (B) LASSO Cox regression analysis based on 111 prognostic PCDRGs identified 11 risk genes. (C) Coefficients of the 11 PCDI-related genes. (D) Kaplan-Meier survival curves and log-rank test between the High_PCDI and Low_PCDI groups. (E) Principal component analysis based on the 11 PCDRGs. LASSO = Least Absolute Shrinkage and Selection Operator, PCDI = PCDRG-derived index, PCDRGs = programmed cell death-related genes, TCGA = The Cancer Genome Atlas.
Figure 4.
Figure 4. Relationship between PCDI and somatic mutation features of UM patients. (A) Oncoplot of somatic mutations in the High_PCDI group of UM patients. (B) Oncoplot of somatic mutations in the Low_PCDI group of UM patients. (C) Scatter plot of the correlation between PCDI and TMB. (D) Comparison of TMB between the High_PCDI and Low_PCDI groups. ns = not significant, PCDI = PCDRG-derived index, TMB = tumor mutational burden, UM = uveal melanoma.
Figure 5.
Figure 5. Relationship between PCDI and gene expression regulation in UM patients. (A) KEGG enrichment analysis of differentially expressed genes between the High_PCDI and Low_PCDI groups. (B) GO enrichment analysis of differentially expressed genes between the High_PCDI and Low_PCDI groups. GO = Gene Ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes, PCDI = PCDRG-derived index, UM = uveal melanoma.
Figure 8.
Figure 8. Nomogram construction for the TCGA-UVM cohort based on PCDI. (A) PCDI and gender, identified as independent prognostic factors, were used to construct a nomogram for predicting the 1- and 2-year overall survival of UM patients. (B) ROC curve analysis of the nomogram for predicting 1- and 2-year overall survival of UM patients. (C) Calibration curves of the nomogram for predicting 1- and 2-year overall survival of UM patients. (D) Decision curve analysis of the nomogram and other prognostic factors for predicting 1-year overall survival of UM patients. PCDI = PCDRG-derived index, ROC = receiver operating characteristic, TCGA = The Cancer Genome Atlas, UM = uveal melanoma.
Table 1
Detailed results of the univariate and multivariate Cox regression analysis.
Education 4.0: An Analysis of Teachers' Attitude towards the Use of Technology in Teaching
Abstract — This study investigates mathematics teachers' readiness towards Education 4.0 and their attitude towards the use of technology in teaching mathematics. The study participants included 162 mathematics teachers in Kota Bharu, Kelantan. A quantitative approach with a questionnaire was employed in this study. Data collected were analysed using descriptive statistics, t-tests and analysis of variance in Statistical Package for the Social Sciences version 27. The overall results for mathematics teachers' readiness towards Education 4.0 point to uncertainty, which indicates that teachers are still not prepared and lack knowledge about Education 4.0. When grouped according to age, gender and grade level taught, the results show no significant difference in teachers' readiness towards Education 4.0. Meanwhile, for attitude, the results reveal that teachers have a positive attitude towards the use of technology in teaching mathematics, regardless of their age, gender or grade level taught. However, there is a significant difference between male and female teachers' attitudes. For age and grade level taught, the results show no significant difference in mathematics teachers' attitudes towards the use of technology in teaching the subject. Therefore, the study recommends that teachers should strengthen their positive attitude towards the use of technology in teaching and learning, and that the higher authorities should also participate in providing teachers knowledge on Education 4.0 and helping them adapt to current educational developments.
I. INTRODUCTION
The Industrial Revolution 4.0 (IR 4.0), also known as the Fourth Industrial Revolution, has brought many changes in various aspects of our life in recent years. IR 4.0 undoubtedly brought about rapid changes in the way people live, work, communicate and interact. IR 4.0 also had a large impact on industries, and education is one industry affected by it.
With the advent of IR 4.0, the role of education has changed, and it called for emerging needs. IR 4.0 brings us the development of Education 4.0, a term used by theorists to describe the various ways in which technology is integrated into the educational process. Hoyles and Lagrange [1] state that technology is the thing that most affects the education system in the world today. This is because of the effectiveness, efficiency and attractiveness offered by digital technology-based learning.
With the advent of IR 4.0, the level of readiness of teachers in facing Education 4.0 is one of the most important aspects to address changes towards the effective teaching and learning of mathematics. In addition, with our current global crisis due to the spread of the coronavirus disease 2019 epidemic, learning and teaching have been delayed, which has made teachers' tasks increasingly challenging as teachers have to adapt to new life norms and habits that require online teaching and learning. The role of technology has also become more important, especially in the field of education as the basis for the transmission of knowledge.
The issue of changes in teachers' attitude in teaching mathematics owing to the new norms of education has attracted much interest from many researchers. Therefore, we decided to conduct a study on mathematics teachers' readiness towards Education 4.0 and their attitude towards the use of technology in teaching mathematics. We focused on primary and secondary school mathematics teachers in Kota Bharu, Kelantan.
Six affective variables, namely, knowledge and awareness, mathematics confidence, confidence with technology, attitude to learning mathematics with technology, behavioural engagement and affective engagement, were used in this study to measure mathematics teachers' readiness towards Education 4.0 and their attitude towards the use of technology in teaching the subject.
According to Lai, Chundra and Lee [2], educators' knowledge and awareness of the IR 4.0 context are still unclear. Teachers lack confidence in applying IR 4.0 in teaching and still face difficulties in adapting to new education reforms. Regarding mathematics confidence, Dance and Kaplan [3] state that mathematical confidence is characterised by a readiness to persevere, a positive attitude towards mistakes, a willingness to take chances and self-reliance, all of which are traits of a growth mind-set. Teacher confidence is significant not just because it has been linked to the quality of education, as observed by Stipek, Givvin, Salmon and MacGyvers [4], but also because it has the ability to reproduce positive or negative effects in students.
Confidence with technology is regarded as the ability of using technological tools or software in the teaching and learning process. According to Smith [5], teachers who use technology wisely can broaden the knowledge of every student, from the gifted student to the student who requires a different medium to learn. Meanwhile, attitude in learning mathematics refers to how technology improves teaching and learning mathematics. Mathematics is the foundation of all technologies, and technologies aid in the teaching of mathematics.
Behavioural engagement is focused more on the classroom established by mathematics teachers and the teacher-student interaction in class when teaching mathematics using technology. Behavioural engagement is concerned with levels of participation and involvement in school-related academic, social, or extra-curricular activities [2]. Affective engagement is indicated as the teacher's feeling of engagement towards the mathematics teaching and learning process during this new mode of teaching by using technology to teach the subject.
This study aimed to investigate teachers' readiness towards Education 4.0 and their attitude towards the use of technology in teaching mathematics using a questionnaire. Specifically, this research was conducted to: (1) determine the significant difference in teachers' knowledge and awareness towards Education 4.0 when grouped according to age, gender and grade level taught; and (2) determine the significant difference in teachers' attitude towards the use of technology in teaching mathematics when grouped according to age, gender and grade level taught. It is hoped that this study will be the starting point for further research in measuring teachers' attitudes towards the use of technology in teaching and learning for all subjects, not just mathematics. Further, it is hoped that this study will be a reference for the Ministry of Education, Malaysia, and teachers themselves on the importance of teachers' attitudes towards the use of technology along with current educational developments, Education 4.0.
II. METHOD
In this study, the researchers used a quantitative approach. A survey method was used to collect data by distributing questionnaires to primary and secondary school teachers in Kota Bharu, Kelantan.
The sample of this research was taken from a group of 86 primary school mathematics teachers and 306 secondary school mathematics teachers in Kota Bharu, Kelantan. The schools were randomly selected. A total of 162 respondents voluntarily participated in this study. The respondents were grouped according to profile variables such as age, gender and grade level taught (primary, lower secondary and upper secondary), as shown in Table I. Table I shows that 19 respondents (11.7%) were aged 31-40 years, 77 (47.5%) were 41-50 years and 66 (40.7%) were above 51 years. This shows that the majority of participants in this research were aged 41 years and above. A total of 42 respondents (25.9%) were males, whereas 120 (74.1%) were females.
The research instrument utilised in this study was the Mathematics and Technology Attitude Scale, developed by Pierce, Stacey and Barkatsas [6], which monitors five affective variables: mathematics confidence, confidence with technology, attitude to learning mathematics with technology, behavioural and affective engagement. This questionnaire was originally developed for middle secondary year students but was later modified by Marpa [7] to suit mathematics teachers as participants.
For this study, the amended questionnaire by Marpa [7] was adjusted to suit our research purposes. Fourteen more items were added to the original 20-item test, which are related to the knowledge and readiness of mathematics teachers in facing Education 4.0. These items were scored on a 5-point Likert scale.
To establish reliability, the modified research instrument was pilot tested on 15 mathematics teachers, both private and public, who were not included in the study's actual respondents. Cronbach's alpha was used to determine reliability, and the alpha coefficient was found to be 0.95. This coefficient indicates that the research instrument was reliable.
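The reliability check described above can be reproduced with the standard Cronbach's alpha formula. The sketch below uses the pilot dimensions mentioned in the text (15 respondents, 34 Likert items), but the scores themselves are simulated from a shared latent trait, so the resulting coefficient is only illustrative of the computation, not the study's reported 0.95.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 15 respondents x 34 items on a 5-point Likert scale,
# correlated through a shared latent trait so the items hang together.
rng = np.random.default_rng(0)
trait = rng.normal(3, 1, size=(15, 1))
scores = np.clip(np.rint(trait + rng.normal(0, 0.7, size=(15, 34))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```

Because every item here is driven by the same latent trait, the coefficient comes out high; with real, noisier questionnaire data it would typically be lower.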
The collected data were then analysed using the Statistical Package for the Social Sciences software version 27. Descriptive analyses were conducted on the variables in the questionnaire. After the data were collected, the researchers analysed the background of respondents by frequency and percentage. The Mean (M) and Standard Deviation (SD) were used to measure the mathematics teachers' readiness towards Education 4.0 and their attitude towards the use of technology in teaching mathematics according to their age, gender and grade level taught. Meanwhile, inferential analyses, which included the t-test and one-way analysis of variance (ANOVA), were used to determine the significant difference in mathematics teachers' readiness towards Education 4.0 and their attitude towards the use of technology in teaching the subject.
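The same two inferential tests can be run outside SPSS, e.g. with scipy. In the sketch below, the group sizes follow Table I (42 male / 120 female; 19, 77 and 66 respondents per age band), but the attitude scores are simulated placeholders, so the resulting statistics and p-values are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Independent-samples t-test: hypothetical mean attitude scores by gender.
male = rng.normal(3.8, 0.5, 42)
female = rng.normal(4.0, 0.5, 120)
t, p_t = stats.ttest_ind(male, female, equal_var=True)

# One-way ANOVA: hypothetical mean attitude scores by age group.
age_31_40 = rng.normal(3.7, 0.5, 19)
age_41_50 = rng.normal(3.9, 0.5, 77)
age_51_up = rng.normal(3.9, 0.5, 66)
f, p_f = stats.f_oneway(age_31_40, age_41_50, age_51_up)

print(f"t = {t:.2f}, p = {p_t:.3f}; F = {f:.2f}, p = {p_f:.3f}")
```

A p-value below 0.05 in either test would correspond to the "significant difference" criterion used throughout the results section.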
III. RESULTS AND DISCUSSION
Descriptive analyses were conducted to evaluate the mean and SD of mathematics teachers' knowledge and awareness towards Education 4.0. Table II shows the results of our descriptive analyses. The results in Table II show that, when it comes to knowledge and awareness towards Education 4.0, mathematics teachers are uncertain.
This demonstrates that mathematics teachers in Kota Bharu are still unfamiliar with Education 4.0 and are unsure how to adapt to this new way of teaching. Therefore, we can conclude that all teachers, regardless of their age, gender or grade level, are still unfamiliar with and lack knowledge about Education 4.0. According to Sani [8], lecturers' understanding of the IR 4.0 context still remains unclear. Razak, as cited by Lai, Chundra and Lee [2], stated that the fundamental issue is that most lecturers do not understand the rationale for the changes or what role they must play in implementing IR 4.0-based teaching and learning. He also wrote that this was agreed upon by Syarifuddin and Halim [8], who stated that educators are unaware of the most recent changes and do not see the need to modify their teaching tasks. As a result, any changes to the curriculum or education system must be widely disseminated and explained clearly to educators [9]-[11].
Mathematics Teachers' Knowledge and Awareness Towards Education 4.0 based on Gender, Age and Grade Level
Independent sample t-tests were used to assess the teachers' knowledge and awareness towards Education 4.0. Table IV shows the t-test results of male and female teachers' knowledge and awareness towards Education 4.0, indicating that there was no significant difference between male and female mathematics teachers when a t-test for independent means was used. Table V shows that there was no significant difference in teachers' knowledge and awareness towards Education 4.0 across age groups and grade levels taught when ANOVA was used to assess this variable.
Table VI reveals that mathematics teachers were positive towards the use of technology in teaching mathematics. This shows that they believed that using technology to teach mathematics, especially during the Education 4.0 phase, was the best approach to improve mathematics teaching and learning. In addition, teachers were also positive towards learning mathematics using technology, and they believed that using technology to teach mathematics in this new normal education can increase their confidence in teaching the subject. Table VII reveals that mathematics teachers' attitude towards the use of technology in teaching mathematics was positive across all age groups: 31-40 years, 41-50 years, and 51 years and above. Teachers aged 41-50 years were strongly positive in terms of behavioural engagement. Meanwhile, teachers aged 31-40 years also had a positive attitude in all categories except confidence with technology, for which the result was uncertain. Hence, the results presented in this table show that, overall, teachers aged 41-50 years and 51 years and above have a better attitude towards using technology in teaching mathematics. Table VIII shows that both male and female mathematics teachers were positive towards the use of technology in teaching mathematics. Male teachers were mostly positive towards using technology in teaching mathematics except for confidence with technology. Female teachers likewise showed a positive attitude towards the use of technology in teaching mathematics but, like their male counterparts, showed uncertainty when it comes to confidence with technology. The results presented in this table show that, overall, female mathematics teachers have a better attitude towards using technology in teaching the subject.
Table IX indicates that teachers' attitude towards the use of technology in teaching mathematics, when grouped according to grade level taught, was positive for primary school teachers, lower secondary school teachers and upper secondary school teachers alike. In almost all categories, primary and lower secondary school teachers showed a positive attitude, except for confidence with technology, which had uncertain results. Meanwhile, for upper secondary school teachers, Table IX shows that they were strongly positive in terms of behavioural engagement but uncertain in terms of confidence with technology. Thus, this result shows that, overall, upper secondary school teachers have a better attitude towards using technology in teaching mathematics.
Mathematics Teachers' Attitude Towards the Use of Technology in Teaching Mathematics Based on Age, Gender and Grade Level Taught Table X shows the ANOVA results of mathematics teachers' attitude towards the use of technology in teaching the subject when grouped according to their age. Table XI shows that there was a significant difference in mathematics teachers' attitude towards the use of technology in teaching mathematics when a t-test for independent means was used to assess this variable; when categories were considered, significant differences were observed only in terms of behavioural engagement. Table XII reveals the ANOVA results of primary, lower secondary and upper secondary school teachers' attitude towards the use of technology in teaching mathematics. Table XII shows that there was no significant difference in primary, lower secondary and upper secondary school teachers' attitude towards the use of technology in teaching mathematics when ANOVA was used to assess this variable. However, when categories were considered, significant differences were observed only in terms of mathematics confidence; there was no significant difference in almost all the other categories tested.
IV. CONCLUSION
The Fourth Industrial Revolution has led to many changes in human life. IR 4.0 has changed aspects of human life in terms of economy, politics, education and others, with many positive influences. However, there are also many shortcomings. Therefore, it is important for everyone to be prepared by equipping themselves with knowledge and readiness to face IR 4.0. In the aspect of education, teachers need to be prepared by improving their attitudes towards the use of technology in this new normal education. Teachers also need to adapt to the current situation in which they face technology-literate students, and adapt themselves to various technological methods that can be utilised in the teaching and learning process.
Crassula ovata, a new alien plant for mainland China
Crassula ovata, a new alien plant for mainland China.— Crassula ovata, the jade plant, is reported for the first time from mainland China. Two small populations have been discovered in the downtown of the city of Chengdu (Sichuan Province, western China).
The genus Crassula comprises nearly 200 species mainly distributed in southern Africa (its center of distribution), with some species occurring in other parts of Africa or other parts of the world (Jaarsveld, 2003). Perhaps the best known species within the genus is Crassula ovata (Mill.) Druce, the jade plant, which is cultivated everywhere as an ornamental plant. According to Jaarsveld (2003), Crassula ovata is native to South Africa (Eastern Cape and KwaZulu-Natal provinces). However, it also occurs in other regions of southern Africa, such as Mozambique and Swaziland, where it is also probably native (Invasive Species Compendium, 2015). It is present in the wild (casual or naturalized) in other territories of Africa (Canary Islands, Madeira), Europe (Spain, Italy), America (California in the United States, Mexico), and Oceania (Hawaii, Australia, New Zealand) (DAISIE, 2015; GBIF, 2015; Invasive Species Compendium, 2015), likely as the result of its use as an ornamental; C. ovata has been grown beyond its native range as an ornamental (usually under the synonym Crassula portulacea Lam.) since the eighteenth century due to its beauty, easy propagation (from stem or leaf cuttings), and beliefs (it brings "good financial luck"; Malan & Notten, 2005). Despite its capability to spread, it is not a serious weed, with no records of significant invasions (Invasive Species Compendium, 2015) except for some areas (e.g. in coastal areas of Valencia, Spain; Ferrer & Donat, 2011).
According to all major regional taxonomic works (Flora of China, Flora Reipublicae Popularis Sinicae, Flora of Taiwan, Flora of Hong Kong), not only Crassula ovata but the whole genus is absent in China (including Taiwan). Moreover, C. ovata is not included in any of the lists or compendiums on alien plants in China published during the last decade (e.g. Wu et al., 2004, 2010a, b; Lin et al., 2007; Weber et al., 2008; Fang & Wan, 2009; Jiang et al., 2011; Xu et al., 2012; Axmacher & Sang, 2013; Yan et al., 2014). Wild occurrences of the jade plant are also not reported in any of the major databases, information systems, and citizen science projects focused on China, including the Global Biodiversity Information Facility (GBIF; www.gbif.org/), Chinese Virtual Herbarium (CVH; www.cvh.ac.cn), Taiwan Biodiversity Information Facility (TaiBIF; www.taibif.tw), iNaturalist (www.inaturalist.org), Chinese Field Herbarium (CFH; www.cfh.ac.cn), and Plant Photo Bank of China (PPBC; www.plantphoto.cn). However, C. ovata is included in the Check List of Hong Kong Plants (Hong Kong Herbarium, 2004) under one of its synonyms (Crassula argentea Thunb.).
In the course of a field investigation in tropical and subtropical areas of China, we observed two populations of C. ovata in the city of Chengdu (Sichuan Province, SW China). Thus, these populations apparently represent new records for mainland China. The identification of C. ovata is straightforward, as the species is very characteristic even in the absence of flowers (by its jade-green obovate leaves of 3-9 cm long, often with reddish acute margins; Jaarsveld, 2003). Both populations are located in the downtown; one is composed of a small colony of a dozen vegetative individuals on a small roof at a building façade (accompanied by Kalanchoe daigremontiana Raym.-Hamet & H. Perrier, also a common invader; Fig. 1) in Wuhou District (near the Sichuan University campus); the second one consisted of just 4-5 vegetative individuals (stems), also on a small roof at a building façade, in Qingyang District (near Wenshu Temple; Fig. 1). Since we observed Crassula ovata cultivated as a pot plant in several places of Chengdu, these wild populations are likely escapes from private gardens. Crassula ovata should thus be regarded as casual in China, but special attention should be paid to its potential for naturalization. We believe that there is a considerable risk of naturalization, given that the plant is cultivated in many places of China (the PPBC hosts several images of C. ovata planted in pots in at least 13 provinces), its ease of propagation (even single leaves can produce roots and grow into new plants; Invasive Species Compendium, 2015), and its tolerance to a wide range of temperature and humidity (even tolerating light frost; Mahr, 2010).
The recording of a new alien species for the Chinese flora such as C. ovata is not a rare event and should be seen as part of the acceleration of plant invasions that is affecting the country (mainly as a consequence of the economic boom; e.g. Ding et al., 2008). The lists of invasive and naturalized plant species have increased several-fold in just two decades (Jiang et al., 2011; Axmacher & Sang, 2013), and such exponential growth is expected to continue in the future (Xu et al., 2012; Kleunen et al., 2015). A large part of the naturalized plant species in China (over 40%) have been introduced intentionally as ornamentals (Wu et al., 2010a), and the latest tendencies in gardening and landscaping in China are clearly biased towards alien species. In Beijing, for example, half of the plant species grown in urban green spaces are of alien origin (Wang et al., 2012), and many invasives were involved in the "greening" of the city for the 2008 Olympic Games (Wang et al., 2011).
Figure 1.
Figure 1. Observed populations of Crassula ovata from Chengdu (Sichuan, China): left, from Wuhou District (note that there is an individual of Kalanchoe daigremontiana on the right); right, from Qingyang District (Photographs: J. López-Pujol).
Tackling Complexity of the Just Transition in the EU: Evidence from Romania
The process of reaching carbon neutrality by 2050 and cutting CO2 emissions by 2030 by 55% compared to 1990, as per the EU Green Deal, is highly complex. The energy mix must be changed to ensure long-term environmental sustainability, mainly by closing down coal sites, while preserving the energy-intensive short-term economic growth, ensuring social equity, and opening opportunities for regions diminishing in population and potential. Romania is currently in the position of deciding the optimal way forward in this challenging societal shift while morphing to evidence-based policy-making and anticipatory governance, mainly in its two coal-mining regions. This article provides possible future scenarios for tackling this complex issue in Romania through a three-pronged, staggered methodology: (1) clustering Romania with other similar countries from the point of view of the Just Transition efforts (i.e., the energy mix and the socio-economic parameters), (2) analyzing Romania's potential evolution of the energy mix from the point of view of the thermal efficiency of two major power plants (CEH and CEO) and the systemic energy losses, and (3) providing insights on the socio-economic context (economic development and labor market transformations, including the effects on vulnerable consumers) of the central coal regions in Romania.
Introduction
Europe is moving decisively forward with energy transition in pursuit of its goal of carbon neutrality by 2050. However, clean energy has to be backed by an equally important commitment to ensuring the security of energy supplies and equitable alternatives for the communities that are economically hit by this transition. The Just Transition Mechanism represents the EU's 150-billion-euro effort to ensure that the transition toward a climateneutral economy happens "in a fair way, leaving no one behind" [1]. Given the economic and strategic complexities faced by member states, we argue that such a financing tool has to be pointed in the right direction, targeting key specific issues at a national and local level. To do so, this article presents a diagnostic methodology tested on the case study of Romania. We build on both national and local level data and showcase both specific factors and broader regional trends related to energy-mix, energy production capacity, energy efficiency, pollution, and employment.
Cities, regions, and countries have started to track various indicators reflecting the life of their communities. This process leads to better-informed decisions in the public space. It also renders governments more accountable to their constituencies for their performance in office. Such tracking of indicators is furthered by the transition to smart cities as data becomes more readily available and transparency becomes the norm. This is not always possible for less developed regions, where both solutions and data are harder to find. Evidence-based policy-making is further limited by the need to integrate, apart from data, complex and shifting perspectives of stakeholders.
As we argue in this article, a necessary step forward in the Just Transition policy-making process is to involve real-time management of decisions, including corrections and simulations of large-scale collaborative models such as anticipatory governance. Defined as "a broad-based capacity extended through society that can act on a variety of inputs to manage emerging knowledge-based technologies while such management is still possible" [2], anticipatory governance allows for current long-range actions. This staged process zooms in and out, from micro-communities to the macro-supranational, continent-wide level, as is the case with the European Union and its long-term sustainability planning. The Just Transition framework prescribes national governance behavior, but the targets are to be achieved only by looking at the local communities' specificities.
The complexity of such a process comes from the multitude of actors involved, the possible evolutions of the environment and the ecosystems (natural, business), and the high rise in uncertainty. Thus, anticipatory decision-making, understood as a data-driven process, becomes necessary in order to tackle such a task. Anticipatory studies, particularly in sustainability governance [3], relate to how various future paths link and shape current policies. Although our analysis focused on the Just Transition Mechanism in which decisions are made at the supranational or intergovernmental level, anticipatory governance at the local level is still needed to allow for the optimal implementation of the Just Transition.
Central and Eastern Europe (CEE) is facing the dual challenge of energy transition and economic catch-up with older member states [4]. The tension between energy transition and economic development is obviously not specific to CEE, as it can also be found in Latin America [5] or Asia [6]. Still, in CEE, it informs the implementation of energy transition instruments such as the Just Transition Mechanism.
In adopting the Green Deal [7][8][9][10], developmental divides between older and newer member states (NMS) are a weakness. Despite the Just Transition Fund, considering their structural vulnerabilities and economic dependency [11], the green transition's effect could have a more significant negative impact on NMS.
Therefore, it is imperative to account for these regional specificities in CEE countries like Romania. Without pretending to go fully anticipatory, in a classical manner, our diagnostic analysis represents a first step in the development of an evidence-based policy-making for the Just Transition of coal regions in Romania.
With a significantly increasing contribution of renewables and nuclear energy, Romania will have to decommission by 2040 all of its currently installed thermal power generating capacity, which is theoretically possible according to recent simulations [12]. However, there is still an ongoing discussion about the transition's socio-economic impact and how the domestic energy needs will be met. Our data analysis shows both the urgency of the transition in terms of pollution and the low energy efficiency of the existing coal-based energy production plants. We nevertheless acknowledge that, given the complexity of the situation, politics will play a significant role.
The rest of this article is structured as follows. Section 1 engages with the literature on energy transition in the EU, and Section 2 presents our methodological steps (including the aim of the study). Section 3 contains the data analysis, structured on a three-pronged approach of an extensive comparative clustering analysis of all member states on the parameters that are relevant to the Just Transition process as well as the two in-depth case studies on energy production plants and regional transitional challenges in the coal-regions of Romania. Finally, Section 4 concludes by discussing the relevance of our findings to the broader evidence-based decision-making process at the national and European level.
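The clustering step of the methodology — grouping member states by energy-mix and socio-economic parameters — could be sketched as standardizing the indicators and applying k-means. The feature matrix below is random placeholder data, and the indicator choice and number of clusters are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per EU member state, columns standing
# in for indicators such as coal share of the energy mix, GDP per capita, and
# coal-sector employment share (values are illustrative, not real data).
rng = np.random.default_rng(2)
X = rng.normal(size=(27, 3))

# Standardize so that indicators on different scales contribute equally,
# then partition the countries into k clusters of similar profiles.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)
print(labels)
```

Countries sharing a label would then be treated as facing comparable Just Transition challenges, with Romania's cluster identifying its closest peers.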
The Complex Issue of Energy Transition-Literature Review
The EU has piloted a series of policy reforms over recent years and is now pursuing a much more comprehensive program in the form of the Green Deal, essentially defined as "a new growth strategy". The Just Transition Fund, a vital instrument for the delivery of the European Green Deal with €40 billion behind it, aims to mobilize at least €150 billion in investments over 2021-2027 in the most affected regions, divided into three pillars [13]. The Green Deal requires an ambitious approach to reshape the way we live and work within the EU [9]. This, in turn, requires concrete evidence on the capabilities and vulnerabilities of the energy sector and, more broadly, of the socio-economic perspective of local communities.
The Green Deal builds upon a desideratum for a reformed European society, one that functions resiliently in congruence with nature, fosters innovation and individual freedoms, and mitigates the risk of divergent speeds of development. However, this transition is by far one of the most complex endeavors the Union has had to undertake. This complexity stems from the heterogeneity of the actors involved (Member States do not have similar circumstances concerning sustainability or economic development), the diversity of approaches to societal shifts, and the different speeds at which current societal configurations can change. The literature on societal shifts (or socio-technical transitions) relies on two pillars: (1) the multi-level perspective (MLP), from the seminal works of Rip and Kemp (1998) [14], followed by the consistent developments by Geels (2002) [15], (2004) [16], (2005) [17] (with Schot, 2007 [18] and 2008 [19]), (2010) [20], (2011) [21], (2014) [22], (2019) [23], and (2) the works of Hagel, Seely Brown, and Davison (2009) [14] and Denning, Hagel, Seely Brown, and Davidson (2012) [24] regarding the Shift Index. Both pillars (with their respective criticisms) provide multi-level approaches with three levels:
•	The MLP distinguishes between niches, socio-technical regimes, and a socio-technical landscape. It also describes transition as a regime shift, relying on inter-level interactions [21].
•	The Shift Index relates to three composite indices: foundations, flows, and impact. The indices act as waves for change, as the authors see the interactions in a sinuous evolution, in which the processes overlap and the momentum is driven by all three forces [25].
As of 2021, the MLP has not been analyzed for Romania and constitutes the next step in our research, while the Shift Index was evaluated for this country in Voicu-Dorobantu et al. (2011) [26], Paraschiv et al. (2012) [27], and Voicu-Dorobantu (2015) [28].
Energy policy has to be informed by evidence related to (1) energy supply and security, (2) environmental impact and pollution, and (3) competitiveness and economic development [12]. We used all three dimensions in the clustering and scenario analysis in the following sections. We briefly illustrate in the following paragraphs how each of these analytical dimensions was explored in the case of Romania and the CEE region.
East-West Divide in the Energy Transition
CEE has distinctive features that make it more vulnerable to the energy transition. It has very high energy intensity and associated greenhouse gas (GHG) emissions [29,30]. As we show in this article, air pollution scores are the highest in Europe for countries in this region. In terms of energy production, CEE countries rely much more heavily on coal-fired power stations than Western Europe. This dual imminence of the transition, due to pollution and poor alternatives to the current quantity of coal-based energy production, constitutes the region's energy transition conundrum. In comparison, the environmental transition in the region has been more readily accepted [31], given the relatively limited industrial exploitation of its territories. Energy dependency is reinforced by relative poverty in the region, as it is only in the newer member states from CEE that there are regions with a GDP level lower than half the EU average. It is essential to understand that in a context of insufficient institutional capacity, which many of the CEE countries are facing, the implementation of labor reconversion programs is rendered more difficult. Their historical economic pathway [32,33] and their "dependent capitalism(s)" [34][35][36][37] add another layer of difficulty to diversifying employment and developing higher added-value jobs. Low regional competitiveness [38] also means that there are fewer prospects for internal migration and labor reconversion.
Technological solutions and the availability of alternative energy production are challenging in general [39], but for CEE countries, given their low innovation capacity and R&D spending [40], they become an even greater challenge. Innovations can substitute the incumbent commodity, its revenues, and its market margin [41]. The energy sector uses a blend of many energy service technologies, which makes socially ideal solutions possible because it preserves flexibility in energy supply [42]. In this sense, progress in energy technology is powered by a convergence of individual technologies to provide a certain energy utility and by spillovers of information (i.e., the use of a technology beyond its first location) [43]. Regarding the Green Deal's objectives, local companies have to have the innovative capacity to adapt to and adopt new non-polluting technologies or processes [44]. This is especially challenging for lagging regions in Romania and other CEE countries, given companies' weak connectivity to knowledge-transfer networks [45] and the weak domestic eco-innovation capacity [26].
The classical difference between core and peripheral economic growth [46] is also valid in the case of energy poverty [47], given that in the countries of southeastern Europe the incidence of this problem is considerably higher [48]. Ultimately, customers bear the burden both of a stable and secure electricity supply scheme and of the transition to a lower-carbon model. The challenge is how to meet these aims while simultaneously maintaining open markets that provide customers with fair pricing and protect the most vulnerable [49]. The Energy Union builds on previous Commission documents and seeks to position "citizens at its core" by investing in the transition and reducing bills using emerging technology, encouraging full market engagement, and protecting disadvantaged customers [50].
Energy Mix and Coal Phase-Out
Coal is sometimes viewed as the cornerstone of the economies of coal-mining areas. Looking more carefully, it is clear that coal is not only an enormous burden on the environment and human health, but that mining and burning coal also raises the cost of public resources. As a result of industrial expansion, areas with large coal industries have become associated with air pollution, soil depletion, and socio-economic loss. However, we should also consider that mining is a traditional activity, and the coal industry has shaped local history, identity, and jobs, transforming them into assets for various other sectors such as renewable energy. This shaping allows for relevant opportunities for regional development and job creation, even as the world gradually moves away from fossil fuels due to their negative impact on health and the environment.
Although coal remains a key fuel in the European energy mix as it represents a fifth of the EU electricity generation mix and three-quarters of CO 2 emissions from the EU electricity sector, according to Bruegel [51], the transition to cleaner sources of energy and advanced technology is imperative to fulfill the EU's promise to reduce CO 2 emissions by at least 55% by 2030 and to become the world's first climate-neutral region by 2050.
The European coal industry employs about half a million workers in direct and indirect operations (185,000 workers in coal mines, 53,000 workers in coal power plants, and 215,000 jobs in indirect activities related to the coal supply chain) [52]. It is projected that by 2030, around 160,000 direct jobs will be lost. Based on a carefully orchestrated restructuring phase in which green energy plays a central role, regional growth would generate new job opportunities. In order to ensure that no region is left behind in this process of transition, the Commission has also initiated the "Initiative for Coal Regions in Transition," which works as an open forum that has brought together all interested actors in sharing information and exchanging experiences in a bottom-up approach to a just transition. Specially designed as a non-legislative feature of the "Clean Energy for all Europeans" package [53], the forum aims to mitigate the social effects of the low-carbon transition. Nevertheless, coal remains a significant political bottleneck in the EU's decarbonization process; therefore, this subject is tackled further in the following sections.
Among the EU countries, the largest coal reserves are in Poland, Romania, the Czech Republic, Spain, and Germany [54]. Western European member states have been facing the challenges of the energy transition head-on, and as such, they have implemented a series of measures designed to counter its negative impact and comply with the coal phase-out process [55,56]. The interconnected nature of coal mining and coal-fired generation is consistent with the fact that coal has historically been a source of electricity linked to domestic output capacity. For example, the figures from 1991 show that whereas Poland had 116% self-sufficiency in coal (self-sufficiency being calculated as the share of domestic production in national coal use [57,58]) and a 78% share of coal in total primary energy use, the United Kingdom and Germany, with coal self-sufficiency of 87% and 95%, respectively, had shares of coal in total primary energy use of only 29% and 33%, respectively [59]. These behaviors were observed across Europe even considering the lack of rivalry between coal mines since the late 1950s, when imported non-domestic coal prices plummeted sharply [60][61][62]. In those difficult times, many Western European countries kept their coal mines open due to their reserves and local historical lifestyles. Polish coal mining has recently become globally uncompetitive [63], and Germany is more committed than ever before to the coal phase-out [55,62]. Now, only newer member states in Europe rely on coal for 20% to 50% of their total energy needs: Bulgaria, the Czech Republic, Greece, Poland, Romania, and Slovakia.
Coal-based energy production is not only very polluting but also highly inefficient [64]. Many of the coal-production facilities are technologically outdated, having been built in the communist period. Therefore, the frequency of coal power plants with the lowest efficiency (around or below 30%) is higher in eastern European countries [54]. Coal is not an efficient fuel base in general, as even the most recent production facilities in Germany only reach a 39% energy production efficiency [54]. Moreover, the high power plant efficiencies at coastal sites in northern Europe are also due to the availability of cold water for power plant cooling [54]. The depopulation of coal regions in Romania only adds to the low energy efficiency of the two plants we assessed in this article.
The prospect of mass unemployment in the coal regions is one of the primary reasons behind delays in the transition process [65]. Delaying the coal phase-out process ensures a natural exit of coal-related employees into retirement. However, Oei et al. (2020) [56] showed that, despite the negative impact on coal regions in Germany in terms of losses in output, income, and population, a more rapid phase-out would also result in a quicker recovery, based on Germany's internal migration and demographic changes.
Measures have involved both targeted local interventions for communities, addressing socio-economic costs [56,66], and national-level policies and strategies to ensure energy supply [59,67,68]. At the intersection between national and local measures lies the necessity and added value of an in-depth national diagnostic of the energy system's political economy. Metzger et al. [69] recently pointed out that national governments need to develop their energy systems with both a higher degree of flexibility and better operations planning. We argue that evidence-based policy-making can best address emerging vulnerabilities of energy systems and the energy transition.
A Brief Overview of the Romanian Context
Romanian energy production facilities (including coal-based power units) were mainly constructed before 1990, starting in the 1970s (similar to many other post-communist countries, such as Poland or Hungary), and the oldest facilities are approaching the end of their lifetime [70]. The main coal basins are located in the Jiu Valley (Hunedoara County, west region) and in Gorj County (southwest region). However, during the past 30 years, mining activity has declined, especially in the Jiu Valley. As Barbu (2020) [70] showed, between 1997 and 2017 the number of mining perimeters in operation decreased from 16 perimeters covering 163.35 km² to four perimeters covering 22.3 km².
Nowadays, most active coal mines are located in two development regions (namely, the South-West Oltenia Region and the West Region) and are concentrated in Hunedoara and Gorj Counties, which are responsible for 97% of the electricity produced from coal.
Hunedoara County has had an industrial tradition since it was part of the Austro-Hungarian Empire, but the communist period transformed this county into a real center of heavy industry. The county's economic model was centered around the extractive and processing industry, mainly due to its rich coal resources and steel production. The Jiu Valley, a region located in Hunedoara County on the border with Gorj County and famous for its coal production, had at the end of the communist period a population of about 140,000 people, of which about 45,000 were coal workers, across 15 mines. Currently, the Jiu Valley population has decreased significantly, as has the number of coal workers, which now stands at around 11,000. The current situation of the county is most eloquently illustrated by the evolution of Hunedoara, which became the largest mono-industrial city in the country during the communist period, its population growing from 4800 inhabitants in 1930 to 90,000 inhabitants at the fall of communism, when the steel plant in Hunedoara had 20,000 employees. ArcelorMittal bought the steel plant, and at the beginning of 2020, before the outbreak of the pandemic, it registered only 640 employees. The reduction of extractive activity and the obsolescence of the economic model around which the whole county was built (heavy industry) eventually caused the decline in the county's population and the strong emergence of the phenomenon of "shrinking cities."

Gorj County had a population of about 373,000 inhabitants at the last census, conducted in 2012. Of these, 52.5% lived in rural areas and 47.5% in urban areas, so agriculture still plays a somewhat important role in the county's economy. Over 50% of the active workforce is engaged in the agricultural and industrial sectors. Extractive activities and electricity production dominate the county's industry.
There are significant lignite resources in the Rovinari and Motru Basins, the region where the lignite mines that supply the Oltenia Energy Complex with raw materials are located. Both Hunedoara County and Gorj County suffer from depopulation. The massive migration of the population over the last 20 years to neighboring counties, primarily to regional university centers (Timiș County for people from Hunedoara County and Dolj County in the case of Gorj County), has led to changes in the demographic structure of the region. This change is evident across the entire Jiu Valley, especially given that young people who chose to study in university centers in neighboring counties rarely returned to their native counties at the end of their studies. The declining numbers of both students and teachers have had a significant negative impact on the region's economic development, as the labor market is concentrated around the mining sector and does not offer many other sectoral opportunities.
Within Romania's energy production mix, coal-based energy production represented, at the end of 2019, around 23-24% of the total (mainly lignite, more than 90%), increasing significantly during the winter months. At the end of 2019, the most widely used primary resource was hydropower (approx. 27% of the total), followed by coal and by nuclear energy (19%). Additionally, in 2019, oil and gas produced around 16% of the total, while renewable resources such as wind and photovoltaic generated more than 14% of the total energy [71].
Regarding the electricity production mix, the following aspects have to be considered. In 2019, compared to 2018, production decreased for most primary power sources, with reductions between 0.94% for nuclear production and 13.56% for oil and gas production. At the same time, there were notable increases in production from renewable sources: wind (+7.14%), biomass (+27.56%), and photovoltaic (+0.34%). Hydropower production decreased by 10.28% compared to the previous year. According to the Transelectrica annual report [71], this situation was caused by the decrease in the hydraulicity of inland rivers from 97% in 2018, a normal year, to 85% in 2019, a subnormal year. However, given that the production of renewable sources is very volatile (variations in production of over 1000 MW between concomitant intervals), the integration of wind power plants into the National Electrical System was facilitated, to no small extent, by the variation of production in the hydropower plants.
Materials and Methods
This article focuses on possible scenarios to tackle this complex issue in Romania through a three-pronged methodology: (1) clustering Romania with other similar countries from the point of view of the Just Transition efforts (i.e., the energy mix and the socio-economic parameters), (2) analyzing the potential evolution of Romania's energy mix from the point of view of the thermal efficiency of two major power plants (Complexul Energetic Hunedoara, CEH, and Complexul Energetic Oltenia, CEO) and of the systemic energy losses, and (3) providing insights into the socio-economic context (economic development and labor market transformations, including the effects on vulnerable consumers) of the central coal regions in Romania. To this end, we used three specific methodologies, related to cluster analysis, the evolution of thermal efficiency, and scenario development; the methodological framework is presented in the following figures with the steps (Figure 1) and the logical diagram (Figure 2).
Cluster Analysis Methodology
The cluster analysis methodology used was traditional k-means clustering. Clustering was done at four different levels, as seen in Figure 3, with a final clustering aggregating all layers. We ran those four clustering levels on two types of data: raw data and standardized data. The standardization step rescales the range of the continuous initial variables so that each of them contributes equally to the analysis [72].
All countries clustered with Romania in any of the generated results were considered for a more focused view beyond the data of relevance, best practices, and use cases.
For the most recent year available (2018 in most cases), all data for the EU Member States are published by Eurostat. The following datasets were used (with the online data code for each dataset according to Eurostat). Finally, the overall clustering integrated all variables to provide an EU-wide image of the researched issue.
For the clustering, we used StatPlus, which allows for k-means clustering. K-means "is a method that partitions n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. k-means clustering minimizes within-cluster variances (squared Euclidean distances) but not regular Euclidean distances" [73]. Two levels of aggregation are necessary if the number of items in a cluster is larger than 7: first with k = 5 and then with k = 3 (alternatively, k = 5 and k = 4 were tried for the second level of aggregation, but there were no significant differences in the clusters resulting from the second aggregation).
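The two-step procedure described above (standardization so that every variable contributes equally, k-means with k = 5, then re-clustering the cluster centers with k = 3) can be sketched as follows. This is a minimal illustration, not the StatPlus implementation; the data here are random placeholders for the 27 member states and four hypothetical indicators.

```python
import numpy as np

def standardize(X):
    # z-score each column so that every variable contributes equally
    return (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means with squared-Euclidean assignments."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each observation to its nearest center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # recompute centers; keep the old center if a cluster went empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# illustrative data: 27 "countries" x 4 indicators (random placeholders)
rng = np.random.default_rng(1)
X = standardize(rng.normal(size=(27, 4)))

labels5, centers5 = kmeans(X, k=5)   # first-level clustering, k = 5
labels3, _ = kmeans(centers5, k=3)   # second-level aggregation, k = 3
aggregated = labels3[labels5]        # each country's aggregated cluster
```

The second call clusters the five first-level centroids rather than the raw observations, which mirrors the two-level aggregation used when a cluster contains more than 7 items.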
Efficiency Analysis Methodology
To highlight the efficiency of coal-fired power plants in Romania compared to those in the European Union, we consolidated the findings obtained by Alves-Dias et al. (2018) [54], who estimated the thermal efficiency of individual power plants based on the available information on the installed capacity, age, and type of power plant. Efficiency is one of the most important technical factors for assessing a power plant's performance, since it is linked to competitiveness: lower efficiency implies higher fuel consumption, which results in higher production costs and CO2 emissions.
The CO2 emissions of a power plant are proportionally related to the fuel used, the fuel consumed during the year, the generated electricity, and the efficiency. The following formula was used:

CO2emissions = Intensityfuel × (generation × 3.6)/efficiency (1)

where: Intensityfuel: the CO2 content per calorific energy in the fuel, expressed in tons CO2 per TJ; generation: annual net generation of the power plant in MWh; efficiency: the estimated thermal efficiency of the plant (dimensionless); and CO2emissions: annual emissions in kg. Note that the 3.6 factor was used to convert all variables to the same measure unit, the joule (as 1 MWh = 3.6 Gigajoule).
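As a small numerical sketch of this relationship between generation, efficiency, and emissions, the following uses illustrative values only (the fuel intensity and generation figures are placeholders, not data for CEH or CEO):

```python
def co2_emissions_kg(intensity_t_per_tj, generation_mwh, efficiency):
    """Annual CO2 emissions in kg.

    intensity_t_per_tj: CO2 content of the fuel, t CO2 per TJ (= kg per GJ)
    generation_mwh:     annual net generation, MWh
    efficiency:         thermal efficiency, dimensionless (0..1)
    """
    # fuel energy burned = electricity out / efficiency; 1 MWh = 3.6 GJ
    fuel_energy_gj = generation_mwh * 3.6 / efficiency
    return intensity_t_per_tj * fuel_energy_gj  # (kg/GJ) * GJ = kg

# illustrative lignite plant: ~101 t CO2/TJ, 1 TWh net generation, 30% efficiency
e = co2_emissions_kg(101.0, 1_000_000, 0.30)
print(round(e / 1e9, 2), "Mt CO2")  # prints 1.21 Mt CO2
```

Note how the low efficiency in the denominator directly inflates the emissions: the same generation at 39% efficiency would emit roughly a quarter less CO2.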
The dataset used to calculate the coal-fired power plants' thermal efficiency was from the JRC Open Power Plants Database (JRC-PPDB-OPEN). To emphasize each energy source's contribution to the electricity production mix, we used the data provided by Transelectrica for 2019.
In order to calculate the energy losses from the process of transforming gross energy into energy available in the network for consumption, we used the Transelectrica methodology, which is based on the following formula:

NP = GAP − PCOS − SCGS − PLTB (2)

where: NP (net power) = the power that the generator can deliver to the network for marketing purposes; GAP (gross available power) = the total electricity produced by the generator; PCOS = the power consumed in own services; SCGS = the share of consumption of general services; and PLTB = the power losses in the transformer block. We assumed generation (from Equation (1)) = NP (from Equation (2)) to correlate the two analyses.

To calculate the pollution impact of the CEO and the CEH, we used the companies' 2017 and 2018 environment and annual reports and the data provided therein regarding CO2, SO2, NOx, and PM 2.5 emissions. Next, starting from the amounts of pollutant emissions (SO2, NOx, and PM 2.5) at the national level, we analyzed the impact that the complete closure of these two complexes would have on reducing air pollution. In addition, we investigated whether such a scenario is relevant for reaching the 2030 air pollution targets imposed by the EU Green Deal. The impact was calculated at the national level by subtracting the pollution generated by the two energy complexes from the current air pollution levels.
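The loss chain in Equation (2) and the national-impact subtraction can be sketched as follows; all figures are illustrative placeholders, not reported values for CEH, CEO, or Romania:

```python
def net_power(gap, pcos, scgs, pltb):
    # Equation (2): subtract own-services consumption, general-services
    # consumption, and transformer-block losses from gross available power
    return gap - pcos - scgs - pltb

def loss_share(gap, net):
    # fraction of gross generation lost before reaching the network
    return (gap - net) / gap

# illustrative plant: 100 units gross, ~14% total losses (as for coal in Figure 7)
net = net_power(gap=100.0, pcos=9.0, scgs=3.0, pltb=2.0)
print(loss_share(100.0, net))  # prints 0.14

# national impact of closing the two complexes (illustrative tonnages)
national = {"SO2": 120.0, "NOx": 95.0, "PM2.5": 30.0}
complexes = {"SO2": 60.0, "NOx": 20.0, "PM2.5": 8.0}
after_closure = {k: national[k] - complexes[k] for k in national}
```

The `after_closure` dictionary corresponds to the subtraction described in the text: national pollution levels minus the emissions attributed to the two energy complexes.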
Scenario Methodology
Concerning scenarios, the methodology applied was again the classical version, according to Figure 4. The scenarios' primary purpose is to assess changes occurring over a long period, evaluate their effects, and inform decision-makers by suggesting strategies and policies to adapt to these changes. The scenarios are not intended to reflect all potential future circumstances; instead, they provide plausible answers to significant uncertainties and critical questions about the future development of an organization or a society.
A taxonomy of approaches to scenario modeling is created by defining a classification according to the distinction suggested by Rayner and Malone (1988) and Robinson and Timmerman (1993) (focused on values, meanings, and motivations) [74,75]. This distinction can be seen along with the exposure correlation (local or global). Incorporating the subjective and interpretive viewpoints in a single paradigm is well established in the studies to date [76,77]. For quantitative evaluations, the recent approach in the field is to combine critical qualitative and narrative scenarios with global modeling [78,79], in which case it is also possible to use multifaceted evaluations at the sub-global level in multiscale assessments (MAs).
In this literature analysis, three categories of scenarios were identified: external (in which the determinants are external factors that participants in the affected system cannot influence), internal (in which the emphasis is on internal factors that can be fully influenced), and systemic (as in the present research, which includes both external and internal factors). The most popular method of integrating elements is the matrix, represented in a scenario-axes technique, as shaped by van't Klooster and van Asselt (2006): the scenario axes serve as the backbone of scenario development, as building scaffolding, or as a foundation [80]. The widespread representation issue is that this technique uses only the two most critical driving forces (as axes) with a decisive impact on the system analyzed. This approach reflects the idea that the primary source of errors in scenario modeling is the inability to integrate multiscale phenomena, such as the regional approach as opposed to the global approach. Models cannot account for evolutionary dependencies between global and regional structures/networks, such as the advent of irreversible phenomena [81][82][83].
Results
The results are presented below according to the three methodological steps and are further discussed in the Discussion section.
Cluster Analysis
The first analysis applied to the raw and standardized data was the correlation check.
The following correlations were discovered:
The correlations are presented in Figure 5. Clustering algorithms were run, and the results obtained are presented in Appendix A.
Significant differences appeared between the clusters created from raw data and standardized data, which led to a need to consolidate data to eliminate the erroneous weight of each correlated variable in the final results. This consolidation took place in the standardized data table; a treatment acknowledged as reducing biases in the analysis, with the following consolidation measures taken:
•	All pollutants were clustered into one variable (the mean of the three pollutant variables).
•	All variables with correlations higher than 75% were eliminated; therefore, % population in M&Q was eliminated.
•	In the second application of the correlation matrix, the only correlations higher than 75% were energy intensity vs. pollutant (78%) and no. of companies in M&Q vs. population in M&Q (75%), which led to the elimination of energy intensity and population in M&Q from the analysis.
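The consolidation rule above (drop one variable from every pair correlated above 75%) can be sketched generically; the data and column names here are hypothetical stand-ins for the Eurostat indicators:

```python
import numpy as np

def drop_correlated(data, names, threshold=0.75):
    """Greedily drop the later variable of each pair with |r| > threshold."""
    corr = np.corrcoef(data, rowvar=False)
    keep = list(range(len(names)))
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if i in keep and j in keep and abs(corr[i, j]) > threshold:
                keep.remove(j)  # drop the second variable of the pair
    return data[:, keep], [names[k] for k in keep]

# illustrative: 27 observations, one near-duplicate pair of variables
rng = np.random.default_rng(0)
x = rng.normal(size=(27, 1))
data = np.hstack([x,
                  x + 0.05 * rng.normal(size=(27, 1)),  # highly correlated with x
                  rng.normal(size=(27, 1))])
reduced, kept = drop_correlated(data, ["energy_intensity", "pollutant", "gdp"])
print(kept)  # the near-duplicate "pollutant" column is dropped
```

This mirrors the treatment in the text: correlated variables would otherwise carry double weight in the k-means distance computation and bias the resulting clusters.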
After this consolidation, data were considered suitable for an unbiased running of the clustering algorithm. Due to the consolidation, only integrative clustering was considered, as, for instance, Stage 2 was obsolete.
The unbiased analysis generated five clusters, as follows (see also Figure 6): Table 1 shows the level of pollution produced by the two energy complexes responsible for 97% of the electricity generated from coal sources, CEH and CEO. Using Equation (1) from the methodology [54] and the datasets provided in Appendix B, we presented the average thermal efficiency as well as the emissions estimated for every coal-based power plant (CO2, SO2, and PM 2.5). Moreover, given the average years of CEH and CEO, the efficiency is expected to decrease, while without any additional investment in new technologies, greenhouse gas emissions are expected to increase. Simultaneously, the lack of investment and low thermal efficiency will be reflected in the level of gas emissions and the energy losses. The coal sector has one of the most considerable losses in gross generated power. Figure 7 shows that more considerable losses in the energy production process are incurred for coal, oil, and gas (approximatively 14% of the Table 1 shows the level of pollution produced by the two energy complexes responsible for 97% of the electricity generated from coal sources, CEH and CEO. Note: All values are in Gigagrams. * Extrapolation based on electricity produced and similar levels of pollution with CEO. ** Calculated using Equation (1).
Efficiency in Energy Production Analysis
Using Equation (1) from the methodology [54] and the datasets provided in Appendix B, we presented the average thermal efficiency as well as the emissions estimated for every coal-based power plant (CO2, SO2, and PM2.5). Moreover, given the average age of CEH and CEO, the efficiency is expected to decrease, while, without any additional investment in new technologies, greenhouse gas emissions are expected to increase. Simultaneously, the lack of investment and low thermal efficiency will be reflected in the level of gas emissions and the energy losses. The coal sector has one of the most considerable losses in gross generated power. Figure 7 shows that the most considerable losses in the energy production process are incurred for coal, oil, and gas (approximately 14% of the gross energy production for both categories), which have an essential share in the energy production mix (16% for oil and gas, and 24% for coal). The problem caused by these losses is all the thornier for coal-fired power plants, as they are financially inefficient due to the high costs of CO2 allowances. A loss of 14% of the gross energy produced by these power plants does nothing but put additional pressure on the budgets of the two energy complexes.
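Equation (1) itself is not reproduced in this excerpt; the sketch below shows one common form such an efficiency-based emissions estimate can take (fuel energy implied by electricity output and thermal efficiency, multiplied by a fuel emission factor). All figures are illustrative placeholders, not the Appendix B data.

```python
# Hypothetical sketch of the kind of estimate behind Table 1: the numbers
# below are illustrative placeholders, not the paper's Appendix B data, and
# the formula is a common general form, not necessarily Equation (1) itself.
def estimate_emissions(electricity_gwh, thermal_efficiency, factor_t_per_gwh_fuel):
    fuel_gwh = electricity_gwh / thermal_efficiency  # gross fuel energy input
    return fuel_gwh * factor_t_per_gwh_fuel          # tonnes emitted

# Example: 1000 GWh of electricity at 30% thermal efficiency with an
# assumed lignite-like CO2 factor of 340 t per GWh of fuel energy.
co2_tonnes = estimate_emissions(electricity_gwh=1000.0,
                                thermal_efficiency=0.30,
                                factor_t_per_gwh_fuel=340.0)
```

The same structure makes the efficiency argument in the text concrete: at lower thermal efficiency, the fuel term (and hence every emission estimate) grows for the same electricity output.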
Scenario Development
The usual method for developing scenarios is to plot them on a matrix structure, as described in the methodology, starting from two critical factors. Based on previous stages of our research and literature in Romania's coal mining regions, we considered the two critical factors to be economic growth and energy efficiency. Thus, the scenarios presented in Figure 8 are proposed in an exploratory manner. A detailed description of these scenarios, validated by qualitative data collection that would translate them into normative scenarios, is the next step in our research.
The energy efficiency considered here, for scenario development, refers to Romania's ability to adhere to the Just Transition in the coal mining regions and to shift its energy mix to a more sustainable one. Economic growth achieved in the traditional manner, by pushing production, is energy-intensive; therefore, achieving economic growth while maintaining high energy efficiency is challenging. The goal of the scenarios is to allow for proposals of specific policies that might increase the probability of the occurrence of scenario B from Figure 4.
Scenario A assumes that Romania would lose its economic drive due to global crises and diminishing competitiveness. However, it has managed to go through the Just Transition, and the energy efficiency of the entire economy is on the rise, with the support of renewables.
Scenario B might be considered as the best-case scenario and assumes a successful passing through the Just Transition while maintaining economic growth. This scenario would ask for smart policies that increase the share of services in the economic growth.
Scenario C may be considered the worst case: a failure to improve energy efficiency and a failed transition to a greener economy, coupled with a loss of competitiveness and growth.
Scenario D indicates a continuation of the current status quo.
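The four scenarios map onto the 2×2 matrix of the two critical factors; a small sketch makes the quadrant assignment explicit. Assigning D to "growth without efficiency gains" is our reading of "current status continued", not something stated in the text.

```python
# Sketch of the 2x2 scenario matrix described above, with economic growth
# and energy efficiency as the two critical factors. The quadrant-to-letter
# mapping follows the scenario descriptions; D's quadrant is our assumption.
SCENARIOS = {
    ("low growth",  "high efficiency"): "A",  # green but stagnant
    ("high growth", "high efficiency"): "B",  # best case
    ("low growth",  "low efficiency"):  "C",  # worst case
    ("high growth", "low efficiency"):  "D",  # status quo continued (assumed)
}

def scenario(growth_high: bool, efficiency_high: bool) -> str:
    key = ("high growth" if growth_high else "low growth",
           "high efficiency" if efficiency_high else "low efficiency")
    return SCENARIOS[key]
```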
Discussion
We focused our article on Romania as a case study because, according to our analysis, it faces the highest vulnerability with regard to the ongoing energy transition in the European Union. As such, we accounted for both systemic vulnerabilities and policy measures. Romania's situation is thus in contrast to other member states in CEE such as the Czech Republic, which has put forward mediation measures to counter the coal phase-out's negative impacts and take full advantage of the Just Transition Mechanism. Even Poland, home to the largest coal-burning power station in Europe and still actively pursuing coal exploitation and energy production, has managed to establish new pathways of transition and regional transformation [66,86].
Based on our data, Romania is estimated to lose approximately 25% of its current production facilities given the coal phase-out and up to 40% if hydrocarbons are targeted under the Green Deal. Most of the energy production capacities to be lost are coal-based. The majority of those facilities, built during the communist period, have already surpassed their standard period of life, which, on the one hand, brings the country closer to the target of carbon neutrality. On the other hand, however, this creates significant economic and social pressures in the affected regions due to narrow specialization and high reliance on the extraction of coal. In Romania, there are two regions where this problem is most pressing and where public policy support for the transition has to be specifically focused: the Vest and Sud-Vest regions. According to Eurostat data, the energy efficiency of the coal plants in these two regions is on average 30%, well below the EU average of 35%. In terms of air pollution, more than 50% of total SO2 and NOx emissions in these regions are released from coal mining activities. As such, innovative solutions are needed to mediate the transition's shock and change the local development models.
In the transition to sustainable energy, current Eurostat estimates place cumulative job losses in the coal sector, by 2030, at between 3000 and 6000 in the Vest region and between 6000 and 15,000 in the Sud-Vest region. These regions are profoundly affected by deindustrialization and out-migration, which have led to "shrinking cities" (i.e., urban areas faced with a rapid and drastic decrease of the population). Romania is facing a decline in human capital and reduced flexibility for reconversion and transition, owing to a narrow horizon of regional specialization, an exodus of workers, a lack of resources allocated to entrepreneurs and start-ups, a deterioration of primary education and VET training, and an overall precarity of entrepreneurial culture [87][88][89]. All these effects are leveraged by the phenomena mentioned earlier. According to the European Commission 2019 Annual Report on Intra-EU Labor Mobility, 173,000 Romanians were hired in other EU countries in 2018, up 7 percent from the previous year. Romania is therefore the EU Member State sending the most active movers; their numbers could be much higher in the next few years if the professional reconversion and reintegration into the labor market of workers from the sectors affected by the green transition are not managed efficiently.
Romania's energy mix is well balanced compared to other member states from CEE, and the 2030 climate and energy framework targets have mostly been reached. However, the energy consumption from coal-fired power plants increases during the winter months from 24% to 40% of the energy mix, meaning that in the short run, in the case of mining closure, Romania would need to rely on imports.
Investments in the energy infrastructure are needed primarily because of the low efficiency and high pollution generated by the current facilities. Second, apart from Bulgaria, Romania has the worst situation in the CEE region in terms of arrears on utility bills (14.4% of the country's population have delays in payment of utilities), and almost 10% of households fail to keep their homes adequately heated. In the absence of investments in alternative energy production sources, the closure of coal-based energy production will worsen these indicators.
Climate transition will significantly influence public and private spending in the coming years. Impact assessments and knowledge-sharing will be of paramount importance in ensuring that public authorities, investors, companies, cities, and people across the EU can develop the proper tools to engage in a just transition for all. Evidence-based policies and community-tailored solutions can contribute significantly to the successful pursuit of the Green Deal objectives at the subnational level. Therefore, this article can contribute to the evidence-based policy-making related to the Just Transition of coal regions in Romania. Our findings suggest that given the complexity of Romania's energy transition and its socio-economic costs, the commitment to the Green Deal's objectives is fundamentally linked to the extent of the political will at a national level.
The shift of scenarios from prospective to normative to implementable policies is based on various layers of data analysis. It starts from a status quo assessment, proceeds through identification of best practices, forecasting with standard econometric methods, and the proposal of prospective scenarios, and ends with the definition of policies meant to implement the latter. These policies are meant either to increase the probability of a particular scenario (such as the best case) or to provide mitigation if another scenario occurs (such as the worst case).
The scenarios proposed for the Romanian coal mining regions rely on the previous assessments of the regions themselves and of the Just Transition requirements and desiderata. Plagued by the shrinking cities phenomenon and unemployment, these mono-industrial regions are confronted with the unprecedented need to shut down what is perceived to be an essential industry. Therefore, tracking best practices from other similar countries is relevant: apart from the cluster results that place Romania next to Estonia, Croatia, Latvia, and Finland, the measures used by states such as Germany or Poland (in clusters of their own) can also be integrated. The fact that Romania is clustered with smaller countries, which are more agile in deploying policies, further along in developing alternative business ecosystems, and different in their economic make-up, is both interesting and challenging. It forces policy-makers to look beyond Romania's usual cluster partners of Poland and Bulgaria. As intended in our analysis, clustering is the first step toward identifying best practices, the latter being the subject of a different stage in our research project and, therefore, not covered in this article.
Another element to consider in the transfer to policies is the forecast of energy efficiency, based on current data. The two power plants are beyond their standard period of use, and their energy efficiency is decreasing, so the trend is not hard to plot. Further modeling integrating four different options (business as usual, small alterations meant to keep the current level of efficiency from decreasing, extensive alterations, and complete shut-down) must be plotted in a dedicated in-depth analysis of the two power plants. This econometric forecast requires data unavailable at the time of our research and constitutes a separate section of our research.
Finally, the proposed prospective scenarios are straightforward and have been explained above. Nonetheless, the proposal of policies for each of them must integrate the growing complexity of the issue. For instance, the stakeholders involved in the process include at least the following: the European Union, the European Commission, the national government, the regional (county) administration, the local administration, the business environment at European, national and regional level (considering the integration of coal in various supply chains), the employees of the two power plants and their families, the citizens in those regions relying on coal for heating, the unions of the employees, and green NGOs. This list is by no means exhaustive. However, it shapes a very complex landscape of stakeholders, at times with opposing needs and wants. The fact that coal is mainly located in two Romanian regions means that a multi-level perspective is needed for a proper, deep-running, transformative regional shift; therefore, the next research lines should focus on regional holistic models, building on the econometrics of the RHOMOLO model [90], a spatial computable general equilibrium model created by the Joint Research Centre for the European Commission, focusing on EU regions.
The normative scenarios will have to tackle this complexity, maintaining the idea that, ultimately, regardless of the data provided and the in-depth analysis, a shut-down is a political decision. However, anticipatory governance must allow for the data analyses and the resulting scenarios to be provided so that the political decision takes all implications into account. This article is a first step toward proposing anticipatory governance for the coal mining regions of Romania. In the second stage of our research, as presented in Figure 2, the future studies stage, the topics of MLP for Romania (identification of niches and drivers for a specific regional socio-technical regime), a reshaped Shift Index for the two coal-mining regions, and the adequacy of the scenarios in a public policy setting (at national and local level) will be proposed in an integrated document. Whether this translates into real policies in the next period is beyond the article's scope and its authors' leverage.
Acclimation temperature changes spermatozoa flagella length relative to head size in brown trout
ABSTRACT Temperature is a ubiquitous environmental factor affecting physiological processes of ectotherms. Due to the effects of climate change on global air and water temperatures, predicting the impacts of changes in environmental thermal conditions on ecosystems is becoming increasingly important. This is especially crucial for migratory fish, such as the ecologically and economically vital salmonids, because their complex life histories make them particularly vulnerable. Here, we addressed the question whether temperature affects the morphology of brown trout, Salmo trutta L. spermatozoa. The fertilising ability of spermatozoa is commonly attributed to their morphological dimensions, thus implying direct impacts on the reproductive success of the male producing the cells. We show that absolute lengths of spermatozoa are not affected by temperature, but spermatozoa from warm acclimated S. trutta males have longer flagella relative to their head size compared to their cold acclimated counterparts. This did not directly affect sperm swimming speed, although spermatozoa from warm acclimated males may have experienced a hydrodynamic advantage at warmer temperatures, as suggested by our calculations of drag based on head size and sperm swimming speed. The results presented here highlight the importance of increasing our knowledge of the effects of temperature on all aspects of salmonid reproduction in order to secure their continued abundance.
INTRODUCTION
As ectotherms, fish are directly influenced by the temperature of their environment. This makes water temperature one of the most ubiquitous of environmental factors affecting fish physiological processes, including development, growth, metabolic scope and reproduction (Brett, 1971;Casselman, 2002). Air temperatures directly affect the temperature of freshwater systems (IPCC, 2013;Isaak et al., 2010;Jonkers and Sharkey, 2016;van Vliet et al., 2011), making these habitats and their inhabitants particularly susceptible to the effects of global warming through human-induced, accelerated climate change (Almodóvar et al., 2012;IPCC, 2014). Migratory fish are especially affected by changes in water temperatures due to their complex life histories and the dependency of successive life stages on favourable thermal conditions (Crozier et al., 2008;Mathes et al., 2010). The predicted, continued increase in global air temperatures over the next century (IPCC, 2013) therefore gives cause for concern regarding the reproduction, and ultimately persistence, of migratory fish species, including the ecologically and socio-economically important salmonid family.
Salmonids perform anadromous or potamodromous reproductive migrations to their natal spawning grounds (Hinch et al., 2005). As capital breeders, they migrate in a catabolic state, relying entirely on endogenous energy reserves to fuel final maturation, migration and reproduction (Kinnison et al., 2003). This metabolic restriction can create direct energetic trade-offs between different facets of reproduction, especially when energy expenditure is altered by environmental factors such as temperature (reviewed by Fenkes et al., 2016). Therefore, salmonids represent ideal models for studying the impacts of temperature alterations on migratory fish reproduction.
Increased river water temperatures increase the metabolic costs of locomotion in fish, affecting the initiation (Cooke et al., 2008; Juanes et al., 2004; Quinn and Adams, 1996; Robards and Quinn, 2002), progress (Berman and Quinn, 1991; Goniea et al., 2006; High et al., 2006) and ultimately the success of salmonid migrations (i.e. prespawning mortality; Hinch et al., 2012). While these effects are well established, the sub-lethal consequences of thermal challenges for post-migratory reproductive behaviour and physiology remain poorly understood (Fenkes et al., 2016; Nadeau et al., 2010). Recently, it was shown that sperm swimming speed (a reliable predictor of fertilisation success; Simmons and Fitzpatrick, 2012) of brown trout, Salmo trutta, is reduced in males that are acclimated to increased water temperatures, compared to males kept at temperatures normally experienced during the reproductive season of the species (Fenkes et al., 2017). The authors (Fenkes et al., 2017) attributed their results to a potential temperature-mediated delay in maturation and sperm production, but found that warm acclimated males compensated for this delay later on in the spawning season, when temperature-dependent differences in sperm swimming speed were no longer observed.
Differences in sperm swimming speed are commonly attributed to differences in sperm morphology, as sperm size can be positively correlated with swimming speed (Fitzpatrick et al., 2010;Simmons and Fitzpatrick, 2012). However, the relationship between sperm form and function is far from clear (Humphries et al., 2008): while increased flagellum length does theoretically increase thrust production, sperm swimming speed is negatively impacted by the drag force acting on the sperm head. It may therefore be the relative sizes of sperm cells' constituent parts, rather than their absolute lengths, which can reliably predict sperm swimming speeds (Humphries et al., 2008). Sperm morphology has been shown to vary throughout the reproductive season in several fish species [e.g.
Barbus barbus (Alavi et al., 2008); Scophthalmus maximus (Suquet et al., 1998)] and these changes may be attributed to ageing and maturation of the sperm (Alavi et al., 2008). In light of the known effects of acclimation temperature on maturation, the question arises whether temperature directly affects sperm morphology in fish such as salmonids.
Spermatogenesis and sperm release occur in distinct cycles in salmonid fishes. Early spermatozoa can be released before peak maturation, and residual spermatozoa remain afterwards, before being resorbed within the testicular lobe, beginning another cycle of spermatogenesis and release (e.g. Oncorhynchus mykiss; Billard, 1986). Morphological differences may exist between early, ripe and late spermatozoa (Alavi et al., 2008;Suquet et al., 1998). Given that exposure to increased acclimation temperature during the sperm maturation process can delay peak maturation (Fenkes et al., 2017;Lahnsteiner and Leitner, 2013) and may deplete stored energy, morphological differences in spermatozoa might therefore be expected in semen samples from differentially acclimated fish. Increased temperatures have been shown to reduce the size and speed of spermatozoa in a tropical fish (Poecilia reticulata; Breckels and Neff, 2013). However, the effects of temperature on the morphology of salmonid sperm have not been investigated to date.
Here, the morphology of spermatozoa from cold and warm acclimated male S. trutta was compared. This study described and identified different types of morphological deformations in trout sperm, and compared their prevalence in semen samples from differentially acclimated donor males. Sperm head length (L H , µm), head width (W H , µm), flagellum length (L F , µm), total sperm length (L T , µm) and head surface area (A H , µm 2 ) of photographed sperm cells were measured. Flagellum length to head length ratio (L F /L H , dimensionless) and flagellum length to head surface area ratio (L F /A H , µm −1 ) were additionally calculated to be used as morphological predictors of sperm swimming speed. In accordance with previous findings (Breckels and Neff, 2013), a difference in absolute measurements of sperm constituent parts was predicted to exist between the acclimation groups, with a possible reduction in flagellum and total sperm lengths for warm acclimated males. Based on previous evidence (Humphries et al., 2008), a positive correlation between flagellum length to head length and/or head surface area ratio and sperm swimming speed was hypothesised. The temperature of water determines its viscosity (Korson et al., 1969), which in turn has significant impacts on the drag forces experienced by microscopic, moving objects such as sperm cells, because they operate at low Reynolds numbers (Taylor, 1951). Here, the drag force (D, N) experienced by sperm heads at different water temperatures was calculated theoretically using Stokes' law and compared between sperm cells from warm and cold acclimated male donors. It was hypothesised that drag would decrease when the spermatozoa were swimming at higher water temperatures; however, due to a lack of previous evidence, we could not reliably predict whether the acclimation temperature of the male trout would influence drag.
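The drag comparison can be sketched with Stokes' law, D = 6πμaU. In the sketch below, the viscosities are standard fresh-water values for 8°C and 13°C; the head radius and swimming speed are round illustrative numbers, not this study's measurements.

```python
# Back-of-envelope Stokes' law sketch of the drag comparison described in
# the text. Head radius and swimming speed are illustrative values only.
import math

def stokes_drag(mu_pa_s, radius_m, speed_m_s):
    """Stokes' law drag on a sphere: D = 6*pi*mu*a*U (low Reynolds number)."""
    return 6 * math.pi * mu_pa_s * radius_m * speed_m_s

MU_8C, MU_13C = 1.39e-3, 1.20e-3  # Pa*s, approx. dynamic viscosity of water
a = 1.0e-6                        # ~1 um head radius (illustrative)
U = 100e-6                        # ~100 um/s swimming speed (illustrative)

d8 = stokes_drag(MU_8C, a, U)
d13 = stokes_drag(MU_13C, a, U)
reduction = 1 - d13 / d8          # drag is lower at 13 C purely via viscosity
```

At the same head size and speed, warming the water from 8°C to 13°C lowers the drag by roughly the ratio of the viscosities (about 14% here), which is the hydrodynamic effect the study's calculations probe.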
Deformed spermatozoa
Four types of morphological deformation were identified and spermatozoa were categorised accordingly either as 'normal' (no deformation; Fig. 1A), 'kink' (Fig. 1B), 'coil' (Fig. 1C), 'short' (Fig. 1D) or 'tailless' (Fig. 1E). Kink cells were characterised by a bend in the flagellum. Coil cells showed signs of repeated flagellum bending resulting in knotting. Tailless cells lacked a flagellum altogether and short cells had drastically shortened flagella compared to a normal cell. The proportion of different types of morphologically deformed sperm cells did not differ between cold and warm acclimated samples (Table 1).
Sperm morphology
Sperm head surface area (A H , µm 2 ), flagellum length (L F , µm), total length (L T , µm) and flagellum length to head length ratio (L F /L H ) did not differ between cold and warm acclimated male sperm cells (Table 2). However, a significant increase in sperm flagellum length to head surface area ratio (L F /A H , µm −1 ) was evident for warm acclimated male sperm cells compared to cold acclimated male sperm cells (Table 2, Fig. 2).
Sperm morphology effects on sperm swimming speed
Sperm flagellum length to head length ratio (L F /L H ) did not affect sperm swimming speed at 10 s post activation in either sperm activation temperature (8°C or 13°C). L F /A H ratio and acclimation temperature also did not impact on sperm swimming speed at either activation temperature, and no interaction between the terms was detected (Table 3).
Drag force
Drag force, D, on sperm heads was not affected by the acclimation temperature of the donor males (Table 4). Drag force decreased with increasing sperm activation temperature (Table 4) and this effect was similar in both male acclimation temperature groups, as evidenced by the non-significant interaction between these terms (Table 4). However, post-hoc least squares means analysis revealed that theoretical drag was significantly decreased at 13°C compared to 8°C activation temperature in warm acclimated males (t=2.47; d.f.=14; P=0.03), but not in cold acclimated males (t=1.66; d.f.=14; P=0.12).
DISCUSSION
While no differences in absolute size of sperm morphological parameters were detected, sperm from warm acclimated males had significantly higher sperm flagellum length to head surface area ratios than their cold acclimated counterparts. Morphological parameters did not affect sperm swimming speed, but theoretical drag (driven by smaller head size) experienced on the sperm heads was decreased at higher temperature (when the viscosity of water is lower), for sperm from warm acclimated males. Acclimation temperature did not affect the frequency of flagellar deformities in the S. trutta sperm.
Exposure to increased temperatures has previously been shown to result in an increase of sperm cells with pyriform (short, narrow, posteriorly compressed) heads (Merino sheep, Ovis aries, Rathore, 1968; Duroc boars, Sus scrofa, Suriyasomboon et al., 2005; Holstein bulls, Bos taurus, Vogler et al., 1993), as well as an increase in the prevalence of tailless spermatozoa (Rathore, 1968;Vogler et al., 1993) and acrosomal abnormalities (Rathore, 1968). Other flagellar abnormalities such as coiling and bending have previously been identified, but reported changes in their frequency in response to temperature are contradictory. Suriyasomboon et al. (2005) reported no change in the number of Duroc boar spermatozoa with flagellar abnormalities, while an increase with temperature was reported in Holstein bulls (Vogler et al., 1993). However, our current understanding of the effects of temperature on sperm morphology, especially in externally fertilising ectotherms, is limited. We did not identify head abnormalities in S. trutta spermatozoa, but the semen samples contained high numbers (approximately 30%) of spermatozoa with flagellar abnormalities as well as altogether tailless cells. While it is possible that these abnormalities, especially partial or complete loss of the flagellum, may be preservation artefacts, 30% is similar to previous measurements of percentages of non-motile cells in freshly extracted S. trutta semen (Fenkes et al., 2017).
In one of the first accounts of a temperature effect on sperm morphology in an ectotherm, Drosophila melanogaster spermatozoa were found to be longer (increased total length) for males kept at higher temperatures (Blanckenhorn and Hellriegel, 2002). The only previous study investigating the effect of temperature on sperm morphology in a fish showed that increased acclimation temperature was associated with a reduction in sperm flagellum length and swimming speed (Trinidadian guppy, P. reticulata, Breckels and Neff, 2013). Neither study (Blanckenhorn and Hellriegel, 2002;Breckels and Neff, 2013) investigated sperm head and flagellum morphology separately. Tropical fish species, such as P. reticulata generally operate at a much narrower thermal range compared to temperate species (Breckels and Neff, 2013) such as the brown trout used in our study. The discrepancy in this study's results and the findings for P. reticulata sperm may indicate that temperate species are more resistant to the effects of warming on their spermatozoa morphology. Therefore, longer-term exposure than implemented in this present study, perhaps throughout development, may be necessary to induce radical changes in trout spermatozoa morphology and motility comparable to those observed in tropical fish.
[Fig. 1 caption fragment: (D) 'short' cell and (E) 'tailless' cell. Photographs were taken at 400× magnification (phase contrast microscopy) for illustrative purposes; counts were conducted using 400× magnification dark field microscopy. Scale bars: 0.01 mm.]
Flagellum length to head surface area ratio is theoretically a reliable predictor of sperm swimming speed (Humphries et al., 2008). However, the findings of this study did not demonstrate a difference in the average speed of spermatozoa from samples with different average flagellum length to head length or flagellum length to head surface area ratios for either acclimation temperature treatment. Sperm flagellum length and total length have previously been linked to decreased sperm longevity [e.g. Gadus morhua (Tuset et al., 2008); Salmo salar (Gage et al., 2002)], but, similar to the present study, no effect on initial sperm swimming speed was detected. In S. salar, mid-piece size and ATP content were positively correlated, as were sperm flagellum length and sperm energy charge (Vladić et al., 2002). Thus, a longer flagellum appears to require higher, more effective ATP provision in order to allow the sperm to reach an egg, and this is provided by a larger mid-piece, containing more active mitochondria (Vladić et al., 2002). Here, however, warm acclimated S. trutta males had relatively longer flagella and smaller heads (containing the mid-piece) than cold acclimated males, as evident in the increased L F /A H ratio.
Within the low Reynolds number environment in which sperm operate, viscosity is the dominating force determining speed, while inertia is negligible. Decreased head size or a change in shape could provide an advantage, as drag force on the head is correspondingly reduced. The results of this study show that the theoretical drag experienced on the sperm head was lower at higher activation temperatures for warm acclimated but not cold acclimated males. All terms except head radius and speed are constant in Stokes' law (Eqn 3). Therefore, if D is reduced for warm acclimated males that have smaller heads (a), they must be swimming at a similar speed (U) to cold acclimated males. This suggests that warm acclimated males lack the power to generate the thrust required to take advantage of the reduced head drag. Therefore, the results support the idea that the warm acclimated males have a smaller power unit (mid-piece) as well as a smaller head. However, the changes in morphology (increased L F /A H ratio) appear to allow the spermatozoa of warm acclimated males to increase their swimming speed to the level achieved by cold acclimated male sperm. What drives the need for the morphological change is not clear, but the energy constraints associated with higher acclimation temperature may not have allowed for the production of spermatozoa morphologically similar to those of cold acclimated males. Further in-depth investigations into the effects of short- and long-term exposure to increased temperature on salmonid spermatozoa morphology as well as cytophysiology (e.g. ATP content) throughout the reproductive season are needed to confirm these effects.
An additional explanation for our findings, and a caveat of this study, is that the effects of intra-male variation in sperm characteristics could have masked a possible link between spermatozoa morphology and swimming speed. As highlighted by Fitzpatrick et al. (2010), the typically high levels of variability in spermatozoa morphology and motility parameters within individual samples can mask length-velocity relationships at the intraspecific level. Therefore, while a positive relationship between sperm head size to flagellum length ratio has been described in other externally fertilising species (Simpson et al., 2014), and may also exist in trout, any link is likely to have been weakened because variation within ejaculates was not accounted for (Simpson et al., 2014).
Conclusion
This study identified a change in the relative dimensions of salmonid spermatozoa in response to acclimation temperature. This change did not affect sperm motility, but had possible hydrodynamic consequences by affecting theoretical drag experienced by the moving cell. Currently, we do not know whether these findings are applicable to other teleost fish. Nevertheless, this study provides a foundation for future studies, and highlights the need to increase our limited understanding of the impacts of temperature across all aspects of migratory fish reproduction. Increasing knowledge of temperature driven trade-offs within and between each reproductive stage is essential if we are to maintain the abundance of migratory fish during the predicted changes to the global climate.
Experimental setup
In October 2015, 3-year-old male S. trutta were obtained from Dunsop Bridge Trout Farm Ltd. (Clitheroe, UK) and individually PIT tagged (Biomark, Inc., Boise, ID, USA) upon arrival. A previous study utilised the same individuals, and detailed housing conditions are described therein (Fenkes et al., 2017). Briefly, individuals were housed in equal numbers in two outdoor tanks, under natural photoperiod and gradually (on average 0.4°C day −1 over 17 and 22 days, respectively) declining water temperature to induce maturation. One tank was then maintained at a 'warm' experimental temperature of 13°C, and the other at a 'cold' experimental temperature of 8°C. Until they ceased feeding at the onset of the spawning season, the trout were offered commercial trout pellets (Skretting, Trouw Ltd., Northwich, UK) daily. Semen sampling for sperm swimming speed (data from Fenkes et al., 2017) and sperm morphological assessment (present study) was carried out after 4 weeks of differential temperature acclimation (8th and 9th December).
Semen sampling
The semen samples used in the present study were collected and utilised in a previous study (Fenkes et al., 2017) and the sample collection protocol is detailed there. Briefly, males were lightly sedated via immersion in a buffered tricaine-methanesulfonate (MS-222) solution. Anaesthetised males were removed from the anaesthetic bath, the urogenital/anal region was dried, the bladder and bowel were emptied, and semen was carefully expressed by applying gentle pressure to both sides of the ventral mid line. Semen was captured directly into clean Eppendorf tubes; uncontaminated semen samples were immediately sealed and placed into an ice-cooled container. Samples contaminated with water, urine or faeces were discarded. Fish were moved into an oxygenated recovery bath before being transferred back into their holding tanks upon full recovery. Semen samples from N=8 cold and N=9 warm acclimated males (after 4 weeks of differential temperature acclimation) were used for sperm motility assessments in a previous study (Fenkes et al., 2017). For N=8 cold and N=8 warm acclimated males, a subsample was preserved in 10% neutral buffered formalin at a dilution of 1 part semen to 1000 parts formalin for the subsequent sperm morphology measurements described here. For this current study, sperm swimming speed data (from Fenkes et al., 2017) were used only from those males where associated sperm morphology measurements could be obtained.
Ethical note
Experimental procedures were covered by a UK Home Office project licence (licence number 40/3584, licence holder H.A.S.) and were approved by the University of Manchester's ethical committee.
Deformed spermatozoa identification and count
Each formalin-preserved semen sample was grid-scanned and spermatozoa were viewed at 400× magnification using dark field microscopy (UB 200i Microscope series, Proiser -Projectes i Serveis R+D S.L., Paterna, ES), scaled to a stage micrometre, using XIMEA CamTool Version 4.13 (XIMEA GmbH, Münster, D) and ImageJ 1.49v (http://imagej.nih.gov/ij) software. The first ∼100 cells in view were counted (if the number of cells in the last counted field of view caused the total to exceed 100, all cells in that field of view were counted, increasing the total number of cells counted accordingly) and different types of morphological deformation were identified and categorised. Aggregations of spermatozoa where individual cells were indiscernible were excluded from the count.
Sperm morphology measurements
Sperm morphology measurements were subsequently obtained from sperm cells photographed at 400× magnification, as above. Cells to be photographed were chosen sequentially through a grid-scan of each mounted sample to avoid repeat recording. Deformed cells were excluded from measurements. Measurements of sperm head length (L H , µm; measured from flagellum insertion to head apex), sperm head width (W H , µm; measured centre-perpendicular to L H ) and flagellum length (L F , µm) were obtained for 40 cells per male. The mid-piece was not measured, as it was too small to distinguish from the head using light microscopy, as is the case in most fishes (Gage et al., 2002). Using the above measurements, sperm total length (L T , µm; L H +L F ) as well as head surface area (A H , µm 2 ) were calculated. Under the assumption that the sperm head approximates a prolate spheroid (where the polar radius > equatorial radius), head surface area is given as:

A H = 2πr e 2 + 2π(r e r p /e) sin −1 (e),

where r e is the equatorial radius of the spheroid (0.5×head width W H ), r p is the polar radius of the spheroid (0.5×head length L H ) and e is the ellipticity of the spheroid, given by:

e = √(1 − r e 2 /r p 2 )

(adapted from Humphries et al., 2008). As suggested by Humphries et al. (2008), flagellum length to head length ratio (L F /L H , dimensionless) and flagellum length to head surface area ratio (L F /A H , µm −1 ) were calculated to be used as morphological predictors of sperm swimming speed.
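As an illustration of these morphometric formulas, the prolate-spheroid head surface area and the L F /A H ratio can be computed as in the following sketch (a minimal Python version; the function names and example dimensions are our own, not from the study):

```python
import math

def head_surface_area(head_length_um, head_width_um):
    """Surface area A_H (um^2) of a sperm head modelled as a prolate
    spheroid with polar radius r_p = 0.5*L_H and equatorial radius
    r_e = 0.5*W_H (requires r_p > r_e, i.e. a strictly prolate head)."""
    r_p = 0.5 * head_length_um
    r_e = 0.5 * head_width_um
    e = math.sqrt(1.0 - (r_e / r_p) ** 2)  # ellipticity of the spheroid
    return 2.0 * math.pi * r_e ** 2 + 2.0 * math.pi * (r_e * r_p / e) * math.asin(e)

def flagellum_to_head_area_ratio(flagellum_length_um, head_length_um, head_width_um):
    """L_F / A_H (um^-1), the morphological predictor used in the text."""
    return flagellum_length_um / head_surface_area(head_length_um, head_width_um)
```

Note that the formula degenerates for a perfectly spherical head (e = 0), which a strictly prolate spheroid excludes.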
Sperm swimming speed assessment
As detailed in Fenkes et al. (2017), sperm swimming speed was measured as average path velocity (V AP , µm s −1 ) from video recordings of activated sperm obtained under 250× magnification, using an automated computer assisted sperm quality analysis plugin [CASA_automated plugin, www.ucs. mun.ca/~cfpurchase/CASA_automated-files.zip; see Purchase and Earle (2012) for further documentation] for ImageJ 1.49v (32-bit) (http://imagej. nih.gov/ij). Sperm from each male were activated with distilled water at both acclimation temperatures ('activation temperature'; 8°C and 13°C). Sperm swimming speed has been shown to be a reliable predictor of fertilisation capacity (e.g. curvilinear velocity in S. salar; Gage et al., 2004). Here, sperm swimming speed (V AP ) in each sample was recorded as an average of all sperm cells in the field of view every 2 s from 10 s after activation. Sperm swimming speeds recorded at 10 s after activation in both sperm activation temperature treatments (8°C and 13°C) of each sample (from cold and warm acclimated males) are used in this present study to assess the effects of warm acclimation and associated sperm morphology on sperm swimming speed.
Drag
In addition to the above measurements and derived variables, theoretical drag force [D, Newton (N); Eqn 3] on sperm heads was compared between acclimation temperature groups. Drag force was calculated according to Stokes' law (Dusenbery, 2009) as:

D = 6πµaU, (3)

where µ is the dynamic viscosity (Pa s) of the water with which sperm were activated, a is the radius of the head (here, 0.5×W H , m) and U is the flow velocity relative to the head (here, V AP , m s −1 ). Literature values for dynamic viscosity were obtained from Korson et al. (1969): for our 'cold' activation water temperature (8°C), µ is the average value measured by Korson et al. (1969) for water at 5°C and 10°C, while for our 'warm' temperature (13°C), we used the average µ measured for water at 10°C and 15°C. For the flow velocity U, V AP at 10 s post activation, measured at the respective activation temperatures (8°C or 13°C) by Fenkes et al. (2017) was used. As a result, theoretical values of drag on 40 sperm heads for each male were obtained at both activation water temperatures (i.e. 80 measurements in total per male).
Values were averaged across all 40 cells per male in each activation water temperature treatment and converted to pico Newtons ( pN) for use in further analyses.
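The drag computation described above can be sketched as follows (a minimal Python version; the viscosity values are approximate tabulated figures for pure water and the function name is ours, so treat the numbers as illustrative rather than the study's exact inputs):

```python
import math

# Approximate dynamic viscosity of pure water (Pa s); each activation
# temperature uses the mean of the bracketing tabulated values,
# mirroring the averaging described in the text.
MU_WATER = {5: 1.519e-3, 10: 1.307e-3, 15: 1.138e-3}
MU_COLD = 0.5 * (MU_WATER[5] + MU_WATER[10])   # ~8 C activation water
MU_WARM = 0.5 * (MU_WATER[10] + MU_WATER[15])  # ~13 C activation water

def stokes_drag_pN(head_width_um, v_ap_um_per_s, mu_pa_s):
    """Theoretical drag D = 6*pi*mu*a*U (Stokes' law) on a sperm head,
    with a = 0.5 * head width; inputs in um and um/s, output in pN."""
    a_m = 0.5 * head_width_um * 1e-6      # head radius, metres
    u_m_per_s = v_ap_um_per_s * 1e-6      # swimming speed, m/s
    d_newton = 6.0 * math.pi * mu_pa_s * a_m * u_m_per_s
    return d_newton * 1e12                # convert N -> pN
```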
Deformed spermatozoa
To assess whether deformed spermatozoa counts differed between acclimation temperature treatments, generalised linear models with binomial error distribution were performed using deformed spermatozoa count/total cell count for each sample as response and donor acclimation temperature (cold/warm) as independent variable (car package 2.1-3; Fox and Weisberg, 2011).
Sperm morphology
Mixed effects models were created to determine whether sperm morphology measurements differed between acclimation temperatures. Intra-male variation was taken into account by inclusion as a random effect in the models. Visual evaluation (quantile-quantile plots) of the residual distribution in the models for L F /L H ratio and L F /A H ratio gave no indication of significant deviations from normality and linear mixed effects models [lme4 package 1.1-12 (Bates et al., 2015) and lmerTest package 2.0-32 (https://CRAN.R-project.org/package=lmerTest)] were therefore fitted in these cases. However, quantile-quantile plots showed that the residuals in the models for the remaining measurements (head surface area A H , flagellum length L F and total length L T ) were not within confidence limits for normality. Generalised linear mixed effects models with penalised quasilikelihood and log-normal error family [nlme package 3.1-128 (http:// CRAN.R-project.org/package=nlme) and MASS package (Venables and Ripley, 2002)] were fitted in these cases.
Sperm morphology effects on swimming speed
To test whether sperm morphology could predict sperm swimming speed, linear models (car package 2.1-3; Fox and Weisberg, 2011) were performed with average path velocity at 10 s post activation (V AP , µm s −1 ; data from Fenkes et al., 2017) as continuous response variable, and L F /L H or L F /A H (averaged across 40 spermatozoa per male, because only averages of speed were measured) as independent variables. Because L F /A H differed between acclimation temperature groups (described above), acclimation temperature (cold/warm) was included as an additional independent variable in the respective models. Separate models were created for V AP measured at 8°C and 13°C, respectively. Visual evaluation (quantile-quantile plots) of the residual distribution in the models for V AP at 8°C gave no indication of significant deviations from normality and general linear models were fitted in those cases. However, quantile-quantile plots showed that the residuals in the models for V AP at 13°C were not within confidence limits for normality and generalised linear models were fitted in those cases.
Drag
To test whether drag on the sperm head (D, pN) differed between acclimation temperatures, a linear mixed effect model [lme4 package 1.1-12 (Bates et al., 2015) and lmerTest package 2.0-32 (https://CRAN.R-project.org/package=lmerTest)] was performed with drag force as the continuous response variable and male acclimation temperature (cold, 8°C and warm, 13°C) as well as activation temperature for sperm (cold, 8°C and warm, 13°C) as categorical independent variables. Male subject ID nested within acclimation temperature was included as a random effect term. Visual evaluation (quantile-quantile plots) of the residual distribution in the model gave no indication of significant deviations from normality. Least squares means [lmerTest package 2.0-32 (https://CRAN.R-project.org/package=lmerTest)] were calculated as post-hoc comparisons between factor levels.
"year": 2019,
"sha1": "bbc8b83fa067159a3ce33bf91be2c64513d2666b",
"oa_license": "CCBY",
"oa_url": "https://bio.biologists.org/content/biolopen/8/7/bio039461.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bbc8b83fa067159a3ce33bf91be2c64513d2666b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Combinatorial, Bakry-Émery, Ollivier's Ricci curvature notions and their motivation from Riemannian geometry
In this survey, we study three different notions of curvature that are defined on graphs, namely, combinatorial curvature, Bakry-Émery curvature, and Ollivier's Ricci curvature. For each curvature notion, the definition and its motivation from Riemannian geometry will be explained. Moreover, we bring together some global results and geometric concepts in Riemannian geometry that are related to curvature (e.g. Bonnet-Myers theorem, Laplacian operator, Lichnerowicz theorem, Cheeger constant), and then compare them to the discrete analogues in some (if not all) of the discrete curvature notions. The structure of this survey is as follows: the first chapter is dedicated to relevant background in Riemannian geometry. Each following chapter focusses on one of the discrete curvature notions. This survey is an MSc dissertation in Mathematical Sciences at Durham University.
Contents
Chapter 1
Background in Riemannian Geometry
In this chapter, we provide substantial background material from Riemannian geometry, which will prepare readers to compare with the discrete analogues on graphs in later chapters. First in Section 1.1, we introduce the Gauss-Bonnet theorem, Cartan-Hadamard theorem, and Cheeger constant, which are three examples of global concepts of manifolds that can also be illustrated as geometric features in graphs, as we will see in combinatorial curvature in Chapter 2. Next in Section 1.2, we consider linear operators on manifolds including gradient, divergence, Laplacian, and Hessian. They are ingredients in Bochner's formula, which is the main motivation for Bakry-Émery curvature in Chapter 3. In Section 1.3, the crucial operator, the Laplacian, and its smallest eigenvalue are investigated via the Lichnerowicz theorem. In Section 1.4, we state and prove the theorem of Bonnet-Myers. In Section 1.5, we explain the problem of finding the average distance between two balls, which motivates Ollivier's Ricci curvature in Chapter 4. Lastly in Section 1.6, we give examples of manifolds and graphs representing them, and then discuss their curvature in the different notions.
Gauss-Bonnet, Cartan-Hadamard, and Cheeger constant
The purpose of this section is to present theorems about curvature in Riemannian geometry, which will be compared to the discrete analogues in combinatorial curvature in Chapter 2. The content of this section is divided into two parts. In the first half, we introduce (without proof) Gauss-Bonnet theorem and Cartan-Hadamard theorem. In the second half, we give the definition of Cheeger isoperimetric constant (or in short, Cheeger constant), and give the statements and sketches of proof for another two theorems that are related to Cheeger constant.
Gauss-Bonnet and Cartan-Hadamard
Gauss-Bonnet theorem states that, for any closed surface (i.e. a compact two-dimensional manifold without boundary), its total curvature is equal to its Euler characteristic multiplied by 2π. A proof of this theorem can be found in e.g. [4, pp. 274-276].

Theorem 1.1 (Gauss-Bonnet). Let M be a closed surface. Then

∫_M K dA = 2πχ(M),

where K is Gaussian curvature, dA is the area element, and χ(M) is the Euler characteristic of M.
Euler's characteristic is a global topological invariant of a surface. In particular, if M is orientable then χ(M) = 2 − 2g, where g is the genus of M. For example, a two-dimensional sphere of radius r has Gaussian curvature equal to r −2 everywhere, and its surface area is 4πr 2 . Hence the total curvature is equal to ∫_{S 2 } K dA = 4π = 2πχ(S 2 ).
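The sphere computation above can also be checked numerically; the following sketch (our own illustration, not part of the survey) integrates K = 1/r 2 over the round sphere of radius r in spherical coordinates and recovers 2πχ(S 2 ) = 4π independently of r:

```python
import math

def total_curvature_sphere(r, n_theta=200):
    """Midpoint-rule integration of K = 1/r^2 over the round sphere of
    radius r, using the area element dA = r^2 sin(theta) dtheta dphi
    (the phi-integral contributes a constant factor of 2*pi)."""
    K = 1.0 / r ** 2
    dtheta = math.pi / n_theta
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        total += K * r ** 2 * math.sin(theta) * dtheta * 2.0 * math.pi
    return total
```

For any radius the result is approximately 4π, illustrating that the total curvature depends only on the topology.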
While the Gauss-Bonnet theorem concerns the total curvature of manifolds, many other theorems (e.g. Bonnet-Myers and Lichnerowicz) refer to properties of manifolds that have the same sign of curvature everywhere. Among those theorems, the Cartan-Hadamard theorem gives an implication when a manifold has non-positive sectional curvature everywhere. The statement of the theorem is given as follows, and a proof of the theorem can be found in [5, pp. 149-151].
Theorem 1.2 (Cartan-Hadamard). Let M n be a complete and simply connected Riemannian manifold (of dimension n) with sectional curvature K x (α) ≤ 0 for all x ∈ M and for all two-dimensional plane α ⊂ T x M . Then M is diffeomorphic to R n , and the exponential map exp x : T x M → M is diffeomorphism.
In words, the theorem implies the "infiniteness" of such a manifold, in the sense that every geodesic (starting from any point and going in any direction) can be extended infinitely.
Cheeger constant
In [7], J. Cheeger introduced a constant h of a manifold, representing an "isoperimetric ratio", and then proved an inequality relating this constant h to λ_1, the smallest nonzero eigenvalue of the Laplacian (see Section 1.3). The constant and the inequality were named after him as the Cheeger constant and Cheeger's inequality. As an advance notice, the two following theorems and their proofs involve the Laplacian operator (whose definition and details can be found in Sections 1.2 and 1.3). Some of the formulas are not explained in this survey, but will be referred to [6, 13, 18, 19].
Theorem 1.4 (Cheeger's Inequality). Let (M n , g) be a compact Riemannian manifold. Then

λ_1 ≥ h(M) 2 /4,

where λ_1 is the first nonzero eigenvalue of the Laplacian on M.
Proof. Suppose that M is compact. Let f be an eigenfunction corresponding to λ_1: ∆f + λ_1 f = 0, and partition M into three sets:

M_+ := {x ∈ M : f(x) > 0}, M_0 := {x ∈ M : f(x) = 0}, M_− := {x ∈ M : f(x) < 0}.

Assume that 0 is a regular value of f, that is, the preimage M_0 = f −1 (0) is an (n − 1)-dimensional submanifold of M (otherwise we can work with a function f + ε for arbitrarily small ε). Further assume vol(M_+) ≤ ½ vol(M) (otherwise we can work with the function −f).

Performing integration by parts (in other words, integrating the Product rule 1.10 and then applying the Divergence theorem), for any vector field X, we have

∫_{M_+} f div X + ∫_{M_+} ⟨grad f, X⟩ = 0,

because f vanishes on the boundary of M_+ (which is M_0). In particular, choose X = grad f; then the above equation can be read as

∫_{M_+} |grad f| 2 = −∫_{M_+} f ∆f = λ_1 ∫_{M_+} f 2 .

Apply the Cauchy-Schwarz inequality and use that f |grad f| = ½ |grad f 2 |:

∫_{M_+} |grad f 2 | = 2 ∫_{M_+} f |grad f| ≤ 2 (∫_{M_+} f 2 ) 1/2 (∫_{M_+} |grad f| 2 ) 1/2 = 2 √λ_1 ∫_{M_+} f 2 .

The rest is to prove that

∫_{M_+} |grad f 2 | ≥ h(M) ∫_{M_+} f 2 .

The co-area formula applied to the positive function f 2 gives

∫_{M_+} |grad f 2 | = ∫_0^∞ vol(∂H_t) dt,

where H_t := {x ∈ M_+ : f 2 (x) > t} is a submanifold of M (or an empty set), with a smooth (or empty) boundary ∂H_t = f −1 ( √t ) for almost every t (as long as √t is a regular value of f). Moreover, H_t ⊆ M_+ gives vol(H_t) ≤ vol(M_+) ≤ ½ vol(M), so by the definition of the Cheeger constant:

vol(∂H_t) ≥ h(M) vol(H_t)

holds for almost every t ≥ 0. Integration over t ∈ [0, ∞) finally yields

∫_{M_+} |grad f 2 | = ∫_0^∞ vol(∂H_t) dt ≥ h(M) ∫_0^∞ vol(H_t) dt = h(M) ∫_{M_+} f 2 .

Combining the displayed estimates gives 2 √λ_1 ≥ h(M), i.e. λ_1 ≥ h(M) 2 /4.

The next theorem asserts that the Cheeger constant is strictly positive for a manifold whose curvature is negative and bounded away from zero. The discrete analogue of this theorem can be found in Theorem 2.8.

Theorem 1.5. Suppose that a complete, simply connected manifold M has negative sectional curvature bounded above by −K_0 < 0 (hence M is non-compact, by Cartan-Hadamard). Then

h(M) ≥ (n − 1) √K_0 . (1.1)

Proof. Fix x_0 ∈ M and write d_{x_0}(x) := d(x_0, x). For a compact region Ω ⊂ M with smooth boundary, the divergence theorem gives

∫_Ω ∆d_{x_0} = ∫_{∂Ω} ⟨grad d_{x_0}, ν⟩ ≤ vol(∂Ω),

where the above inequality is due to |grad d_{x_0}| ≤ 1. In order to achieve (1.1), it suffices to show that ∆d_{x_0} ≥ (n − 1) √K_0 . In polar coordinates (r, φ) around x_0, the Laplacian of a function f = f(r, φ) can be written as

∆f = ∂ 2 f/∂r 2 + H(r, φ) ∂f/∂r + ∆_{S_r(x_0)} f,

where H(r, φ) is the mean curvature, and ∆_{S_r(x_0)} is the Laplacian restricted to S_r(x_0), the sphere of radius r centered at x_0. The derivation of this formula is analogous to the one given in [18, Equation (2)].
Laplacian operator and Bochner's formula
In this section, we start with the definitions and properties of operators on Riemmanian manifolds, namely gradient, divergence, Laplacian, and Hessian. Then we state (without proof) Bochner's formula, which serves to be an essential background for Bakry-Émery curvature in Chapter 3.
Definition 1.6 (Gradient, divergence and Laplacian). The gradient operator grad : C ∞ (M) → X(M) maps a smooth real function f to a smooth vector field grad f such that its evaluation at any point x ∈ M is defined by the inner product:

⟨grad f(x), w⟩ := w(f)

for every w ∈ T_x M. Here w(f) is the differentiation of f in the direction of the vector w.
The divergence operator div : X(M) → C ∞ (M) maps a smooth vector field X to a smooth real function div X defined at each point x ∈ M by

div X(x) := tr(v ↦ ∇_v X),

where the mapping v ↦ ∇_v X is considered from the tangent space T_x M onto itself, and ∇ is the Levi-Civita connection. The Laplacian operator ∆ : C ∞ (M) → C ∞ (M) is then defined by ∆f := div(grad f).
Proposition 1.7. In local coordinates,

grad f = Σ_{i,j} g^{ij} (∂_j f) ∂_i , (1.3)

div X = (1/√(det g)) Σ_i ∂_i (√(det g) a_i) for X = Σ_i a_i ∂_i . (1.4)

Since most of the time, functions are evaluated at a fixed point x, without ambiguity we may omit the terms x in the writing. Moreover, we write ∂_i := ∂/∂x_i.

Proof. Write the vector field grad f(x) = Σ_i a_i(x) ∂/∂x_i |_x with respect to local coordinates (or in short, grad f = Σ_i a_i ∂_i).

By definition of gradient, we have

∂_k f = ⟨grad f, ∂_k⟩ = Σ_i a_i g_{ik} .

It follows that, for a fixed k,

Σ_k (∂_k f) g^{kj} = Σ_{i,k} a_i g_{ik} g^{kj} = Σ_i a_i δ_{ij} = a_j ,

where δ_{ij} is the Kronecker delta. The equation (1.3) immediately follows.

In the definition of div X, the mapping v ↦ ∇_v X can be represented as a matrix, whose trace is independent of the choice of frame E_i's (not needed to be orthonormal).

In particular, choosing E_i = ∂_i for all i, we have

div X = Σ_i (∂_i a_i + Σ_j a_j Γ^i_{ij}) .

On the other hand, for each fixed j, we have

Σ_i Γ^i_{ij} = ∂_j log √(det g) = (1/√(det g)) ∂_j √(det g) ,

where the derivative term ∂_j det g can be calculated by Jacobi's formula as

∂_j det g = det g · tr(g −1 ∂_j g) .

Combining the last three displays yields (1.4). The explicit calculation of ∆ in local coordinates follows immediately from Proposition 1.7.
Corollary 1.8. In local coordinates,

∆f = (1/√(det g)) Σ_{i,j} ∂_i (√(det g) g^{ij} ∂_j f) .

Remark 1.9. For example, in R n with the Euclidean metric, the Laplacian ∆ is

∆f = Σ_i ∂ 2 f/∂x_i 2 .

The following proposition is the product rule of gradient, divergence, and Laplacian.
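The Euclidean Laplacian of Remark 1.9 is easy to sanity-check numerically; the sketch below (our own illustration, not part of the survey) approximates it by central differences and applies it to f(x, y) = x 2 + 3y 2 , for which ∆f = 2 + 6 = 8 everywhere:

```python
def laplacian_2d(f, x, y, h=1e-3):
    """Central-difference approximation of the Euclidean Laplacian
    (d^2f/dx^2 + d^2f/dy^2) of a scalar function f at the point (x, y)."""
    return ((f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h ** 2
            + (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h ** 2)

f = lambda x, y: x ** 2 + 3.0 * y ** 2  # Delta f = 2 + 6 = 8 everywhere
```

Central differences are exact (up to round-off) for quadratic functions, so the approximation returns 8 at every point.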
Proposition 1.10 (Product rule). For f, g ∈ C ∞ (M) and X ∈ X(M),

(a) grad(fg) = f grad g + g grad f,
(b) div(fX) = f div X + ⟨grad f, X⟩,
(c) ∆(fg) = f ∆g + 2⟨grad f, grad g⟩ + g ∆f.

Proof. By the definition of gradient and divergence, the product rule in parts (a) and (b) is induced from the product rule of the directional derivative and the product rule of the Levi-Civita connection, respectively. For part (c), we have

∆(fg) = div(grad(fg)) = div(f grad g) + div(g grad f) = f ∆g + ⟨grad f, grad g⟩ + g ∆f + ⟨grad g, grad f⟩ = f ∆g + 2⟨grad f, grad g⟩ + g ∆f.

Definition 1.11 (Hessian). The Hessian of f ∈ C ∞ (M) is the bilinear form defined by

Hess f(X, Y) := ⟨∇_X grad f, Y⟩ for X, Y ∈ X(M).

A fundamental property of the Hessian is symmetry:

Lemma 1.12. Hess f(X, Y) = Hess f(Y, X) for all X, Y ∈ X(M).

Proof. We have

Hess f(X, Y) = ⟨∇_X grad f, Y⟩ = X⟨grad f, Y⟩ − ⟨grad f, ∇_X Y⟩ = X(Y(f)) − (∇_X Y)(f),

where in the second line of equations, we use the metric property of ∇:

X⟨U, V⟩ = ⟨∇_X U, V⟩ + ⟨U, ∇_X V⟩.

Similarly, we have

Hess f(Y, X) = Y(X(f)) − (∇_Y X)(f),

and therefore

Hess f(X, Y) − Hess f(Y, X) = X(Y(f)) − Y(X(f)) − [X, Y](f) = 0,

where [X, Y] is the Lie bracket of vector fields, and the second line of equations is due to the torsion-freeness of ∇:

∇_X Y − ∇_Y X = [X, Y].

The Hessian tensor can also be represented by a matrix A = [a_{ij}] w.r.t. an arbitrary orthonormal frame E_1, ..., E_n: a_{ij} := Hess f(E_i, E_j). Moreover, the norm ||Hess f|| is defined as in the Hilbert-Schmidt norm:

||Hess f|| 2 := Σ_{i,j} a_{ij} 2 ,

which is independent of the choice of orthonormal frame E_i's.
Proposition 1.13. The following two relations hold between Hessian and Laplacian.
(a) tr(Hess f) = ∆f,
(b) ||Hess f|| 2 ≥ (1/n)(∆f) 2 .

Proof. Part (a) follows directly from definitions:

tr(Hess f) = Σ_i Hess f(E_i, E_i) = Σ_i ⟨∇_{E_i} grad f, E_i⟩ = div(grad f) = ∆f.

For part (b), we apply the Cauchy-Schwarz inequality to part (a):

(∆f) 2 = (Σ_i a_{ii}) 2 ≤ n Σ_i a_{ii} 2 ≤ n Σ_{i,j} a_{ij} 2 = n ||Hess f|| 2 .

We are now ready for the statement of Bochner's formula, an equation that merges the defined operators together and connects to Ricci curvature. This formula is a fundamental motivation of the Bakry-Émery curvature notion introduced in Chapter 3. We omit the proof of the formula; see [13, Proposition 4.15] for details.
Theorem 1.14 (Bochner's formula). For any smooth function f ∈ C ∞ (M), the identity

½ ∆|grad f| 2 = ||Hess f|| 2 + ⟨grad f, grad ∆f⟩ + Ric(grad f)

holds pointwise on M.
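As a quick consistency check (our own computation, not part of the survey), in flat R n we have Ric ≡ 0, and Bochner's formula ½ ∆|grad f| 2 = ||Hess f|| 2 + ⟨grad f, grad ∆f⟩ + Ric(grad f) reduces to an identity that can be verified directly in coordinates, using that partial derivatives commute:

```latex
\tfrac12\,\Delta|\nabla f|^2
  = \tfrac12 \sum_{i}\partial_i\partial_i\sum_{j}(\partial_j f)^2
  = \sum_{i,j}\Big((\partial_i\partial_j f)^2
      + (\partial_j f)\,\partial_j(\partial_i\partial_i f)\Big)
  = \|\operatorname{Hess} f\|^2 + \langle\nabla f,\nabla\Delta f\rangle .
```

On a curved manifold, commuting the third derivatives produces exactly the extra Ricci term.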
Eigenvalues of Laplacian and Lichnerowicz theorem
Let (M n , g) be a compact connected Riemannian manifold. Eigenvalues of the Laplacian operator on M are real numbers λ such that there exists a nontrivial solution f ∈ C 2 (M) (i.e. twice continuously differentiable) to the system of equations

∆f + λf = 0 in M, f = 0 on ∂M.

Such a function f is called an eigenfunction corresponding to λ. In case M is a closed manifold, the condition f = 0 on ∂M may be removed.
The eigenvalues λ of the Laplacian are known to be real, positive, and arrangeable in an increasing order (see [6, pp. 8]):

λ_1 ≤ λ_2 ≤ λ_3 ≤ ⋯ → ∞.

In Lichnerowicz's theorem, the first (i.e. smallest) nonzero eigenvalue λ_1 is estimated from below, under an assumption that the Ricci curvature is strictly positive and bounded away from zero.

Theorem 1.15 (Lichnerowicz). Let (M n , g) be a compact connected Riemannian manifold whose Ricci curvature satisfies Ric_x(v) ≥ K > 0 for all x ∈ M and all unit vectors v ∈ T_x M. Then

λ_1 ≥ (n/(n − 1)) K.

Here we prove it in the special case where M is a closed manifold. A proof in the general case where M is compact can be referred to e.g. [13, Theorem 4.70].
Proof. Consider an eigenfunction f satisfying ∆f + λf = 0. Upon scalar multiplication of f, we may further assume that

∫_M f 2 = 1.

The curvature assumption can also be expressed as Ric_x(v) ≥ K|v| 2 for all v ∈ T_x M. By applying Proposition 1.13 and this curvature assumption to Bochner's formula above, we obtain

½ ∆|grad f| 2 ≥ (1/n)(∆f) 2 + ⟨grad f, grad ∆f⟩ + K|grad f| 2 = (λ 2 /n) f 2 − λ|grad f| 2 + K|grad f| 2 ,

using grad ∆f = −λ grad f. Moreover, the product rule gives

∆f 2 = 2f ∆f + 2|grad f| 2 = −2λf 2 + 2|grad f| 2 .

Integrate the above inequality over M, and use the fact that ∫_M ∆|grad f| 2 = 0 and ∫_M ∆f 2 = 0 by the Divergence Theorem (since M has no boundary). The latter gives ∫_M |grad f| 2 = λ ∫_M f 2 , and hence

0 ≥ (λ 2 /n) ∫_M f 2 + (K − λ) λ ∫_M f 2 , that is, λ 2 (1 − 1/n) ≥ Kλ.

In particular, for λ_1 > 0, we have λ_1 ≥ (n/(n − 1)) K as desired.
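For a concrete example (ours, not the survey's), the Lichnerowicz bound λ_1 ≥ (n/(n − 1)) K is sharp on the round unit sphere S n : there Ric_x(v) = (n − 1)|v| 2 , so taking K = n − 1 gives

```latex
\lambda_1 \;\ge\; \frac{n}{n-1}\,(n-1) \;=\; n ,
```

and this is attained: the restrictions of the linear coordinate functions x_i of R n+1 to S n satisfy ∆x_i + n x_i = 0, so λ_1(S n ) = n exactly.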
Bonnet-Myers theorem
Bonnet-Myers theorem is a classical theorem in Riemannian geometry. It states that a connected and complete manifold with Ricci curvature bounded below by a positive number must be compact. It is the main theorem of our paper that we will discuss about in all of the discrete curvature notions in later chapters.
Theorem 1.16 (Bonnet-Myers). Let (M n , g) be a connected and complete Riemannian manifold. Suppose there is a constant r > 0 such that the Ricci curvature satisfies

Ric_x(v) ≥ (n − 1)/r 2 for all x ∈ M and all unit vectors v ∈ T_x M.

Then M is compact and diam(M) ≤ πr.

One way to prove this theorem is to apply the second variation formula of the energy, as presented by Carmo [5, pp. 191-201].
Proof. Let p, q ∈ M be two arbitrary points in M. By the Hopf-Rinow theorem (see [5, pp. 145-148]), there exists a minimal unit-speed geodesic c : [0, a] → M joining p and q, that is, c(0) = p, c(a) = q, |c′(t)| = 1 for all t, and d(p, q) = ℓ(c) = a. It suffices to prove that a ≤ πr, because we can then conclude diam(M) ≤ πr and the compactness of M (from its being complete and bounded).
First, we will construct proper variations of c as follows. Choose unit vectors e_1, e_2, ..., e_{n−1} in T_p(M) such that they, together with c′(0), form an orthonormal basis of T_p M. Let V_i(t) be the parallel transport of e_i along c, and define

X_i(t) := sin(πt/a) V_i(t), F_i(s, t) := exp_{c(t)}(s X_i(t)), (s, t) ∈ (−ε, ε) × [0, a].

Since X_i(0) = X_i(a) = 0, it means that for every s ∈ (−ε, ε) the curve F_i(s, −) has the same endpoints as the curve c. In other words, F_i is a proper variation of c.

The energy for the curve F_i(s, −) is defined by

E_i(s) := ∫_0^a |∂F_i/∂t (s, t)| 2 dt.

The second variation formula of energy states that

½ E_i″(0) = ∫_0^a (|X_i′(t)| 2 − K(t)|X_i(t)| 2 ) dt = ∫_0^a ((π 2 /a 2 ) cos 2 (πt/a) − K(t) sin 2 (πt/a)) dt ≥ 0, (1.9)

where K(t) := K(c′(t), V_i(t)) is the sectional curvature of the two-dimensional plane spanned by {c′(t), V_i(t)}, and the inequality holds because c is minimal.

Summing the equation (1.9) over index i and using the fact that

Σ_{i=1}^{n−1} K(c′(t), V_i(t)) = Ric_{c(t)}(c′(t)) ≥ (n − 1)/r 2 ,

we then have

0 ≤ Σ_{i=1}^{n−1} ½ E_i″(0) ≤ (n − 1) ∫_0^a ((π 2 /a 2 ) cos 2 (πt/a) − (1/r 2 ) sin 2 (πt/a)) dt = (n − 1)(a/2)(π 2 /a 2 − 1/r 2 ). (1.10)

The relation (1.10) then implies a ≤ πr as desired.
Remark 1.17. The diameter bound diam(M) ≤ πr is sharp for the round sphere S n _r := {x ∈ R n+1 : ‖x‖ = r}. More importantly, the only manifolds for which the bound is sharp are the ones that are isometric to the round sphere S n _r; this result is known as Cheng's rigidity result (see [8]).
Average distance between two balls
In [23], Ollivier suggests that for two points x and y of a manifold, the average distance between small balls centered at x and at y can be greater or smaller than the distance d(x, y) depending on the Ricci curvature. This statement can be explained more precisely as follows.
Let (M n , g) be a connected and complete manifold. Let x and y be two points in M. By the Hopf-Rinow theorem, completeness of M implies that there exists a minimal geodesic c joining x and y. Further assume that c is unit speed, so c can be parametrized as c : [0, δ] → M with c(0) = x and c(δ) = y, where δ := d(x, y), and set v := c′(0) ∈ S_x M. Let B_r(x) := {exp_x(εw) : ε ∈ [0, r), w ∈ S_x M} be the ball of a small radius r around x, and define B_r(y) similarly.

Consider a point x′ = exp_x(εw) ∈ B_r(x), which means x′ can be reached by the unit-speed geodesic c_w, starting from x with the initial unit velocity w ∈ S_x M and travelling for a period of time ε. Consider w′ := P_c^δ(w), the parallel transport of w along the curve c for a period of time δ. Thus w′ ∈ S_y M. Then y′ ∈ B_r(y) is a corresponding point of x′ ∈ B_r(x), given by y′ := exp_y(εw′). See Figure 1.1. The first task is to estimate the distance d(x′, y′), and the second task is to derive the average distance d(B_r(x), B_r(y)) by averaging over all x′ ∈ B_r(x).

Proposition 1.18. In the above setting (with the further assumption that w ⊥ v), the distance between x′ and y′ is estimated by

d(x′, y′) ≤ δ (1 − (ε 2 /2) K(v, w) + O(ε 3 + ε 2 δ)),

where K(v, w) is the sectional curvature of the two-dimensional plane spanned by {v, w}.
Proof. For s ∈ [0, δ], let v s := d ds c(s) be the velocity of the curve c at time s, and let w s := P s c (w) be the parallel transport of w along the curve c for a period of time s. Therefore, v s , w s ∈ S c(s) M for all s ∈ [0, δ], and v s , w s is constant in s ∈ [0, δ]. Moreover, v 0 = v, w 0 = w, and w δ = w .
i.e. F(s, −) = c_s is a geodesic for every s ∈ [0, δ]. For a fixed s_0, let J_{s_0} be the variational vector field associated to the variation F of the geodesic c_{s_0}, that is,

J_{s_0}(t) := ∂F/∂s (s_0, t).

Hence, J_{s_0} is a Jacobi field along c_{s_0} and satisfies the Jacobi equation. We aim to compute the length of γ, which then becomes an upper bound for the distance d(x′, y′). The terms f(0), f′(0), and f″(0) can be calculated in turn, where the last equality holds true by a linear approximation of a continuous function, which yields the proposition.
In fact, the inequality sign in the proposition can be replaced by the equality, as ε → 0 and δ → 0 (see Proposition 6 in [23]). Moreover, the averaging procedure as discussed in [23, pp. 58] yields the average distance between B_r(x) and B_r(y):

d(B_r(x), B_r(y)) = δ (1 − (r 2 /(2(n + 2))) Ric(v) + O(r 3 + r 2 δ)). (1.11)
Examples of graphs and manifolds
We introduce three examples of graphs that represent different classes of manifolds, and calculate their curvature in the Bakry-Émery and Ollivier's Ricci notions. The definition of graphs can be found at the beginning of Section 2.1, and details of these two curvature notions are provided in Chapters 3 and 4. We then verify that the nature of the curvature of such graphs corresponds to the manifolds they represent. Here, the curvature calculation is performed in the Graph Curvature Calculator written by Stagg and Cushing (see [10] and the website http://www.mas.ncl.ac.uk/graph-curvature/), in the setting of "normalized laplacian (with ∞ dimension)" for Bakry-Émery curvature, and "Lin-Lu-Yau" for Ollivier's Ricci curvature. Example 1.20. The antitree graph is the infinite graph constructed by placing complete graphs K_n, n = 1, 2, 3, ... (in an increasing order of n) and connecting every vertex of K_i to every vertex of K_{i+1} for all i ∈ N. The antitree graph represents an (elliptic) paraboloid. A paraboloid is a manifold with positive curvature everywhere, but its curvature approaches zero at points further away from the paraboloid's vertex. It is good to note that the Bonnet-Myers theorem does not apply, and a paraboloid is indeed non-compact. As shown in Figure 1.3, the curvature of the antitree is calculated to be {0.5, 0.212, 0.092, 0.049, ...} in Bakry-Émery curvature and {0.6, 0.15, 0.068, 0.039, ...} in Ollivier's Ricci curvature. This calculation provides evidence that the antitree is an infinite graph whose curvature also approaches zero.
A precise formula to calculate the curvature of a generalized family of antitrees can be found in [11]. A dumbbell graph is a graph obtained by connecting two complete graphs K_n and K_m with a single edge. Such an edge represents a "bottleneck" of a manifold. In general, a bottleneck of a manifold is negatively curved (i.e. in a saddle shape), and as expected, a dumbbell graph also has negative curvature around its bottleneck, as shown in Figure 1.

Chapter 2

Combinatorial Curvature

The idea behind the Gauss-Bonnet Theorem comes from a relation between the sum of interior angles of a triangle (formed by three geodesics) on a surface and the total curvature inside that triangle (see [14]). When a surface is triangulated (i.e. partitioned into small triangles, or polygons), it resembles a planar graph. Gaussian curvature, which describes angles on surfaces, is translated into combinatorial curvature, which describes "angles" in planar graphs. Hence these two curvature notions describe the geometry of surfaces very similarly.
Planar tessellations
We shall start with the definition of graphs. A graph G, written as G = (V, E), consists of a set V of elements called vertices (singular: vertex), and a set E whose elements are edges, each of which connects a pair of vertices (called the endpoints of the edge). Throughout this paper, we assume graphs to be undirected, which means edges have no direction, and to be simple, which means they contain no loop (i.e. an edge whose endpoints are the same vertex) and no multiple edges (i.e. more than one edge sharing the same pair of endpoints). When vertices u and v are connected by one (and only one) edge e in E, we may say that u is adjacent to v (written as u ∼ v, or u ∼_e v to name the edge). For a vertex v ∈ V, the degree of v, denoted by d_v, is the number of vertices that are adjacent to v. A (finite) path is a sequence of (finitely many) edges which connect a sequence of distinct vertices (except possibly the first and the last): v_1 ∼ v_2 ∼ ⋯ ∼ v_{k+1}. The length of a path is the number of edges in its sequence. For two vertices u and v, the combinatorial distance function d(u, v) is defined to be the length of a shortest path connecting u and v. By convention, set d(u, u) = 0 for all vertices u, and set d(u, v) = ∞ if there is no path connecting u and v. Moreover, a graph is said to be connected if, for every pair of vertices u and v, there exists a path connecting u and v.
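The combinatorial distance just defined can be computed by breadth-first search; the sketch below (our own illustration, with an adjacency-dict encoding that is not from the survey) returns the length of a shortest path, 0 for d(u, u), and infinity for disconnected pairs:

```python
from collections import deque

def combinatorial_distance(adj, u, v):
    """Shortest-path length between u and v in an undirected simple graph
    given as {vertex: list of adjacent vertices}; inf if no path exists."""
    if u == v:
        return 0
    dist = {u: 0}
    queue = deque([u])
    while queue:
        w = queue.popleft()
        for nbr in adj[w]:
            if nbr not in dist:          # first visit = shortest distance
                dist[nbr] = dist[w] + 1
                if nbr == v:
                    return dist[nbr]
                queue.append(nbr)
    return float("inf")                  # v unreachable from u
```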
In the setup of combinatorial curvature, graphs are required to be planar, so that the notion of faces can be introduced. A planar graph is a graph G = (V, E) that can be embedded in R² without self-intersecting edges. The union of edges ⋃_{e∈E} e, when realized in R², divides the plane R² into connected components. The closure of each component of R² \ ⋃_{e∈E} e is called a face. Let F be the set of all faces, so we may consider it as an additional structure of a planar graph G: G = (V, E, F). We further assume graphs to be locally finite. A planar graph G is locally finite if every point of R² has an open neighborhood intersecting only finitely many faces of G.
Local finiteness prevents graphs from clustering in arbitrarily small area.
In [2], O. Baues and N. Peyerimhoff define conditions for locally finite planar graphs to be tessellations as follows.
Definition 2.1 (Tessellation). A connected and locally finite planar graph G is a planar tessellation, or just tessellation, if it satisfies the following conditions.
(i) Every edge is contained in exactly two different faces.
(ii) Every bounded face is a polygon: it is homeomorphic to the closed disk D, and its boundary is a simple cycle (i.e. a finite path in which the first and the last vertices coincide). The edges of the cycle are called sides of the polygon.
(iii) The intersection of any two distinct faces are either an empty set, or a vertex, or an edge.
Condition (iii) suggests a convexity property for the polygons. Figure 2.1 shows two examples where condition (iii) fails.
Remark 2.2. There are two cases of planar tessellations in which we are interested. The first is an infinite tessellation: it contains infinitely many faces, and every face is bounded. The second is a finite tessellation: it contains exactly one unbounded face, which is homeomorphic to R² \ D and whose boundary is a simple cycle; such a tessellation may equivalently be viewed as a tessellation of the sphere S². For a face f ∈ F of a tessellation, let d_f denote its degree: the number of vertices (or, equivalently, the number of sides) of the boundary of the polygon f. The conditions on tessellations imply that d_v ≥ 3 for every vertex v and d_f ≥ 3 for every face f. Two combinatorial curvature notions are defined as follows.
For a vertex v and a face f containing v, the corner curvature is defined as
κ(v, f) = 1/d_v + 1/d_f − 1/2,
and the (vertex) curvature
κ(v) = Σ_{f : v∈f} κ(v, f)
is summed over all faces f having v as their vertex.
In fact, for a fixed vertex v, the number of faces f incident to v is equal to its degree d_v. Hence, the (vertex) curvature can be written in another way as
κ(v) = 1 − d_v/2 + Σ_{f : v∈f} 1/d_f.
The motivation behind this definition of curvature is the "angular defect", which can be explained as follows. If each face f were to be realized as a regular polygon of equal side length, the inner angle of the polygon f would be (1 − 2/d_f)π, and the sum of the angles of all faces f at the vertex v would then be
Σ_{f : v∈f} (1 − 2/d_f)π = 2π(1 − κ(v)).
If κ(v) < 0, then the sum of the angles at v is more than 2π, which means that these polygonal faces around v form a saddle-shaped surface around v. On the other hand, when κ(v) > 0, the sum of the angles at v is less than 2π, and therefore the point v behaves like an elliptic point. In other words, the sign of κ(v) (negative/zero/positive) corresponds to the geometry of the surface at the point v (hyperbolic/euclidean/spherical).
Combinatorial Gauss-Bonnet and Cartan-Hadamard
In Riemannian geometry, the Gauss-Bonnet theorem states that for a closed surface M, the total curvature of M is related to its Euler characteristic by the formula (see Section 1.1)
∫_M K dA = 2π χ(M).
In the case when G is a finite planar tessellation, or equivalently a finite tessellation of S² (see Remark 2.2), the Euler characteristic of G is given by χ(G) = χ(S²) = 2. The Gauss-Bonnet theorem has the following discrete analogue for a finite planar tessellation:
Σ_{v∈V} κ(v) = χ(G) = 2.
Proof.
We use the fact that Σ_{v∈V} d_v = 2|E|, since each edge is counted twice in the sum. Moreover, the order of the double summation is interchangeable since the sets V and F are finite. Lastly, |V| − |E| + |F| = 2 is Euler's characteristic formula applied to a finite connected planar graph.
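The curvature formula and the Gauss-Bonnet identity can be checked on a concrete finite tessellation. A minimal sketch in Python using exact rational arithmetic; the cube (a tessellation of S² in which every vertex has degree 3 and meets three quadrilateral faces) serves as the example.

```python
from fractions import Fraction

def vertex_curvature(d_v, face_degrees):
    """kappa(v) = 1 - d_v/2 + sum over incident faces f of 1/d_f."""
    return 1 - Fraction(d_v, 2) + sum(Fraction(1, df) for df in face_degrees)

# Cube: 8 vertices, each of degree 3 with three incident squares.
kappa = vertex_curvature(3, [4, 4, 4])
print(kappa)      # 1/4
print(8 * kappa)  # 2, confirming the Gauss-Bonnet sum over all vertices

# A vertex meeting one 7-gon and two squares (degree 3):
print(vertex_curvature(3, [7, 4, 4]))  # 1/7
```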
Next, we investigate graphs that have the same sign of curvatures everywhere. Let us start with non-positively curved graphs.
Corollary 2.5. A tessellation that has non-positive curvature at every vertex must be infinite.
Proof. Follows immediately from Gauss-Bonnet formula.
The next theorem is a main result of Baues and Peyerimhoff's paper [2, Theorem 1], and is considered a discrete analogue of the Cartan-Hadamard theorem in Riemannian geometry. We omit the proof of this theorem.
Theorem 2.6 (Combinatorial Cartan-Hadamard). Let G = (V, E, F) be a tessellation with non-positive corner curvature everywhere. For a fixed vertex v₀ ∈ V, define the cut locus of v₀ to be
Cut(v₀) := {x ∈ V : the function d_{v₀}(·) attains a local maximum at x}.
Then Cut(v₀) = ∅. In words, the theorem asserts that, when using any vertex v₀ as a base point, there exists no vertex x where the distance function d_{v₀}(x) := d(v₀, x) attains a local maximum. Equivalently, it means that every geodesic (starting at any v₀) can be extended indefinitely, as similarly stated in the theorem of Cartan-Hadamard (see Theorem 1.2).
Cheeger constant and isoperimetric inequality on graphs
In Section 1.1, we learned that a simply connected and complete surface M with negative (sectional) curvature uniformly bounded above by −K₀ < 0 (hence M is non-compact, by the Cartan-Hadamard theorem) satisfies the isoperimetric inequality
length(∂H)/area(H) ≥ √K₀
for all compact surfaces H ⊂ M with boundary ∂H (see Theorem 1.5).
In graphs, the Cheeger constant can be defined and the isoperimetric inequality can be read analogously, as in the following definition and theorem. For a finite subset W ⊆ V, write vol(W) := Σ_{w∈W} d_w for its volume and let ∂_E W denote its edge boundary, the set of edges with exactly one endpoint in W. The Cheeger constant is then defined to be
h(G) := inf_W |∂_E W|/vol(W),
where the infimum is taken over all finite subsets W ⊂ V (with the restriction |W| ≤ |V|/2 when G is finite).

Theorem (Isoperimetric inequality). Let G = (V, E, F) be a tessellation whose corner curvatures satisfy κ(v, f) ≤ −K₀ < 0 everywhere. Then h(G) ≥ 2K₀.

Proof. First of all, G is infinite, by Corollary 2.5. For any finite subset W ⊆ V, let G_W = (W, E_W) denote the finite subgraph of G induced by W, where E_W ⊆ E is the set of all edges with both endpoints in W. As a subgraph of a planar graph, G_W is also planar, and hence induces a set of faces, namely F_W. It is not always true that F_W ⊆ F, in particular if the tessellation is infinite.
This proof involves two parts. In Part 1, given any finite subset W ⊆ V, we construct a finite subset whose induced subgraph is a tessellation and whose isoperimetric ratio is at most that of W; in Part 2, we bound the isoperimetric ratio of such subsets from below. Decompose the induced subgraph G_W into connected components with vertex sets W₁, ..., W_k. Without loss of generality, assume that W₁ has the minimum isoperimetric ratio:
|∂_E W₁|/vol(W₁) = min_i |∂_E W_i|/vol(W_i).
It follows that |∂_E W|/vol(W) ≥ |∂_E W₁|/vol(W₁).
Next, construct a set W̄ ⊆ V by adding to the set W₁ all vertices v ∈ V (if they exist) such that v lies in U, the union of all bounded faces of G_{W₁}. Now consider the induced subgraph G_W̄ = (W̄, E_W̄) with its set of faces F_W̄. Geometrically, the difference between the graphs G_{W₁} and G_W̄ is that G_{W₁} was connected but may not have been a tessellation, whereas G_W̄ is "simply connected" and is a tessellation. Figure 2.2 shows an example where G_{W₁} has a non-polygonal face but G_W̄ has nicely tessellating faces. Note that |∂_E W̄| ≤ |∂_E W₁| and vol(W̄) ≥ vol(W₁). Each bounded face of G_W̄ has no vertex v ∈ V in its interior, because otherwise v would have been included in W̄ in the construction step. In other words, every bounded face of G_W̄ is also a face of G. (The unbounded face of G_W̄ cannot also be a face of G, since then G would coincide with G_W̄; this is impossible, since G_W̄ is finite but G is infinite.) Therefore,
|∂_E W̄|/vol(W̄) ≤ |∂_E W₁|/vol(W₁) ≤ |∂_E W|/vol(W).
Part 2
The assumption on κ(v, f) implies that for any finite W̄,
Σ_{v∈W̄} Σ_{f : v∈f} κ(v, f) ≤ −K₀ Σ_{v∈W̄} d_v = −K₀ vol(W̄),
since each vertex v lies in exactly d_v corners. Note also that Σ_{v∈W̄} d_v = 2|E_W̄| + |∂_E W̄|, because each edge in E_W̄ has both endpoints in W̄ and each edge in ∂_E W̄ has exactly one endpoint in W̄. Moreover, Σ_{v∈W̄} Σ_{f∈F_W̄∩F : v∈f} 1/d_f = |F_W̄ ∩ F|, since we restrict the sum to faces f ∈ F_W̄ ∩ F, each of which is a polygon whose vertices all lie in W̄.
Combining these identities with the curvature assumption, and applying Euler's formula |W̄| − |E_W̄| + |F_W̄| = 2 for the finite connected planar graph G_W̄, we obtain |∂_E W̄|/vol(W̄) ≥ 2K₀. In particular, for our choice of W̄ ⊂ V from Part 1, this gives |∂_E W|/vol(W) ≥ |∂_E W̄|/vol(W̄) ≥ 2K₀, as desired.
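For small finite graphs, the isoperimetric ratio appearing in the definition of the Cheeger constant can be computed by brute force over all subsets. A sketch (plain Python; the half-volume restriction is the usual convention for finite graphs, whereas the theorem above concerns infinite tessellations):

```python
from itertools import combinations
from fractions import Fraction

def edge_boundary(adj, W):
    """Number of edges with exactly one endpoint in W."""
    W = set(W)
    return sum(1 for x in W for y in adj[x] if y not in W)

def cheeger(adj):
    """Brute-force h(G) = min |dE W| / vol(W) over subsets W with
    vol(W) <= vol(V)/2 (finite-graph convention)."""
    vol_V = sum(len(adj[x]) for x in adj)
    best = None
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for W in combinations(verts, k):
            vol_W = sum(len(adj[x]) for x in W)
            if 2 * vol_W > vol_V:
                continue
            ratio = Fraction(edge_boundary(adj, W), vol_W)
            best = ratio if best is None else min(best, ratio)
    return best

# 4-cycle: the minimizer is a pair of adjacent vertices.
cycle4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
print(cheeger(cycle4))  # 1/2
```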
Combinatorial Bonnet-Myers
At the end of [17], Higuchi conjectured that everywhere-positive combinatorial curvature implies the finiteness of the graph. This conjecture can be regarded as a discrete analogue of a weak version of the Bonnet-Myers theorem.
Conjecture 2.9 (Higuchi). A tessellation that has positive curvature at every vertex must be a finite graph.
Let us investigate two examples of tessellations, namely prism and antiprism.
A prism is a graph with 2n vertices u₁, v₁, u₂, v₂, ..., u_n, v_n, with edges joining u_i ∼ u_{i+1}, v_i ∼ v_{i+1} (indices taken modulo n), and u_i ∼ v_i for all i. Its faces consist of two n-gons and n quadrilaterals. See Figure 2.3.
In its embedding in R², the unbounded component represents one of the two n-gonal faces. For every vertex v of the prism, there is one n-gon and two quadrilaterals incident to it. Hence the combinatorial curvature can be calculated by
κ(v) = 1 − 3/2 + 1/n + 2 · (1/4) = 1/n > 0.
An antiprism is defined similarly, except that the two n-gons are rotated against each other, so that the n quadrilaterals are replaced by 2n triangles. Each vertex v of an antiprism has one n-gon and three triangles incident to it, so the curvature is
κ(v) = 1 − 4/2 + 1/n + 3 · (1/3) = 1/n > 0.
As shown above, prisms and antiprisms provide two classes of tessellations that have positive curvature everywhere. Although both prisms and antiprisms are finite graphs, their numbers of vertices can be arbitrarily large. In [12], DeVos and Mohar proved Higuchi's conjecture and provided a further insight into the finiteness: all everywhere-positively-curved tessellations (except prisms and antiprisms) have a uniform upper bound on the number of their vertices, and they asked for a sharp bound. In [24], the authors give an example of one such graph with 208 vertices. On the other hand, it was recently proved in [15] that all such graphs have at most 208 vertices, hence 208 is the optimal number.

Chapter 3
Bakry-Émery Curvature
While the previous curvature notion was based on a discrete version of the Gauss-Bonnet Theorem in two dimensions, the curvature notion in this chapter, introduced by D. Bakry and M. Émery [1], is based on Bochner's formula from Riemannian geometry. Graphs are no longer assumed to be planar, and their dimensions are not restricted to two. Instead, the dimension can be chosen to be an arbitrary positive real number, including ∞.
CD inequality and Γ-calculus
Bochner's formula states that for every smooth real function f ∈ C^∞(M) and at every point x ∈ M,
(1/2) Δ|∇f|²(x) = |Hess f|²(x) + ⟨∇f(x), ∇Δf(x)⟩ + Ric(∇f(x), ∇f(x)).   (3.1)
Further, define at each point x the curvature term K_x := inf_{v∈T_xM} Ric(v, v)/|v|², which gives a lower bound for the Ricci term: Ric(∇f(x), ∇f(x)) ≥ K_x |∇f(x)|². Recall also Proposition 1.13: |Hess f|²(x) ≥ (1/n)(Δf(x))². Combining these two inequalities with equation (3.1), we obtain the so-called "curvature-dimension" inequality
(1/2) Δ|∇f|²(x) − ⟨∇f(x), ∇Δf(x)⟩ ≥ (1/n)(Δf(x))² + K_x |∇f(x)|².
Following [1], define the bilinear operators Γ and Γ₂ by
Γ(f, g) := (1/2)(Δ(fg) − f Δg − g Δf),   (3.3)
Γ₂(f, g) := (1/2)(Δ Γ(f, g) − Γ(f, Δg) − Γ(g, Δf)),   (3.4)
and abbreviate Γ(f) := Γ(f, f) and Γ₂(f) := Γ₂(f, f). On a manifold, Γ(f) = |∇f|², and the curvature-dimension inequality takes the form Γ₂(f)(x) ≥ (1/n)(Δf(x))² + K_x Γ(f)(x).
Observe that the above curvature-dimension inequality involves the Γ and Γ₂ terms, which were defined merely via the Laplacian. This allows us to consider the curvature-dimension property on any space, once a Laplacian is specified on that space.
Definition 3.2. Let X be a space and C(X) be the function space of X, that is, the set of all functions f : X → R, equipped with pointwise addition and scalar multiplication. Assume that X has a Laplacian operator Δ defined on it. Fix a number n ∈ R⁺ ∪ {∞} to be the dimension of X. The curvature at each point x ∈ X is defined to be the maximal number K_x such that the inequality
Γ₂(f)(x) ≥ (1/n)(Δf(x))² + K_x Γ(f)(x)
holds true for all functions f ∈ C(X). Moreover, for a fixed real number K, we say that X satisfies CD(K, n) if K_x ≥ K for all x ∈ X; in other words,
Γ₂(f)(x) ≥ (1/n)(Δf(x))² + K Γ(f)(x)
holds for all x ∈ X and for all f ∈ C(X).
Here the operators Γ and Γ₂ on X are defined as in equations (3.3) and (3.4).
In particular, the Laplacian on graphs can be specified as follows.
The (normalized) Laplacian operator Δ on a graph G = (V, E) acts on functions f : V → R by
Δf(x) := (1/d_x) Σ_{y : y∼x} (f(y) − f(x))
for all vertices x ∈ V. In terms of matrix representation, we can write the Laplacian as Δ = D⁻¹A − Id, where D is the diagonal matrix whose entries are the vertex degrees, D_xx = d_x, and A is the adjacency matrix: A_xy = 1 if x ∼ y and 0 otherwise.
This notion is sometimes called the normalized Laplacian (in contrast to the non-normalized one, where the factor 1/d_x is dropped). Here are some properties of the operators Δ and Γ defined on graphs.
Proposition 3.4. Let G = (V, E) be a graph with normalized Laplacian Δ. Then, for all f ∈ C(V) and all x ∈ V:
(a) Γ(f)(x) = (1/(2d_x)) Σ_{y∼x} (f(y) − f(x))²; in particular, Γ(f) ≥ 0;
(b) (Δf(x))² ≤ 2Γ(f)(x);
(c) (discrete divergence theorem) if G is finite, then Σ_{x∈V} d_x Δf(x) = 0.

Proof. (a) A straightforward calculation from the definition gives
Γ(f)(x) = (1/2)(Δ(f²)(x) − 2f(x)Δf(x)) = (1/(2d_x)) Σ_{y∼x} (f(y) − f(x))².
The second identity in part (a) follows immediately.
(b) From the Arithmetic-Quadratic mean (AM-QM) inequality,
(Δf(x))² = ((1/d_x) Σ_{y∼x} (f(y) − f(x)))² ≤ (1/d_x) Σ_{y∼x} (f(y) − f(x))² = 2Γ(f)(x).
(c) In the double sum Σ_{x∈V} d_x Δf(x) = Σ_{x∈V} Σ_{y∼x} (f(y) − f(x)), each adjacent pair contributes the same difference once with each sign, so the total vanishes.

In Section 1.3, we obtained Lichnerowicz's bound on the first nonzero eigenvalue by integrating Bochner's formula and applying the Divergence theorem. Here, we imitate that result in a discrete analogue.
Theorem 3.5 (B-E Lichnerowicz). Let G = (V, E) be a finite connected graph satisfying the CD(K, n) condition for some K > 0. Then the first nonzero eigenvalue with respect to the Laplacian operator Δ satisfies λ₁ ≥ (n/(n−1)) K.
In fact, the condition that G is finite can be removed, since the condition CD(K, n) with K > 0 already implies the finiteness of G by the Bonnet-Myers theorem, which we will discuss later in this chapter.
Proof. Suppose f is an eigenfunction satisfying Δf + λf = 0 with λ = λ₁ > 0. After multiplying f by a suitable scalar, we may assume Σ_{x∈V} d_x f²(x) = 1. We now compute the degree-weighted total sum of each term in the CD(K, n) condition. From the definition of Γ(f), we have
Σ_{x∈V} d_x Γ(f)(x) = (1/2) Σ_{x∈V} d_x Δ(f²)(x) − Σ_{x∈V} d_x f(x) Δf(x) = 0 + λ Σ_{x∈V} d_x f²(x) = λ.   (3.8)
Here we used the fact that Σ_{x∈V} d_x Δ(f²)(x) = 0, due to the discrete divergence theorem (Proposition 3.4(c)) applied to the function f², and the fact that Δf = −λf. Therefore, the total sum of Γ(f) is λ. Similarly, the total sum of Γ₂(f) can be calculated as
Σ_{x∈V} d_x Γ₂(f)(x) = (1/2) Σ_{x∈V} d_x ΔΓ(f)(x) − Σ_{x∈V} d_x Γ(f, Δf)(x) = 0 + λ Σ_{x∈V} d_x Γ(f)(x) = λ².   (3.9)
Moreover, the total sum of (Δf)² is simply
Σ_{x∈V} d_x (Δf(x))² = λ² Σ_{x∈V} d_x f²(x) = λ².   (3.10)
Combining equations (3.8), (3.9), and (3.10) with the CD(K, n) condition, we obtain
λ² ≥ (1/n) λ² + Kλ,
and hence λ ≥ (n/(n−1)) K. Therefore, λ₁ ≥ (n/(n−1)) K, as desired.
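The normalized Laplacian and the degree-weighted summation identity (the discrete divergence theorem) used in the proof above can be verified numerically. A minimal sketch with dense matrices:

```python
def normalized_laplacian(adj):
    """Delta = D^{-1}A - Id as a dense matrix (vertices 0..n-1)."""
    n = len(adj)
    L = [[0.0] * n for _ in range(n)]
    for x in range(n):
        d = len(adj[x])
        for y in adj[x]:
            L[x][y] = 1.0 / d
        L[x][x] = -1.0
    return L

def apply(L, f):
    """Matrix-vector product (Delta f)."""
    return [sum(L[x][y] * f[y] for y in range(len(f))) for x in range(len(f))]

path3 = {0: [1], 1: [0, 2], 2: [1]}  # the path graph 0-1-2
L = normalized_laplacian(path3)
print(apply(L, [1.0, 1.0, 1.0]))   # [0.0, 0.0, 0.0]: Delta annihilates constants
f = [2.0, -1.0, 5.0]
# discrete divergence theorem: sum_x d_x (Delta f)(x) = 0
print(sum(len(path3[x]) * v for x, v in enumerate(apply(L, f))))  # 0.0
```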
Motivation for the Laplacian defined on graphs
We have seen the Laplacian in Rⁿ with the Euclidean metric. In particular, when n = 2,
Δf = ∂²f/∂x² + ∂²f/∂y².
Expressing the derivatives in terms of finite differences,
∂²f/∂x²(x, y) ≈ (f(x + h, y) − 2f(x, y) + f(x − h, y))/h².
By discretizing R² as Z² and setting h = 1, the discrete Laplacian then becomes
Δf(x) = Σ_{i=1,2} (f(x + e_i) + f(x − e_i) − 2f(x)),
as we treat the points x ± e_i as the neighbors of x (see Figure 3.1); dividing by the degree d_x = 4 recovers the normalized graph Laplacian defined above.
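The five-point stencil above, divided by the degree d_x = 4, is exactly the normalized Laplacian on Z². A quick check on two test functions (x² + y², whose continuum Laplacian is 4, and a harmonic linear function):

```python
def disc_laplacian(f, x, y):
    """Normalized Laplacian on Z^2 (d_x = 4): the average of the
    neighbor differences, i.e. one quarter of the 5-point stencil."""
    return sum(f(x + dx, y + dy) - f(x, y)
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))) / 4

quad = lambda x, y: x * x + y * y   # continuum Laplacian is 4
lin = lambda x, y: 2 * x - 7 * y    # harmonic

print(disc_laplacian(quad, 3, -2))  # 1.0 = 4 / d_x with d_x = 4
print(disc_laplacian(lin, 3, -2))   # 0.0
```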
Heat semigroup operator
In this section, we introduce another operator, namely the heat semigroup operator, which will be a useful tool in the proof of the Bonnet-Myers theorem later in this chapter.
Definition 3.6. Let X be a space with a Laplacian operator Δ. For t ∈ [0, ∞), the heat semigroup operator P_t : C(X) → C(X) is defined by
P_t f := e^{tΔ} f.
The operator P_t is differentiable in t, and its derivative satisfies
(d/dt) P_t f = Δ P_t f = P_t Δ f.
Basic properties of P_t are listed in the following proposition.

Proposition 3.7. Let P_t be the heat semigroup operator defined as above. Then:
(a) if f ≥ 0, then P_t f ≥ 0;
(b) if f ≤ F for a constant F, then P_t f ≤ F.
Although this proposition holds in great generality, we will prove it here in the case of the normalized Laplacian Δ on finite graphs.
Proof. Recall from the definition that Δ = D⁻¹A − Id, which is a bounded operator, so we may write e^{tΔ} = Σ_{n=0}^∞ tⁿΔⁿ/n!, and thus e^{tΔ} = e^{−t} e^{tB} with B := Id + Δ = D⁻¹A. Note that B is also a bounded operator, and it has all entries nonnegative. Therefore, e^{tΔ} = e^{−t} e^{tB} = e^{−t} Σ_{n=0}^∞ tⁿBⁿ/n! also has all of its entries nonnegative. A function f ∈ C(V) such that f ≥ 0 is represented by a column vector whose entries are nonnegative.
Thus P_t f = e^{tΔ} f has all entries nonnegative, meaning P_t f ≥ 0.
For part (b), observe that each row of B = D⁻¹A sums to 1, so B1 = 1 and hence P_t 1 = e^{−t} e^{tB} 1 = e^{−t} eᵗ 1 = 1, where 1 denotes the constant function. Applying part (a) to the nonnegative function F − f gives F − P_t f = P_t(F − f) ≥ 0, that is, P_t f ≤ F, as desired.
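For a finite graph, P_t = e^{tΔ} is an ordinary matrix exponential, so the two properties of Proposition 3.7 can be observed directly. A sketch using a truncated Taylor series (adequate for this small, mild matrix; the path graph on three vertices is used):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=60):
    """e^A by truncated Taylor series; fine for small, mild matrices."""
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in E]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return E

# Normalized Laplacian of the path 0-1-2.
Delta = [[-1.0, 1.0, 0.0], [0.5, -1.0, 0.5], [0.0, 1.0, -1.0]]
P1 = mat_exp(Delta)                  # P_t at t = 1
f = [1.0, 0.0, 0.0]                  # a nonnegative function
Ptf = [sum(P1[i][j] * f[j] for j in range(3)) for i in range(3)]
print(all(v >= 0 for v in Ptf))      # True: positivity is preserved
print([sum(row) for row in P1])      # each entry is ~1.0: P_t 1 = 1
```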
In [16], Gong and Lin prove that the condition CD(K, ∞) can be characterized in terms of P_t as in the following theorem. This theorem serves as a step in the proof of the Bonnet-Myers theorem.

Theorem (Gong-Lin). A graph G = (V, E) satisfies CD(K, ∞) if and only if
Γ(P_t f)(x) ≤ e^{−2Kt} P_t Γ(f)(x)   (3.11)
for all t ≥ 0, for all x ∈ V, and all bounded f : V → R.
We sketch the "only if" direction. Fix t > 0 and a bounded f, and define, for s ∈ [0, t],
F(s) := e^{−2Ks} P_s Γ(P_{t−s} f)(x).
Note that F(0) = Γ(P_t f)(x) and F(t) = e^{−2Kt} P_t Γf(x) are the terms on the left-hand side and right-hand side of the inequality (3.11). We need F(0) ≤ F(t), so it suffices to prove that F′(s) ≥ 0 for all 0 < s < t.
The product rule and chain rule of differentiation give
F′(s) = e^{−2Ks} P_s [−2K Γ(P_{t−s}f) + ΔΓ(P_{t−s}f) − 2Γ(P_{t−s}f, ΔP_{t−s}f)](x).
With the relation (∂/∂s)P_s = ΔP_s = P_sΔ substituted into the second term in the bracket above, we can then pull out P_s and obtain F′(s) = e^{−2Ks} P_s h(x), where h denotes the function
h := 2(Γ₂(P_{t−s}f) − K Γ(P_{t−s}f)) ≥ 0,
due to the condition CD(K, ∞). Proposition 3.7 implies that P_s h ≥ 0, which gives F′(s) ≥ 0 as desired.
Bakry-Émery Bonnet-Myers
The Bonnet-Myers theorem in the sense of Bakry-Émery states that a graph with strictly positive Bakry-Émery curvature bounded away from zero must be a finite graph, and that a bound on its diameter can be estimated in terms of the curvature.
Theorem (Bakry-Émery Bonnet-Myers). Let G = (V, E) be a connected graph satisfying CD(K, ∞) for some K > 0. Then diam(G) ≤ 2/K.

Here we give a proof in the case of dimension ∞. The theorem also holds for any dimension n < ∞, but with a different bound on the diameter (see [22, Theorem 2.4]).

Proof. Let x₀, y₀ ∈ V and set L := d(x₀, y₀). Choose a 1-Lipschitz function f : V → R with f(y₀) − f(x₀) = L (for instance, f(·) := d(x₀, ·)). By the triangle inequality,
L = |f(x₀) − f(y₀)| ≤ |f(x₀) − P_t f(x₀)| + |P_t f(x₀) − P_t f(y₀)| + |P_t f(y₀) − f(y₀)|
holds for all t > 0.
The next two steps are to prove that |f(x) − P_t f(x)| ≤ 1/K holds for any x, and that |P_t f(x₀) − P_t f(y₀)| → 0 as t → ∞. This will guarantee L ≤ 2/K.
• First, the fundamental theorem of calculus gives
f(x) − P_t f(x) = −∫₀ᵗ (∂/∂s) P_s f(x) ds = −∫₀ᵗ Δ P_s f(x) ds.
Since f is 1-Lipschitz, Γ(f) ≤ 1/2, so by Proposition 3.4(b) and the Gong-Lin characterization of CD(K, ∞),
|Δ P_s f(x)| ≤ √(2 Γ(P_s f)(x)) ≤ √(2 e^{−2Ks} P_s Γ(f)(x)) ≤ e^{−Ks}.
Therefore, |f(x) − P_t f(x)| ≤ ∫₀ᵗ e^{−Ks} ds ≤ 1/K for every vertex x.
• Second, as in the previous part, for any neighboring vertices x ∼ z,
|P_t f(x) − P_t f(z)| ≤ √(2 d_x Γ(P_t f)(x)) ≤ √(d_x) e^{−Kt} → 0 as t → ∞,
and then, using again the triangle inequality to deal with vertices at longer distance, |P_t f(x₀) − P_t f(y₀)| → 0 as t → ∞. Combining the three estimates yields L ≤ 2/K, as claimed.
Chapter 4
Ollivier's Ricci Curvature

Ollivier's Ricci curvature notion is motivated by the "phenomenon" (which is the exact word that Ollivier chose in [23, pp. 4]) that Ricci curvature determines whether the average distance between two balls around x and y is larger or smaller than the distance between x and y (see Section 1.5).
Ollivier regards this average distance between two balls as a "transportation distance" between two measures. We shall start with the concept of transportation distance (namely, the Wasserstein distance); see Villani's [25] for a broader introduction to this topic. Let P(V) denote the set of probability measures on V. Given any µ, ν ∈ P(V), a transport plan (or, in short, a plan) from µ to ν is a function π : V × V → [0, 1] such that
µ(x) = Σ_{y∈V} π(x, y)  and  ν(y) = Σ_{x∈V} π(x, y).
Furthermore, define Π(µ, ν) to be the set of all transport plans from µ to ν; the (transportation) cost of a plan π is given by cost(π) = Σ_{x,y∈V} d(x, y) π(x, y).
In words, π(x, y) represents the amount of mass being transported from a vertex x to a vertex y according to the plan π. The cost of transporting one unit of mass is given by the (combinatorial) distance function d.
Similarly, one can check that Σ_x π₃(x, z) = ρ(z). Thus π₃ ∈ Π(µ, ρ). Moreover, the total cost of π₃ is less than or equal to the costs of π₁ and π₂ combined: cost(π₃) ≤ cost(π₁) + cost(π₂). By minimizing over all π₁ ∈ Π(µ, ν) and π₂ ∈ Π(ν, ρ), we can then conclude the triangle inequality W₁(µ, ρ) ≤ W₁(µ, ν) + W₁(ν, ρ). The Wasserstein distance W₁(µ, ν) := inf_{π∈Π(µ,ν)} cost(π) represents the minimal total cost (considered among all possible plans) of transporting mass distributed as µ to mass distributed as ν. The subscript 1 in W₁ indicates that the cost function is the first power of d.
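When the underlying graph is a path with unit edge lengths, W₁ admits the classical one-dimensional closed form: the ℓ¹ distance between the cumulative distribution functions of µ and ν. A sketch (the measures below are illustrative):

```python
from fractions import Fraction

def w1_path(mu, nu):
    """W_1 between probability measures on the path 0-1-...-(n-1):
    in one dimension the optimal cost is the L1 distance of the CDFs."""
    diff, total = Fraction(0), Fraction(0)
    for m, v in zip(mu, nu):
        diff += Fraction(m) - Fraction(v)
        total += abs(diff)
    return total

delta0 = [1, 0, 0, 0]
delta3 = [0, 0, 0, 1]
print(w1_path(delta0, delta3))   # 3 = d(0, 3)

# "ball" measures around the neighboring vertices 1 and 2 (idleness 1/3):
mu1 = [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3), 0]
mu2 = [0, Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)]
print(w1_path(mu1, mu2))         # 1, so the corresponding curvature is 0
```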
In general, calculating W₁(µ, ν) directly by finding an optimal transport plan π can be very difficult. An easier alternative method is via the following Kantorovich Duality Theorem (see [25, pp. 19], or alternatively see the further discussion in Section 4.2).
Theorem 4.4 (Kantorovich Duality). For µ, ν ∈ P(V),
W₁(µ, ν) = inf_{π∈Π(µ,ν)} cost(π) = sup_{Φ∈1-Lip} Σ_{x∈V} Φ(x)(µ(x) − ν(x)),
where 1-Lip := {Φ : V → R : |Φ(x) − Φ(y)| ≤ d(x, y) for all x, y ∈ V} is the space of all Lipschitz continuous functions on V with Lipschitz constant 1. Such a 1-Lipschitz function Φ yielding the maximum is called an optimal Kantorovich potential.
The method is to find a plan π ∈ Π(µ, ν) and a function Φ ∈ 1-Lip such that
cost(π) = Σ_{x∈V} Φ(x)(µ(x) − ν(x)).   (4.3)
The Duality Theorem then asserts that such π and Φ are an optimal transport plan and an optimal Kantorovich potential, respectively, and both sides of (4.3) must have the value W₁(µ, ν). An explicit calculation of W₁(µ, ν) will be shown in Example 4.6.
Definition of Ollivier's Ricci curvature
Let G = (V, E) be a graph. Consider the transition matrix P defining a lazy simple random walk on G with probability p of staying put at any vertex (p is called the idleness parameter) and equal probability of moving to each of its neighbors. In other words, the probability of moving from x to y in one time step is
P(x, y) = p if y = x;  (1 − p)/d_x if y ∼ x;  0 otherwise.

Definition 4.5. Let G = (V, E) be a graph. For any vertex x ∈ V, let the measure δ_x ∈ P(V) be the Dirac measure, that is, δ_x(y) = 1 if y = x and δ_x(y) = 0 otherwise. Further, for p ∈ [0, 1], define a probability measure µ_x^p := P δ_x, that is,
µ_x^p(y) = p if y = x;  (1 − p)/d_x if y ∼ x;  0 otherwise.
Ollivier's Ricci curvature (with idleness p) is defined at a pair of (distinct) vertices x, y ∈ V as
K_p(x, y) := 1 − W₁(µ_x^p, µ_y^p)/d(x, y).

The motivation behind this curvature notion comes from the estimate (1.11): the average distance d(B_r(x), B_r(y)) can be realized as W₁(µ_x^p, µ_y^p), and the term K_p(x, y) is then essentially an approximation of the Ricci term (up to a constant factor). For different values of the idleness p, the distribution µ_x^p looks different around x. For example, when p = 0, µ_x^p resembles a uniformly distributed sphere, while for p = 1/(d_x + 1), µ_x^p resembles a uniformly distributed ball (see Figure 4.1). When p = 1, W₁(µ_x^1, µ_y^1) = W₁(δ_x, δ_y) = d(x, y), which implies K₁ ≡ 0 identically. Further, Lin-Lu-Yau [21] introduced the curvature notion
K_LLY(x, y) := lim_{p→1} K_p(x, y)/(1 − p),
for which a further insight has been proved in [3] and [9]: the relation
K_p(x, y) = (1 − p) K_LLY(x, y)   (4.4)
holds for all x, y ∈ V and all p ∈ [1/2, 1).
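For p = 0 and vertices of equal degree, µ_x^0 and µ_y^0 are uniform measures on the respective neighborhoods, so an optimal plan may be taken to be a matching of atoms (an extreme point of the transport polytope), and W₁ can be found by brute force over permutations. The sketch below uses a small hypothetical tree (not necessarily the graph of Figure 4.2) in which x has neighbors u, w, y and y has neighbors x, v, z:

```python
from collections import deque
from fractions import Fraction
from itertools import permutations

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        a = q.popleft()
        for b in adj[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                q.append(b)
    return dist

def ollivier_p0(adj, x, y):
    """K_0(x, y) = 1 - W_1(mu_x^0, mu_y^0) / d(x, y), where mu_v^0 is
    uniform on the neighbors of v.  With equal atom masses 1/d, W_1 is
    1/d times the optimal assignment cost over permutations."""
    Nx, Ny = adj[x], adj[y]
    assert len(Nx) == len(Ny), "equal degrees assumed in this sketch"
    d_xy = bfs_dist(adj, x)[y]
    dist = {u: bfs_dist(adj, u) for u in Nx}
    w1 = Fraction(min(sum(dist[u][v] for u, v in zip(Nx, perm))
                      for perm in permutations(Ny)), len(Nx))
    return 1 - Fraction(w1, d_xy)

T = {"x": ["u", "w", "y"], "y": ["x", "v", "z"],
     "u": ["x"], "w": ["x"], "v": ["y"], "z": ["y"]}
print(ollivier_p0(T, "x", "y"))  # -2/3
```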
Let us provide an example of how to calculate K_p(x, y) in the case where x and y are neighbors.
Example 4.6. Consider the graph G shown in Figure 4.2. We will consider the transport problem from µ_x^p (where mass p is placed at x and mass (1−p)/3 at each of u, w, y) to µ_y^p (where mass p is placed at y and mass (1−p)/3 at each of x, v, z). Distinguishing two possible cases, depending on whether the idleness p satisfies p ≤ (1−p)/3 or not, we construct in each case a transport plan from µ_x^p to µ_y^p that we claim to be optimal.
We then obtain a 1-Lipschitz function Φ : V → R certifying, via the Duality Theorem, that the plan is optimal. More generally, the computation of W₁(µ, ν) can be reformulated as a linear programming problem as follows. Let x₁ = x and x₂, ..., x_m be all neighbors of x. Similarly, let y₁ = y and y₂, ..., y_n be all neighbors of y.
A transport plan π ∈ Π(µ, ν) may be represented as an (mn)-column vector π = (π(x₁, y₁), ..., π(x_m, y_n))ᵀ, since π takes the value zero at any other pair (z, w) ≠ (x_i, y_j). Also, define the cost-function vector to be the constant (mn)-column vector d = (d(x₁, y₁), ..., d(x_m, y_n))ᵀ. The marginal constraints Σ_j π(x_i, y_j) = µ(x_i) and Σ_i π(x_i, y_j) = ν(y_j) can then be read as Aπ = b for a constant matrix A of dimension (m + n) × (mn) whose entries are 0's and 1's, and a vector b of the marginals.
Therefore W₁(µ, ν) is the optimal value of the primal problem (P): min_{π≥0} dᵀπ subject to Aπ = b, and its dual problem (D) can be written as max_Φ bᵀΦ subject to AᵀΦ ≤ d, where Φ ∈ R^{m+n} represents the values Φ(x_i) and Φ(y_j), and the constraint AᵀΦ ≤ d encodes the 1-Lipschitz condition of Φ among the vertices x_i and y_j (the 1-Lipschitz condition can then be extended to all vertices in V).
The strong duality theorem from Linear Programming then asserts that the optimal values of (P) and (D) coincide, which is essentially the statement of the Kantorovich Duality Theorem 4.4.
One more crucial result from Linear Programming is the Complementary Slackness theorem: if π* and Φ* are optimal solutions to the above problems (P) and (D), respectively, then (π*)ᵀ(d − AᵀΦ*) = 0. It can be written equivalently as follows.
For any x_i, y_j such that π*(x_i, y_j) > 0, the corresponding dual constraint is tight: Φ*(x_i) − Φ*(y_j) = d(x_i, y_j).
Ollivier Bonnet-Myers
In this section, we prove the Bonnet-Myers theorem in the sense of Ollivier's Ricci curvature, which is fairly straightforward (compared with the proof in Riemannian geometry or the one for Bakry-Émery curvature). In addition, we provide a proof of the Lichnerowicz theorem, following the proof of [21, Theorem 4.2].
First, we introduce an important lemma, which says that a lower curvature bound over neighboring pairs implies the same lower bound globally.
Lemma 4.8. Let G = (V, E) be a graph, and p ∈ [0, 1). If K_p(x, y) ≥ K holds for all neighboring pairs x ∼ y, then K_p(x, y) ≥ K for all x, y ∈ V.
Theorem 4.9 (O Bonnet-Myers). Let G = (V, E) be a graph and p ∈ [0, 1). If there exists K > 0 such that K_p(x, y) ≥ K for all x ∼ y, then diam(G) ≤ 2(1 − p)/K.

Note that in Lemma 4.8 and Theorem 4.9, the p-idleness curvature K_p may be replaced by the Lin-Lu-Yau curvature K_LLY (and the diameter of the graph G is then bounded by diam(G) ≤ 2/K). This is due to the relation (4.4).

Theorem 4.10 (O Lichnerowicz). Let G = (V, E) be a finite connected graph. Assume there exists K > 0 such that K_LLY(x, y) ≥ K for all x ∼ y. Then the first nonzero eigenvalue satisfies λ₁ ≥ K.
Acknowledgements
I would like to express my deepest appreciation to my supervisor, Professor Norbert Peyerimhoff, who invited me to this field of research and gave suggestions throughout my dissertation. I would also like to thank David Cushing for his accessible introductory lectures on the discrete curvature notions, as well as his creative research ideas on this topic.
I would like to thank the Department of Mathematical Sciences, Durham University, and especially Dr. Wilhelm Klingenberg, for lectures that provided me with a good background in Riemannian geometry.
Lastly, I would like to thank my family and friends, who always support me, and the Royal Thai government, which provides a scholarship for my current study.
"year": 2018,
"sha1": "e09516fe773951e0563fe89a3b858ce2d0ec36a2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e09516fe773951e0563fe89a3b858ce2d0ec36a2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Gamma Emitting Radionuclides in Soils from Selected Areas in Douala-Bassa Zone, Littoral Region of Cameroon
A study of natural radioactivity levels in composites of eighteen soil samples collected within the Douala-Bassa zone of the Littoral Region has been carried out. The samples were analysed by gamma-ray spectrometry using a broad energy germanium detector (BEGe 6350). The activity profiles of the radionuclides show low activity across the studied areas. The obtained mean activities of 226Ra, 232Th, and 40K were 25.48 Bq/kg, 65.96 Bq/kg, and 39.14 Bq/kg for Campus 1 and 24.50 Bq/kg, 66.71 Bq/kg, and 28.19 Bq/kg for Campus 2, respectively. For the health analysis, several radiation hazard parameters were calculated for the two campuses. The mean values of the radium equivalent activity were 122.81 Bq/kg and 122.08 Bq/kg, of the absorbed dose rate in air 99.13 nGy/h and 98.18 nGy/h, of the annual outdoor effective dose 0.12 mSv/y and 0.12 mSv/y, and of the external health hazard index 0.34 and 0.33 in Campus 1 and Campus 2, respectively. These hazard parameters are below the UNSCEAR 2000 safe limits, except the absorbed dose rate in air and the annual outdoor effective dose, which are relatively high compared with the reference values of 60 nGy/h and 0.07 mSv/y. These results reveal no significant radiological health hazard for inhabitants of the study areas.
Introduction
Gamma radiation emitted from naturally occurring radioisotopes, also called terrestrial background radiation, represents the main source of irradiation of the human body. Natural environmental radioactivity and the associated external exposure due to gamma radiation depend primarily on geological and geographical conditions and appear at different levels in the soils of each region of the world [1, 2]. Only radionuclides with half-lives comparable to the age of the earth, or their corresponding decay products, existing in terrestrial materials, such as 232Th, 238U, and 40K, are of great interest. Abnormal occurrences of uranium and its decay products in rocks and soils, and of thorium in monazite sands, are the main sources of the high natural background areas that have been identified in several parts of the world [3]. Outdoor exposure to this radiation originates predominantly from the upper 30 cm of the soil [1]. According to the literature on natural radioactivity in soil, there is a lack of information on natural radioactivity levels in soils from various living sites in Cameroon. Radionuclides in soil generate a significant component of the background radiation exposure of the population [3].
The knowledge of the specific activities or concentrations and distributions of the radionuclides in soil is of great interest to many researchers throughout the world and serves as a reference for documenting changes in environmental radioactivity due to anthropogenic activities or any release of radioactive elements [4, 5]. Monitoring of radioactive materials is therefore of primary importance for the protection of humans, organisms, and the environment. The accumulation of such radioactivity may substantially contribute to the collective radiation dose received by the local population living within this particular environment. Radiation exposure can damage living cells, causing death in some of them and modifying others.
There have been many surveys to determine the background levels of radionuclides in soils, which can in turn be related to the absorbed dose rates in air. These spectrometric measurements indicate that the three components of the external radiation field, namely the gamma-emitting radionuclides of the 238U and 232Th series and 40K, make approximately equal contributions to the externally incident gamma-radiation dose to individuals in typical situations both outdoors and indoors. Since 98.5% of the radiological effects of the uranium series are produced by radium and its daughter products, the contribution from 238U and the other precursors of 226Ra is normally ignored.
The aim of the present study is to assess the specific activities and examine the radiation hazard indices of the naturally occurring radionuclides (226Ra, 232Th, and 40K) in soil samples from the two campuses of the University of Douala, Cameroon, using broad energy gamma-ray spectrometry with a high-purity germanium detector.
Overview of the Study Area.
The field experiment was carried out at the two campuses of the University of Douala, Cameroon (04°03′14.8″–04°03′29.7″ N and 09°44′00.1″–09°44′45.2″ E). The studied sites are located within the Douala-Bassa zone, where the geology of the region is comprised of sedimentary rocks, namely tertiary to quaternary sediments, as seen in Figure 1. These sedimentary rocks of the Douala-Bassa zone (within the Douala basin) consist of poorly consolidated grits and sandstones that occasionally display bedding, with a few intercalations of limestone and shale. Soils in the Douala-Bassa zone vary from yellow through brown to black; they are freely drained, sandy, and ferralitic [6].
Samples Collection and Preparation
Techniques. Composites of eighteen soil samples were randomly chosen from the two campuses of the University of Douala (seven from the small area of Campus 1, ESSEC, situated at Angel-Raphael, and eleven from the larger area of Campus 2, located at Ndong-Bong, Douala-Bassa). The vertical or near-vertical surface was dressed (scraped) to remove smeared soil. This was necessary to minimize the effects of contaminant migration due to smearing of material from other levels. Each composite sample was a mixture of five samples collected within an area of 5 m², separated from each other by a distance of 300 m, so as to cover the study site and capture any significant local spatial variation in terrestrial radioactivity. Each sampling point was marked using a global positioning system (GPS). Four samples were collected at the edges (end corners) and one at the centre. These five samples, collected at a depth of approximately 20 cm from the top surface layer, were mixed thoroughly to form a composite sample and packed into a polyethylene bag to prevent contamination. The samples were labelled accordingly and transferred to the laboratory. At the laboratory, the samples were air-dried for a week and then oven-dried at 105°C for 24 hours. The dried samples were ground into powder and sieved through a 2 mm wire mesh to obtain a uniform particle size. In order to maintain radioactive equilibrium between 226Ra and its daughters, the soil samples were then packed into 360 mL airtight polyethylene cylindrical containers, dry-weighed, and stored for a period of 32 days to reach equilibrium between the long-lived parent and daughter nuclides.
Experimental
Each sample was analysed with a coaxial gamma-ray spectrometer consisting of a broad energy germanium detector (BE6530) manufactured by Canberra Industries. The resolution of this detector is 0.5 keV at 5.9 keV for 55Fe, 0.75 keV at 122 keV for 57Co, and 2.2 keV at 1332 keV for 60Co. The detector is housed in a low-level Canberra Model 747 lead shield with a thickness of 10 cm.
The energy distributions of the radioactive samples were recorded by the built-in Multiport II Multichannel Analyzer (MCA). Each sample was counted for 86400 seconds to obtain effective peak-area statistics of above 0.1%.
Following the sample analysis process, the specific activity concentration in Becquerel per kilogram (Bq·kg⁻¹) of each radionuclide was calculated after background subtraction using the Genie-2000 software, incorporating cascade-summing correction coefficients.
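The specific activity reported by such an analysis follows the standard gamma-spectrometry relation A = N/(ε · P_γ · t · m), with N the net peak counts, ε the full-energy-peak efficiency, P_γ the gamma emission probability, t the live counting time, and m the sample mass. A sketch with purely illustrative numbers (only the 1460.8 keV line energy and the 10.66 % emission probability of 40K are physical constants; the other inputs are hypothetical):

```python
def specific_activity(net_counts, efficiency, gamma_intensity,
                      live_time_s, mass_kg):
    """A = N / (eps * P_gamma * t * m) in Bq/kg -- the standard
    gamma-spectrometry relation; the inputs below are illustrative."""
    return net_counts / (efficiency * gamma_intensity * live_time_s * mass_kg)

# Hypothetical: 12000 net counts in the 1460.8 keV 40K peak, 2.1 %
# full-energy-peak efficiency, 10.66 % emission probability,
# 86400 s counting time, 0.4 kg of soil.
A_K = specific_activity(12000, 0.021, 0.1066, 86400, 0.4)
print(round(A_K, 1))  # specific activity in Bq/kg
```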
Assuming a state of secular equilibrium between 238U and 232Th and their respective decay daughter products, relatively intense gamma-ray transitions were used to measure the activity concentrations of the above-mentioned radionuclides.
Absorbed Dose Rate in Air (𝐷).
A direct connection between the activity concentrations of natural radionuclides and their exposure is given by the absorbed dose rate in air at 1 metre above the ground surface. The mean activity concentrations of 226Ra (of the 238U series), 232Th, and 40K (Bq·kg⁻¹) in the soil samples were used to calculate the absorbed dose rate using the following formula provided by UNSCEAR [7] and the European Commission [8]:
D = 0.92 A_Ra + 1.10 A_Th + 0.080 A_K,
where D is the absorbed dose rate in nGy·h⁻¹ and A_Ra, A_Th, and A_K are the activity concentrations of 226Ra (238U), 232Th, and 40K, respectively. The dose coefficients, in units of nGy·h⁻¹ per Bq·kg⁻¹, were taken from the UNSCEAR report [7-9].
Annual Effective Dose Equivalent.
The absorbed dose rate in air at 1 metre above the ground surface does not directly provide the radiological risk to which an individual is exposed [10]. The absorbed dose can instead be expressed as the annual effective dose equivalent from outdoor terrestrial gamma radiation, obtained from the absorbed dose rate by taking into account two factors, namely the conversion coefficient from absorbed dose in air to effective dose and the outdoor occupancy factor. The annual effective dose equivalent can be estimated using the following formula [7,11]:

AEDE (mSv y⁻¹) = D (nGy h⁻¹) × 8760 h × 0.20 × 0.70 Sv Gy⁻¹ × 10⁻⁶

The values of these parameters used in the UNSCEAR (2000) report are 0.70 Sv Gy⁻¹ for the conversion coefficient from absorbed dose in air to effective dose received by adults and 0.20 for the outdoor occupancy factor [7].
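The conversion above can be sketched directly, using the UNSCEAR occupancy factor (0.20) and dose conversion coefficient (0.70 Sv Gy⁻¹) quoted in the text; the input dose rate here is the illustrative worldwide average:

```python
HOURS_PER_YEAR = 8760

def annual_effective_dose(d_ngy_per_h, occupancy=0.2, sv_per_gy=0.7):
    """Outdoor annual effective dose equivalent (mSv/y) from the
    absorbed dose rate in air (nGy/h)."""
    return d_ngy_per_h * HOURS_PER_YEAR * occupancy * sv_per_gy * 1e-6

aede = annual_effective_dose(59.5)  # ~0.073 mSv/y, close to UNSCEAR's 0.07 mSv/y
```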
Radium Equivalent Activity.
Because of the nonuniform distribution of natural radionuclides in the soil samples, the combined activity of 226Ra, 232Th, and 40K can be expressed by a common radiological index called the radium equivalent activity (Ra_eq) [10,12]. It is the most widely used index to assess radiation hazards and can be calculated as given by Beretka and Mathew [10,12]:

Ra_eq = A_Ra + 1.43 A_Th + 0.077 A_K

where A_Ra, A_Th, and A_K are the activity concentrations of 226Ra, 232Th, and 40K in Bq kg⁻¹, respectively. The maximum permissible value of the radium equivalent activity is 370.00 Bq kg⁻¹ [7,10]. This value corresponds to an effective dose of 1 mSv for the general public and a radiation dose rate of 1.50 mGy y⁻¹ [7,13].
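A minimal sketch of the Beretka-Mathew index; with the Campus 1 mean activities reported later (25.48, 65.96, and 39.15 Bq/kg), it reproduces the mean radium equivalent activity of about 122.8 Bq/kg given in Table 2:

```python
def radium_equivalent(a_ra, a_th, a_k):
    """Ra_eq (Bq/kg) = A_Ra + 1.43*A_Th + 0.077*A_K (Beretka and Mathew)."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

ra_eq = radium_equivalent(25.48, 65.96, 39.15)  # Campus 1 mean activities
print(round(ra_eq, 2))  # 122.82, well below the 370 Bq/kg limit
```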
2.1.4. External and Internal Hazard Indices. Many radionuclides occur naturally in terrestrial soils and rocks and, upon decay, produce an external radiation field to which all human beings are exposed. In terms of dose, the principal primordial radionuclides are 232Th, 238U, and 40K. Their decay produces a gamma-beta radiation field in soil that crosses the soil-air interface and exposes humans. The main factors determining the exposure rate for a particular individual are the concentrations of radionuclides in the soil and the time spent outdoors. To limit the radiation exposure attributable to natural radionuclides in the samples to the permissible dose equivalent limit of 1.00 mSv y⁻¹, the external hazard index H_ex has been introduced using a model proposed by Krieger (1981), which is given by [7,14]:

H_ex = A_Ra/370 + A_Th/259 + A_K/4810 ≤ 1

In order to keep the radiation hazard insignificant, the value of the external hazard index must not exceed unity. The maximum value of H_ex equal to unity corresponds to the upper limit of the radium equivalent activity of 370.00 Bq kg⁻¹ [15,16].
In addition to the external hazard, radon and its short-lived products are also hazardous to the respiratory organs. To account for this threat, the maximum permissible concentration of 226Ra must be reduced to half of the normal limit (185.00 Bq kg⁻¹). The internal exposure to carcinogenic radon and its short-lived progeny is quantified by the internal hazard index (H_in), given by the expression [17]:

H_in = A_Ra/185 + A_Th/259 + A_K/4810 ≤ 1
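Both indices can be evaluated together. The denominators below follow the commonly used Krieger/Beretka-Mathew definitions, consistent with the 370 and 185 Bq/kg limits quoted above; using the Campus 1 mean activities, the computed H_ex (≈0.33) lands inside the 0.31-0.40 range reported in Table 2:

```python
def hazard_indices(a_ra, a_th, a_k):
    """External and internal hazard indices; both must stay below unity."""
    h_ex = a_ra / 370 + a_th / 259 + a_k / 4810
    h_in = a_ra / 185 + a_th / 259 + a_k / 4810
    return h_ex, h_in

h_ex, h_in = hazard_indices(25.48, 65.96, 39.15)  # Campus 1 mean activities
```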
Results and Discussion
The activity concentrations of 226Ra, 232Th, and 40K in soil samples from the University of Douala, Cameroon, have been measured and are presented in Table 1 together with the geographical coordinates of each sampling point. The radiological health hazard indices of the investigated soil samples have been calculated and are displayed in Table 2. The comparison of the specific activities of 226Ra, 232Th, and 40K in soil samples from the University of Douala with data from other countries is reported in Table 3.
For Campus 1, in Tables 1 and 3, the activity concentrations of 226 Ra varied from 21.99 ± 0.68 to 29.17 ± 0.87 Bq/kg with an average of 25.48 Bq/kg.The activity concentrations of 232 Th and 40 K ranged from 59.14 ± 1.41 to 65.88 ± 1.55 Bq/kg with an average value of 65.96 Bq/kg and from 13.93 ± 2.88 to 70.89 ± 3.70 Bq/kg with a mean of 39.15 Bq/kg, respectively.
For Campus 2, the activity concentrations of 226Ra, 232Th, and 40K are listed in Tables 1 and 3 (226Ra from 21.99 ± 0.68 Bq/kg), and the distributions of the specific activities in both campuses are shown in Figures 2(a) and 2(b). As shown in the figures, the radioactivity concentration varied slightly from one point to another. The variations observed at both studied sites may result from the nonuniform distribution of radioactivity in the Earth's crust. It is generally considered that igneous rocks contain higher levels of radioactivity than sedimentary rocks. The areas under study are part of the Littoral Region, the major sedimentary basin of Cameroon [6]. This formation has variations in sediments, limestone, shale, and clay. From the recorded activities of 226Ra, 232Th, and 40K in the present study, it can be noticed that the average value of 232Th at both locations was comparably higher than those of 226Ra and 40K in almost all soil-sampling locations. This could be due to the high content of thorium present in sedimentary rocks.
Comparing the average activity values of 226Ra, 232Th, and 40K obtained at both studied sites, as shown in Figure 3(a), it can be seen that the average values in the analyzed soil samples were nearly the same, with the exception of the 40K average value, which was slightly higher in Campus 2 than in Campus 1. The similar range of activity concentrations of 226Ra, 232Th, and 40K is due to the fact that the two sites are close to one another and the soil samples collected at both originate from the same geological formation. The slight difference in the average activity concentration of 40K is likewise due to the irregular distribution of uranium, thorium, and potassium in the studied soils.
The calculated average activity values of 226Ra, 232Th, and 40K at both studied sites were compared with the worldwide values established by UNSCEAR [7], as represented in Figure 3(b). It can be observed that the values obtained at both sites are lower than the recommended worldwide values, with the exception of the 226Ra values.
The observed activity concentrations of 226Ra, 232Th, and 40K in the present work were compared with other published values obtained from the literature on soil radioactivity, as presented in Table 3. It can be seen that the average values of 232Th recorded in the present study were slightly lower than the published values recorded in China (Xiaz-hung area), Ghana (Greater Accra), and India (Himachal Pradesh) [18,20,21] and higher than the average values reported for Namibia and the Nigeria Delta [23,25]. The present values compare favourably with the average values published by other countries selected from worldwide investigations of natural radioactivity in soils.
Uniformity with respect to radiation exposure, defined in terms of the radium equivalent activity, allows comparison of the specific activities of geological materials containing different amounts of 226Ra, 232Th, and 40K. This index was calculated and the results are presented in Table 2. The values ranged from 116.36 Bq/kg to 144.46 Bq/kg with an average of 122.81 Bq/kg in Campus 1 and from 100.93 Bq/kg to 141.11 Bq/kg with a mean value of 122.08 Bq/kg in Campus 2. The obtained values of the radium equivalent activity in the present investigation are below the safe limit (370.00 Bq/kg) recommended by UNSCEAR [7].
Ionising radiation affects biological systems, and its impact depends, among other factors, on the time and place of exposure and the population involved. In most cases, the risk appears to be higher outdoors than indoors. As shown in Table 2, the calculated absorbed dose rate in air is in the range of 90.95 nGy/h to 115.77 nGy/h with a mean of 99.13 nGy/h in Campus 1 and from 81.92 nGy/h to 112.96 nGy/h with an average of 98.18 nGy/h in Campus 2. The estimated annual outdoor effective dose varies from 0.11 mSv/y to 0.14 mSv/y with a mean of 0.12 mSv/y and from 0.10 mSv/y to 0.14 mSv/y with an average of 0.12 mSv/y in Campuses 1 and 2, respectively. The external health hazard index ranges from 0.31 to 0.40 with a mean of 0.34 and from 0.28 to 0.39 with an average of 0.33 in Campuses 1 and 2, respectively. The obtained values of the absorbed dose rate in air and the annual outdoor effective dose are higher than the reference values of 18.00-93.00 (average 60.00) nGy/h and 0.07 mSv/y given by UNSCEAR [7], whilst the external health hazard index values are less than unity.
Conclusion
The natural radioactivity levels of 226Ra, 232Th, and 40K have been measured in soils from selected areas within the Douala-Bassa zone in the Littoral Region of Cameroon using gamma spectrometry based on a broad-energy germanium detector (BE6530). The recorded mean values of 232Th at both studied sites were higher than those of 226Ra and 40K. Considering the nonuniform distribution of radioactivity in geological materials, the radium equivalent activity was calculated and found to be lower than the safe value (370.00 Bq/kg) recommended by UNSCEAR. The calculated absorbed dose rate in air and annual outdoor effective dose were higher than the corresponding UNSCEAR reference values, whereas the external health hazard index remained below unity.
The results obtained in this work have established baseline information on natural radioactivity in the two campuses of the University of Douala-Cameroon.It is expected that the results obtained may be used as baseline data for future work.
Figure 1 :
Figure 1: Map indicating the study area.
(a) The 226Ra concentration was calculated as a weighted mean of the activity concentrations of the gamma rays of 214Pb (295.1 keV, 351.9 keV) and 214Bi (609.3 keV and 1120.29 keV), and of its specific gamma ray at 186.2 keV. The interference due to the 185.7 keV energy peak of 235U has been taken into account and subtracted accordingly. (b) The gamma-ray photopeaks used for the determination of the 232Th content were 338.4 keV, 911.2 keV, and 969.11 keV of 228Ac and 238.6 keV of 212Pb. (c) 40K was directly determined using its 1460.8 keV (10.7%) gamma-ray line.
Figure 2 :
Figure 2: (a) Distribution of specific activities of 226 Ra, 40 K, and 232 Th in soil samples from Campus 1.(b) Distribution of specific activities of 226 Ra, 40 K, and 232 Th in soil samples from Campus 2.
Figure 3 :
Figure 3: (a) Comparison of the mean specific gamma activities in soil from both studied sites.(b) Comparison of the mean specific gamma activities in soil samples with the worldwide value.
Table 1 :
Specific activities of 226Ra, 232Th, and 40K in soil samples from Campuses 1 and 2 of the University of Douala.
Table 2 :
The radiological health hazard parameters due to the activity contents of 226 Ra, 232 Th, and 40 K in soil samples.
Table 3 :
Comparison of specific gamma activities (Bq/kg) in soil with that of other countries.
Vibration Energy Harvesting with Printed P(VDF:TrFE) Transducers to Power Condition Monitoring Sensors for Industrial and Manufacturing Equipment
A vibration energy harvesting system based on fully printed piezoelectric transducers is realized. The transducers have a butterfly-like architecture based on two single optimized cantilevers with a resonance frequency tuned to 49.5 Hz and can be easily mounted to an industrial engine. By comparing single- and multistack configurations of the piezoelectric layers combined with full-wave or voltage-doubler rectifiers, the power transfer characteristics and impedance can be matched to the electrical requirements of the sensing circuitry. A single stack with 21 μm thickness results in the maximum power output of 14.4 μW at a vibration velocity of 11.5 mm s⁻¹, typical for industrial engines. The system is used to power a wireless sensor node on a 1 kW rotary pump in normal operation. The system can harvest up to 138 mJ within 24 h, sufficient for daily remote monitoring of the engine's vibration spectrum and temperature state. The system is thus suitable as a low-cost, ecofriendly power source for industrial IoT applications.
Introduction
Energy harvesting has been attracting great attention as it holds promise for meeting the sustainability requirements of the upcoming Internet of Things (IoT) [1][2][3][4] and Industry 4.0 [5]. In many IoT application fields (e.g., condition monitoring in industrial environments), devices must be inexpensive, lightweight, wireless, energy saving, or even self-sustaining in order to simplify installation and maintenance [6]. Currently, most commercial IoT sensors are battery powered and have a limited lifetime. However, in industrial environments, the sensor nodes used for condition monitoring often have to work as standalone units in positions difficult to reach once installed, and thus the use of batteries, and in particular their replacement, is impractical [7], especially since nodes may be deployed in hard-to-reach places [8]. A sustainable, ecofriendly, and low-cost alternative for powering these devices is harvesting ambient energy available at the sensor node, which is otherwise dissipated to the surrounding environment. There are many ambient energy sources, such as solar [9], radio frequency [10], temperature gradients [11], and kinetic or vibrational energy [12,13]. In many cases, a multisource harvesting approach can be an interesting way to boost the energy output [14]. In this work, we focus on energy harvesting in an industrial environment, in particular harvesting the kinetic energy of vibrations arising from electrically driven engines. In order to convert this kinetic energy into electricity, a triboelectric, piezoelectric, or electromagnetic coupling mechanism can be harnessed. Piezoelectric and triboelectric coupling mechanisms are better suited for integrated nanogenerators than the electromagnetic one, as they are easier to miniaturize, in exchange for a lower output.
[15][19][20] In contrast to TENGs, piezoelectric nanogenerators (PENGs) have the advantage of being simple in design, as they do not require a controlled and reproducible contact between two surfaces in order to function [21]. Usually these generators are realized from piezoceramic materials, which excel in a high intrinsic energy conversion efficiency [22,23]. However, these are brittle and, in the case of the best-performing lead-based ceramics, even pose a risk to the environment [24]. A lead-free and flexible alternative are the ferroelectric polymer poly(vinylidene fluoride) (PVDF) and the printable ferroelectric copolymer poly(vinylidene-trifluoroethylene) (P(VDF:TrFE)). P(VDF:TrFE) can be processed on cheap, flexible, and lightweight substrates such as PET using large-area, scalable printing techniques, enabling a customized design for specific requirements [25]. Although it has significantly lower intrinsic piezoelectric coefficients and conversion efficiency than its ceramic counterparts (d33 around 30-35 pC N⁻¹ compared to PZT's 300 pC N⁻¹) [25,26], the ease of processing, mechanical robustness, and potential for large-area fabrication and high integration density can outweigh this intrinsic deficiency in many application fields. Here, we prove the viability of these printed devices as power sources for IoT sensor nodes.
Godard et al. demonstrated a vibration PENG based on a cantilever with printed polymer multilayers [27]. Its remarkably high power output of about 1 mW was achieved using input vibrations with an acceleration as high as 5.8 G at 33 Hz, generated with a shaker. In a real environment, many vibration sources usable for energy harvesting generate much lower acceleration values, as shown in Figure S1, Supporting Information. In other studies, the acceleration values may be more realistic, but the frequency they are tuned to is rather high for industrial settings [28]. A more realistic scenario, according to ISO 10816 [29], considers lower vibrational velocities around 5 mm s⁻¹ root mean square (RMS) for small pumps and frequencies in the range of 20-60 Hz. Assuming a main vibration frequency peak at 50 Hz, this corresponds to an RMS acceleration of around 1.6 m s⁻². When the PENG is used to power sensor electronics including wireless transmission, one further needs to consider the voltage requirements and operating range. In the literature, piezoelectric harvester systems can be found with output powers from a few μW to 2 mW and operating frequencies from very low (0-10 Hz) to several kHz [30]. A comparison of the efficiency of these systems is not straightforward, as the excitation conditions differ in vibration velocity, frequency, harvester size, etc. An overview of the output powers and operation conditions can be found in ref. [30].
In this work, we demonstrate energy-autonomous condition monitoring with an energy harvesting system (EHS) that provides the power for a sensing chip (SC), as shown in Figure 1. The SC can be used to monitor the temperature and vibration of motors for timely diagnosis of potential problems. As a realistic scenario for condition monitoring, we chose to monitor a rotary pump that generates the vacuum of a high-vacuum (HV) chamber.
Vibrational Piezoelectric Energy Nanogenerator (VPENG)
The key element of the EHS in our sensor node is the vibrational PENG (VPENG). The VPENG has the form of two cantilevers in a butterfly-like arrangement with a clamping region in the center. It consists of a piezoelectric transducer printed on top of a polyethylene terephthalate (PET) substrate. The transducers are made of the ferroelectric copolymer P(VDF:TrFE) sandwiched between screen-printed PEDOT:PSS electrodes. The design of the transducer as well as images of the two vertical layer configurations are depicted in Figure 2.
In the "single" configuration, a single electrode pair is used and 1-3 layers of the piezoelectric polymer are printed on top of each other to obtain different stack thicknesses. In the "multi" configuration, two stacks of the transducers are printed, where the electrodes are connected in an interdigital manner to connect the vertically aligned stacks in parallel. Figure 2b shows scanning electron microscopy (SEM) images of cross sections of two devices printed according to the different stack configurations. The shown device with the single configuration has three layers of P(VDF:TrFE) printed on top of each other, resulting in a maximum stack thickness of 20.8 μm. The multidevice features two transducer stacks, each with a double layer of P(VDF:TrFE) and a thickness of 13.4 μm. Table 1 summarizes the fabricated and tested devices following both configurations, indicating the thickness of the piezoelectric stacks and the number of stacks in the multiconfiguration.

Figure 2: a) The width W_c, substrate thickness d_s, and piezoelectric layer thickness d_p of the cantilever, together with the tip mass M, must be adjusted so that the eigenfrequency of the cantilever matches the vibration frequency of the targeted system. In this scheme, the active layer area is the area where the piezoelectric transducer is present in either of the two stacking configurations. This layer covers the substrate over a length L_p, which must be adjusted to maximize the transducer's power output. An image of the printed harvester is shown in Figure 6. b) Schemes and representative SEM images of transducers with the two different layer stacks (scale bar: 2 μm). The single stack consists of one layer of P(VDF:TrFE) sandwiched between electrodes. In the multistack arrangement, another electrode is added in between and the two stacks are electrically connected in parallel. PET was used as substrate material, while the electrodes were printed with the conductive polymer PEDOT:PSS. A protective coating was applied on top of the transducers. SEM images were colorized to highlight the different layers of the fully printed transducers.
Since the polar domains of the as-prepared screen-printed semicrystalline ferroelectric films are randomly oriented, they have to be aligned in an external electric field larger than the material's coercive field E_c. This process is called "poling" and is described in detail in the Experimental Section. During poling, a macroscopic remnant polarization P_r builds up, which is proportional to the piezoelectric constants d_ij [31] and thus, as a main figure of merit, is a measure of the transducer's sensitivity (required for both sensing and harvesting) [32]. Representative D(E) poling hysteresis curves of a single- and a multistack sample are shown in Figure S2, Supporting Information, where the positive-up negative-down (PUND) procedure was applied to obtain the switching polarization P(E) from the recorded electric displacement D [33]. The obtained remnant polarization values P_r for the different transducer configurations are summarized in Table 1. An average polarization of 64 ± 2 mC m⁻² was achieved. With the multistack configuration, it has to be noted that although the footprint area of the device is the same as in the single-stack configuration, the electrode area is doubled. This way, a higher charge generation and thus a higher current output can be achieved with the same transducer area, as will be shown later.
As a resonating system, the cantilever's resonance frequency must be fit to the main vibration component in order to achieve efficient energy conversion.In our demonstration, it is the vibration of a rotary vacuum pump driven by an electric engine.The vibration frequency of an electric motor will be close to its rotation speed, which is a function of the input current frequency (often net frequency) and the number of poles. [34]In Europe, that means the typical target frequency will be close to 50 or 25 Hz, reduced by the rotor slip under load condition.Our targeted system for energy harvesting with our VPENG is a rotary pump with a peak acceleration at around 49.5 Hz as measured with a laser Doppler vibrometer.Finite element model (FEM) simulations and experiments were carried out to optimize the cantilever geometry for harvesting from this vibrational source.
Parameter Study from FEM Model
FEM simulations were performed to investigate the vibration modes of the VPENG cantilevers for the different geometry parameters and with adding a tip mass.The geometry is shown in Figure 2a.The eigenfrequency of the transducer, which needs to match the motor's main vibration frequency, depends on the cantilever geometry parameters including the length L c , width W c , substrate thickness d s , and piezoelectric layer thickness d p .It can be further tuned by adding a tip mass M to the edge of the cantilever.Figure 3 shows the simulated eigenmodes of a cantilever with the normalized z-direction strain component and a point tip mass located at 1 mm from the tip.The active, piezoelectric layer couples the local strain with the material polarization and is characterized by the length L p .On the physical samples, this layer consists of PEDOT:PSS electrodes sandwiching the P(VDF:TrFE) layer, either in a single-or multistack configuration as described above.In the simulation, the electrodes are only included as boundary conditions for the voltage field, while P(VDF:TrFE) is implemented as an orthotropic linear elastic piezoelectric material, as shown in previous works. 
[35,36] The change in local piezoelectric polarization, and therefore surface charge density, is proportional to the local strain component parallel to the poling direction (z-direction). Thus, in order to avoid charge cancellations, the strain field amplitude must have the same sign in the area covered by the electrode (approximately defined by L_p). We can see that this is true for a cantilever working in the fundamental mode (eigenmode 1, blue area), as shown in Figure 3. At higher modes (eigenmodes 2-4), the presence of compressive and tensile strains in the z-direction (red and blue regions) generates piezoelectric charges of opposite sign, resulting in partial surface charge cancellation in the active transducer area. Thus, for the higher eigenmodes, more complex electrode designs would be necessary to avoid these cancellation effects, which would make the fabrication process more complicated. To maximize the charge generation and thus the energy output of the VPENG, the optimum transducer active area must be found for each mode of vibration to be used.
Once the substrate material is fixed, the resonance frequency is determined by the geometric parameters and the tip mass. As shown in Figure 4, the main change in resonance frequency comes from the length and substrate thickness of the cantilever. In contrast, the active piezoelectric layer thickness d_p has a very small effect on the resonance frequency compared to the other parameters. The cantilever width W_c has a moderate effect on the resonance frequency compared to the length L_c. This strong dependence on the cantilever length is expected since, from beam theory, the fundamental frequency f_r of a rectangular isotropic cantilever follows the relation [37]

f_r = (λ_r² / 2π) · √(E_young · I / (m · L_c⁴))

where m is the mass per unit length of the cantilever, E_young is the Young's modulus of the cantilever, and I is the area moment of inertia of the cross section, directly proportional to the beam width W_c. λ_r is the frequency number, whose values are the solutions of the characteristic equation

cos(λ_r) · cosh(λ_r) + 1 = 0

As mentioned above, another important factor that comes into play for the harvester energy output is the length of the active transducer area L_p.
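The beam-theory relation can be evaluated numerically: the frequency numbers λ_r solve the clamped-free characteristic equation cos λ cosh λ + 1 = 0, with λ₁ ≈ 1.875 for the fundamental mode. The sketch below finds λ₁ by bisection and evaluates f_r for an illustrative bare PET cantilever; all material and geometry values are assumptions (not the paper's exact stack), and the tip mass, which lowers the frequency, is neglected.

```python
import math

def lambda1(lo=1.0, hi=3.0, tol=1e-10):
    """First root of the clamped-free characteristic equation
    cos(l)*cosh(l) + 1 = 0, found by bisection."""
    f = lambda l: math.cos(l) * math.cosh(l) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def fundamental_frequency(E, I, m, L):
    """f_r = (lambda_1^2 / 2*pi) * sqrt(E*I / (m*L^4)) for a bare cantilever."""
    return lambda1() ** 2 / (2 * math.pi) * math.sqrt(E * I / (m * L ** 4))

# Illustrative PET beam: 82 mm wide, 125 um thick, 29 mm long,
# E = 4 GPa, density 1380 kg/m^3 (assumed values).
W, d, L, rho, E = 0.082, 125e-6, 0.029, 1380, 4e9
I = W * d ** 3 / 12   # area moment of inertia of the rectangular cross section
m = rho * W * d       # mass per unit length
f_r = fundamental_frequency(E, I, m, L)  # ~41 Hz for these assumed values
```

Adding the tip mass then pulls the frequency down further, which is how the physical devices are fine-tuned to the 49.5 Hz target.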
In Figure 5, we calculated the maximum output of transducers with different lengths L_p of the active layer at impedance-matching conditions for a mass load of 0.22 g on the tip. By setting up short-circuit conditions (voltage difference equal to 0) and open-circuit conditions (charge transfer between electrodes equal to 0), the short-circuit current (I_SC) and open-circuit voltage (V_OC), respectively, were obtained at the resonance condition. For doing so, we performed a frequency sweep around the resonant frequency for each length of the active part, as shown in Figure 5. This trend in the electrical output as a function of L_p can be understood as follows. When the cantilever oscillates in the fundamental mode, the strain is concentrated at the clamped edge. With increasing L_p, more charges are generated from the piezoelectric response due to bending strain, meaning an increased current in the dynamic response. However, the strain amplitude decreases along the cantilever length direction, causing a flattening of the slope in the current trend. Furthermore, in the vicinity of the tip mass, the deformation even causes an inverse strain response (normalized z-strain > 0 in Figure 3). Consequently, for large L_p (>18 mm), that is, when the piezoelectric layer edge approaches the tip mass position, the locally generated surface charges have opposite polarity and diminish the total charge amount and current response. These two effects cause a decrease of slope in the short-circuit current curve of Figure 5.
With regard to the voltage output, the trend is related to both the generated charge Q and the transducer's capacitance.
While the capacitance grows with area and is therefore proportional to L_p, the generated charge does not increase equally, for the reasons mentioned above. Thus, the voltage output of the transducer decreases with increasing L_p, as shown in Figure 5. The maximum power output is achieved for an electrode length roughly half of the cantilever length (L_p between 15 and 18 mm).
According to this, a cantilever of 29 mm length and 82 mm width can be tuned to a fundamental frequency of around 49 Hz with a mass of just 0.22 g fixed slightly (1 mm) inward from the cantilever edge. This way, the transducer's eigenfrequency matches the fundamental frequency of a rotary pump, as shown in Figure S1, Supporting Information, and as measured with a laser vibrometer. A transducer active layer length L_p of about half the cantilever length is expected to deliver maximum power output. In contrast, a shorter cantilever may reach the same resonance frequency with a higher mass, but the power output will be reduced due to the smaller active transducer area.
Electrical Output Performance of the VPENGs
In order to investigate the frequency-dependent sensing/harvesting properties of the VPENGs, we integrated the transducers into a 3D-printed protective housing, which also provides a mechanism to clamp the transducers; see Figure 6a. This VPENG box was then mounted on an electromagnetic shaker to simulate a vibrating machine, as shown in Figure 6a. The tested VPENGs were tuned to a fundamental frequency of 49 Hz by adjusting the geometry and tip mass to the values obtained from simulation. A sinusoidal vibration with an RMS acceleration of 4 m s⁻² was applied, which corresponds to an RMS velocity of 11.5 mm s⁻¹. In Figure 6b the output power P_out of the different transducer configurations is plotted as a function of load resistance R_L.
The harvesters showed a maximum power output ranging from 4 to 14 μW at an optimum load between 0.5 and 1 MΩ. The power output values, including the peak power, the open-circuit voltage (V_OC), and the short-circuit current (I_SC), are summarized in Table S1, Supporting Information. From Figure 6 and Table S1, Supporting Information, we can see that the output power increases for thicker layers of the piezoelectric material (red curves, Single-1 to Single-3), which can be attributed to the enhanced output voltage level V_OC. According to Equation (3), the voltage scales inversely with the capacitance and thus proportionally to the layer thickness, whereas the piezoelectric charge is caused by the bending strain, which hardly varies with the layer thickness. The device with the multistack approach (Multi-1) delivers approximately twice the current compared to the single stack with the same active layer thickness (Single-2), while the open-circuit output voltage is the same for both devices. As shown in Figure 6b, the power output is roughly doubled. It can also be noted that the peak power for Single-3 is slightly higher than the one simulated in Figure 5, even though they should be similar: the simulated value is 10.6 μW compared to the experimental 14.4 μW. Both the simulated short-circuit current and open-circuit voltage are smaller than the experimental ones, which may be due to an underestimated damping factor in the simulation. The values are compared in Table S1, Supporting Information.

Figure 5: The theoretical peak load power P_sim (blue line) was calculated as (1/2)·V_OC,RMS·I_SC,RMS, which is the expected power output at the impedance-matching condition. (The small ripples in the calculated power plot are a consequence of finite mesh sizes of the FEM model.) In the model, the cantilever was excited at its natural frequency, around 49.5 Hz, for the different lengths of the active transducer area L_p with an RMS acceleration of 4 m s⁻². The mass is placed 1 mm from the edge of the cantilever. The range of L_p resulting in maximum power output is indicated in gray in the graph.
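The observed optimum load around 0.5-1 MΩ is consistent with modeling the transducer as a sinusoidal current source in parallel with its internal capacitance C_p, for which the optimum resistive load is R_opt = 1/(ωC_p). A sketch with assumed values (C_p and I_rms below are illustrative, not the measured device parameters):

```python
import math

F_VIB = 49.5     # excitation frequency (Hz)
C_P = 4.6e-9     # assumed transducer internal capacitance (F)
I_RMS = 5e-6     # assumed short-circuit RMS current (A)
w = 2 * math.pi * F_VIB

def load_power(r_load):
    """RMS power delivered to a resistive load by a current source
    shunted by the internal capacitance C_p."""
    return I_RMS ** 2 * r_load / (1 + (w * r_load * C_P) ** 2)

r_opt = 1 / (w * C_P)       # ~0.7 MOhm, within the measured optimum range
p_opt = load_power(r_opt)   # ~8.7 uW, within the measured 4-14 uW span
```

This also explains why the multistack device (doubled capacitance) shifts the optimum load to lower resistance while roughly doubling the current.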
The open-circuit voltage level of the VPENG has to be high enough to meet the requirements of the electronics it must power. A commercial energy harvesting chip such as the LTC3588-1 from Analog Devices has an input voltage range of 2.7-20 V [38]. The prototype sensor chip platform from Infineon used here is able to operate down to 1.5 V. However, higher open-circuit voltage levels are preferred to achieve a higher energy stored in a capacitor. Thus, in view of energy storage with a capacitor, a thicker layer might be preferred in order to achieve a higher voltage level. As shown in the next section, the right choice of the rectifier circuit is also important to optimize the energy harvesting condition.
Performance of Vibration Energy Harvesting on a Rotary Pump
After studying the power output of the VPENGs in a simulated environment, the harvesting capability of the EHS was tested in a more realistic setting. To do so, we harvested the vibration energy of a rotary pump. This rotary pump, with a nominal power of 0.55 kW, generates the vacuum of a high vacuum (HV) chamber in our lab and is shown in Figure S3, Supporting Information. The pump operates nonstop in order to hold the required vacuum level in the chamber (apart from venting of the chamber and sample loading) and thus provides a continuous source of vibration to harvest.
The harvested energy was measured by monitoring the voltage level of a storage capacitor connected to a rectifier (cf. Figure 1). Figure S3, Supporting Information, shows an image of the setup and the electric diagram. The EH box was magnetically attached to a magnetic metal sheet that was glued with two-sided adhesive tape to the carcass of the motor. The vibration spectrum of the motor-driven pump was measured at steady state with a laser Doppler vibrometer; a peak velocity of 4.6 mm s−1 was found at 49.5 Hz, which corresponds to around 1.4 m s−2. When the chamber is opened and closed again, the vibration increases slightly during evacuation of the chamber due to the increased workload on the pump. The vibration in this case varies between 4.5 mm s−1 and 6.7 mm s−1. After around 15 min, the motor reaches the steady state. The eigenfrequency of the VPENG was fine-tuned to the vibration frequency by slightly adjusting the tip mass position in situ. The VPENG was connected to a rectifying circuit, which was either a voltage doubler (VD) or a full-wave rectifier (FWR), and fed a 4.7 mF storage capacitor; see Figure S3, Supporting Information. In order to test just the harvesting capabilities, the condition monitoring chip was not connected and the storage capacitor was charged until reaching the saturation voltage (V sat), which depends on the V OC of the used VPENG and the type of rectifier. The stored energy E S (Equation (4)) and output power P EH (Equation (5)) of the EHS can be derived from the storage capacitance C and the voltage V c measured at the capacitor.
Since the power is calculated from the voltage stored in the capacitor, it accounts for all losses occurring during rectification.
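Equations (4) and (5) are not reproduced in this excerpt, but the text states that the stored energy and power are derived from C and the measured capacitor voltage. A minimal sketch of that post-processing, assuming E S = ½CV² and P EH as the discrete time derivative of E S, could look as follows (the voltage samples are invented):

```python
# Stored energy and charging power from a capacitor voltage trace.
# Assumes E_S = 0.5 * C * V^2 and P_EH = dE/dt; samples are invented.
C = 4.7e-3   # storage capacitance, F (value from the paper)

def stored_energy(v):
    """Energy stored in the capacitor at voltage v."""
    return 0.5 * C * v * v

def charging_power(times, voltages):
    """Average power between consecutive voltage samples."""
    powers = []
    for i in range(1, len(times)):
        de = stored_energy(voltages[i]) - stored_energy(voltages[i - 1])
        powers.append(de / (times[i] - times[i - 1]))
    return powers

t = [0.0, 3600.0, 7200.0]   # sample times, s (invented)
v = [0.0, 1.0, 1.5]         # capacitor voltage, V (invented)

e_final = stored_energy(v[-1])
p = charging_power(t, v)
```

Because the power is computed from the voltage actually stored in the capacitor, it automatically accounts for all rectification losses, as the text notes.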
Figure 7 summarizes the measured output energy and power of the EHS with different VPENGs and rectifiers. It is evident that thicker piezoelectric layers lead to higher saturation voltage levels, as we can see in Figure 7a,b. Sample Single-3 shows the highest output, with voltage levels of 2.6 and 3.8 V for the FWR and VD, respectively, after a duration of 8 h. Comparing the two rectifier types, one observes that within the same time window more energy is harvested with the VD than with the FWR. The output peak power, though, is slightly higher for the FWR, but concentrated in a small window at lower voltage levels, as revealed in Figure 7e,f. For the sake of completeness, Figures S5 and S6, Supporting Information, show the power output versus time. These plots indicate that though the VD circuit allows for higher harvested energy, the FWR is more efficient at the onset of charging.
Next, we can compare the two VPENG configurations with similar active layer thickness per stack, that is, Single-2 and Multi-1. As expected, the achieved saturation voltage levels are comparable. However, from Figure 7c,d one can see that Multi-1 has a better energy transfer at lower voltage levels compared to Single-2. By inspecting the power, as shown in Figure 7e,f, we find that the harvesting peak power is roughly doubled with the two stacks connected in parallel, providing higher current levels. When the output voltages needed for powering the user electronics are well below the saturation voltage, shorter harvesting times can thus be achieved with the multistack configuration.
To power the SC, a sufficiently high voltage level is necessary. The integrated sensor chip used in this test requires a minimum of 1.5 V to operate, so the available energy is at most ΔE = ½C(V sat² − (1.5 V)²). From previous tests, we see that sample Single-3 reaches the highest voltage levels; thus, it is the most suited sample for powering the SC. To increase the available energy for powering the chip, we used four 4.7 mF capacitors in parallel, giving a total storage capacity of 18.8 mF. When repeating the charging with a FWR, a capacitor voltage of 3.15 V was achieved after 24 h, close to the saturation voltage. This voltage corresponds to 92.5 mJ of stored energy. The charging characteristics of this run are summarized in Figure S4, Supporting Information. The peak charging power was 2.7 μW and was reached after 2.5 h of charging, when the voltage level was about 1.5 V. It must be remarked that this value differs from the one in Figure 7 due to the different storage capacities.
Once the charging was complete, the condition monitoring chip was connected. This operation is shown in Video 1, Supporting Information. Monitoring the energy consumption of the chip, as plotted in Figure 8, reveals that the most energy-expensive operation is the chip wakeup, consuming 26 mJ with a peak power of 9 mW. Afterward, it requires around 3 mJ to connect with the PC via Bluetooth low energy (BLE) and around 2 mW to perform continuous measurements. After 20 s of measurement, the capacitor voltage dropped to 1.5 V, forcing the chip to shut down.
Using this same setup, we performed a stability test. The transducer was operated continuously for 50 h at realistic vibration conditions (on the rotary motor), which corresponds to around 8.82 million bending cycles at a resonance frequency of 49 Hz. After this 50 h of continuous operation, we performed another 50 h run. There was no significant change in the output power or the voltage level of the storage capacitor between the two runs, as shown in Figure S6a,b, Supporting Information.
Finally, we used a combination of FWR and VD for enhanced energy harvesting, adapted to the required voltage levels and the energy transfer characteristics of the two rectifier types. Since the sensor chip's minimum operating voltage is 1.5 V, which coincides with the peak power point of the FWR, we used a FWR to charge the capacitor up to this voltage, which took 3 h. Then, a VD replaced the FWR in order to achieve higher voltage values and thus higher maximum energy levels within the same total time. The charging curve, as depicted in Figure 9, is smooth, apart from a small discontinuity during the circuit exchange. The saturation voltage is higher for the combined approach, and thus more energy is transferred into the capacitor within the same time. A comparison of the power output between the two experiments (FWR only vs. FWR + VD) in Figure 9b reveals that the energy transfer is high with the FWR after complete discharge of the capacitor, but drops drastically after reaching its power peak at 3 h. The exchange to the VD ensures that the power remains at a high level for a longer time. With this combination, an energy level of 138 mJ was reached after 24 h, which is 43.8% more than when using solely a FWR circuit. This result suggests that for standalone sensor nodes with noncontinual operation, that is, with frequent full discharge of the storage capacitor, a power management circuit that switches between FWR and VD configurations might increase the overall energy harvesting performance. We acknowledge that further research is required to test the practicality of this concept.
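The switching strategy described here (FWR until the capacitor reaches the chip's 1.5 V threshold, then VD) can be expressed as a tiny controller rule. This is a conceptual sketch of the suggested power-management logic, not a description of the actual circuit; the second part simply back-calculates the FWR-only energy implied by the reported 43.8% gain.

```python
# Threshold-based rectifier selection, as suggested at the end of the section.
V_SWITCH = 1.5   # switch-over voltage, V (the chip's minimum operating voltage)

def select_rectifier(v_cap):
    """FWR transfers power best at low voltage; VD reaches higher saturation."""
    return "FWR" if v_cap < V_SWITCH else "VD"

# Implied FWR-only energy from the reported numbers:
# 138 mJ with FWR + VD is 43.8% more than with the FWR alone.
e_combined = 0.138                      # J, after 24 h with FWR + VD
e_fwr_only = e_combined / (1 + 0.438)   # roughly 96 mJ with the FWR alone
```

A real power-management circuit would additionally need hysteresis around the threshold to avoid toggling, which is beyond the scope of this sketch.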
Conclusion
In this work, we demonstrated vibration energy harvesting with fully printed PENGs under realistic vibration conditions. Single- and multistack configurations of the VPENGs were evaluated in combination with full-wave or VD rectification circuits at around 50 Hz. The geometry was optimized with FEM simulations, revealing a maximum power output when the piezoelectric layer covers about half the length of the cantilever with a single tip mass. A maximum power output of 14.4 μW at 11.5 mm s−1 RMS vibration velocity and 49 Hz was obtained experimentally. When tested on a rotary vacuum pump in the application of self-sustained condition monitoring of the driving motor, a peak power of 1.6 μW was achieved during energy storage, for a vibration of 4.6 mm s−1. Here, a single stack with a thicker active material is preferred to achieve higher saturation voltage levels, whereas a parallel connection of two stacks enables faster charging at the onset, at low voltage levels. The overall energy transfer after 6 h was 25 mJ when using a VD for rectification, which was 66% more than the value achieved with a FWR under the same harvesting conditions.
The EHS was able to power a condition monitoring chip after harvesting the vibrations of a rotary pump over 24 h.With the stored energy of 93 mJ, the SC could power up, establish a Bluetooth low-energy connection, and perform measurements plus wireless data transmission for around 20 s.The EHS can thus be directly applied to monitor engines with a more or less constant peak vibration frequency during operation, such as pump or fan drives.
Finally, our study suggests that a combination of a FWR and a VD could be of advantage to speed up the charging after full discharge of the storage capacitor by switching from a FWR to a VD circuit once a low threshold voltage is achieved.With this strategy, the time necessary to harvest the required energy level for initiating the measurement was reduced from 24 h to 15 h compared to using a FWR only.
Experimental Section
VPENG Fabrication: The VPENG consists of a polyethylene terephthalate (PET) substrate cantilever with the piezoelectric transducer screen printed on top, following the procedure described in another study. [39] The electrodes were printed with PEDOT:PSS ink (Clevios SV4), while for the piezoelectric layer we used FC-20 powder from Arkema (monomer ratio VDF:TrFE = 80:20), dissolved in gamma-butyrolactone. The electrical connection lines were printed with Bectron CP 6612 silver ink. For the single- and multistack samples, several layers of the P(VDF:TrFE) ink were printed until the desired thickness indicated in Table 1 was achieved. The single-stack transducers had 1-3 piezoelectric layers sandwiched between the PEDOT:PSS top and bottom electrodes. For the multistack configuration, another electrode design was used. Here, the bottom and top electrodes were printed with the same screen and electrically connected. The second, intermediate electrode was printed after two piezoelectric layers, followed by another two piezoelectric layers, to achieve an electrical parallel connection. A schematic representation of the device's layer configuration is shown in Figure 2b.
Electrical Poling Step: A sinusoidal voltage with a frequency of 1 Hz was applied via the printed connection lines. The amplitude ranged from 700 to 2000 V, depending on the thickness of the piezoelectric layer. We started with a low amplitude, which was increased in successive poling steps until saturation of the polarization was achieved. At the end of the poling, a PUND sequence was applied to obtain the remnant polarization of the piezoelectric layer. [33]

FEM Model: An FEM model of the cantilever was implemented in COMSOL, where only the substrate and the active layer were considered as active physical domains. The substrate was modeled as an isotropic linear elastic material, while the piezoelectric layer used an orthotropic model with COMSOL's in-built piezoelectric coupling equations, following our previous works. [35,36] An isotropic damping parameter of η = 0.015 was used to obtain realistic values at resonance. As electrical boundary conditions, the voltage at the bottom was set to 0 V. On the top boundary, a terminal boundary condition was set. This boundary condition allows two modes, depending on which quantity is set: voltage mode and charge mode. In voltage mode, the terminal was set to 0 V to simulate short-circuit conditions. The charge was then calculated as the integral of the displacement vector over the boundary area. Similarly, in charge mode the charge was fixed to 0 C to simulate open-circuit conditions. This condition allows solving the electrostatic equation and retrieving the voltage at the boundary.

Figure 9. Enhanced energy harvesting using a combination of FWR and VD for improved impedance matching. In the initial charging phase, a FWR was applied, which was then replaced by a VD to obtain a higher voltage, that is, energy level. a) Voltage and energy plot. The rectifier circuits were exchanged at a level of 1.5 V, marked by a small discontinuity in the graph. b) Charging power comparison between only using a FWR and using the combination of circuits. For the FWR alone, the power drops quickly after reaching its peak, while with the FWR + VD combination a much better overall energy transmittance is achieved. (The power was normalized to the respective peak power to compensate for variations in the machine vibration between the two measurement runs.)
Energy Harvesting and Vibration Tests: For testing, the harvesting transducers were fixed to a 3D-printed protective box, as indicated in Figure 6a. The box was 3D printed with an Objet30 Pro V2 machine using VeroBlackPlus photopolymer from Alphacam. The box clamps the transducer sheet in the middle, allowing both sides to oscillate like a butterfly. It also houses magnets in its bottom to ease attachment to magnetic surfaces such as machinery, as shown in the same figure.
The transducers were tested using a Dewesoft DS-PM-20 electromagnetic shaker with a sinusoidal drive signal from a Dewesoft Sirius DAQ. We measured the power output by connecting the transducer to a probe resistor and measuring the voltage drop with the same DAQ unit. The input impedance of the measurement channel was 10 MΩ; the actual load resistance connected to the transducer is therefore the parallel equivalent of the input impedance and the probe resistor. With these parameters, the current and power can be derived as I = V rms/R and P = V rms²/R, respectively, where V rms is the RMS value of the voltage signal and R is the load resistance.
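The load-correction step described above (the probe resistor in parallel with the 10 MΩ DAQ input, then P = V rms²/R) can be sketched as follows; the probe value and the voltage reading are examples, not measurement data:

```python
# Effective load and power from the voltage measured across a probe resistor.
R_DAQ = 10e6   # DAQ input impedance, ohm (value from the text)

def effective_load(r_probe):
    """Parallel combination of the probe resistor and the DAQ input impedance."""
    return r_probe * R_DAQ / (r_probe + R_DAQ)

def load_power(v_rms, r_probe):
    """P = V_rms^2 / R, using the corrected (parallel) load resistance."""
    return v_rms ** 2 / effective_load(r_probe)

# Example: a 1 MOhm probe actually presents about 0.909 MOhm to the transducer.
r_eff = effective_load(1e6)
p = load_power(3.0, 1e6)   # 3 V RMS measured across the probe (example value)
```

Neglecting the DAQ impedance would overestimate the load by about 10% at a 1 MΩ probe, which matters since the optimum load reported above lies in exactly this range.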
For the energy harvesting tests, we connected the harvesting transducers to the rectifier circuit (FWR or VD) and a capacitor. The capacitor voltage was measured with a Keithley 6517A electrometer. From the voltage value, the stored energy and the charging/consumed power were calculated via Equations (4) and (5). The rotary pump was a D16 BCS PFPE model from Trivac. The utilized Infineon chip was a prototype sensing platform module, not commercially available. During operation, temperature, acceleration, and pressure were measured. The chip could also measure a voltage, but this function was not used in the experiment. The sampling rate of the chip was 0.8 Hz, too low for vibration detection but sufficient to monitor the temperature of the pump itself.
Figure 1 .
Figure 1. Energy-autonomous condition monitoring for electric motors. Vibrational energy in industrial environments is harvested with an EHS to power the SC. The EHS consists of a VPENG, a rectifier circuit, and an energy storage device. The SC collects the temperature and vibration spectrum of an electric engine for condition monitoring and failure diagnosis. The collected data are sent to a gateway or computer via BLE.
Figure 2 .
Figure 2. a) Geometry of the VPENG, showing one half of the butterfly-like arrangement and the central clamping (fixed) area. The length L c, width W c, substrate thickness d s, and piezoelectric layer thickness d p of the cantilever, together with the tip mass M, must be adjusted so that the eigenfrequency of the cantilever matches the vibration frequency of the targeted system. In this scheme, the active layer area is the area where the piezoelectric transducer is present in either of the two stacking configurations. This layer covers the substrate over a length L p, which must be adjusted to maximize the transducer's power output. An image of the printed harvester is shown in Figure 6. b) Schemes and representative SEM images of transducers with the two different layer stacks (scale bar: 2 μm). The single stack consists of one layer of P(VDF:TrFE) sandwiched between electrodes. In the multistack arrangement, another electrode is added in between and the two stacks are electrically connected in parallel. PET was used as the substrate material, while the electrodes were printed with the conductive polymer PEDOT:PSS. A protective coating was applied on top of the transducers. The SEM images were colorized to highlight the different layers of the fully printed transducers.
Figure 3 .
Figure 3. Simulated eigenmodes of a cantilever with L c = 30 mm, W c = 8 cm, L p = 16 mm, a PET substrate with d s = 175 μm, and a point tip mass M of 0.2 g. The geometry is the one shown in Figure 2. The color scale represents the strain component in the Z direction, normalized to the maximum value for each eigenvalue. The thin black lines indicate the initial geometry, while the thick rectangle marks the active area, which is the area covered by the piezoelectric material (characterized by length L p). The colored deformed layer shows the shape of the mode as well as the sign of the local strain (>0: tensile strain, <0: compressive strain). The displacement is scaled up for visualization purposes.
Figure 4 .
Figure 4. Variation of the simulated eigenfrequency depending on different sets of model parameters. Each panel shows the frequency shift when two parameters are changed, while the other parameters remain fixed. The header indicates the values of the fixed parameters for each row and column, respectively. As we can see, changes of the tip mass M and length L c result in the largest variation of the eigenfrequency. A substrate thickness of 175 μm is necessary to achieve the targeted eigenfrequency of 49 Hz for a wide range of tip masses and cantilever lengths. For the top left panel, the eigenfrequency range of 49 ± 3 Hz is highlighted.
In the simulations, we varied L p and plotted the current and voltage values together with the calculated theoretical output power P sim. With L p ranging between 1 and 29 mm, the frequency varied only by 2.2 Hz, which may be considered negligible compared to the other tuning parameters. This method introduces some noise in the output, since the meshing changes for each length value, introducing numerical errors. Still, the trends are clear, with the short-circuit current (I SC) increasing and the open-circuit voltage (V OC) decreasing with length L p.
Figure 5 .
Figure 5. Simulation of the short-circuit current I SC (RMS, red line) and open-circuit voltage V OC (RMS, black line) for varying active layer length L p (d p = 20 μm, d s = 175 μm). The theoretical peak load power P sim (blue line) was calculated as ½ V OC,RMS × I SC,RMS, which is the expected power output at the impedance matching condition. (The small ripples in the calculated power plot are a consequence of the finite mesh sizes of the FEM model.) In the model, the cantilever was excited at its natural frequency, around 49.5 Hz for the different lengths of the active transducer area L p, with an acceleration RMS value of 4 m s−2. The mass is placed 1 mm from the edge of the cantilever. The range of L p resulting in maximum power output is indicated in gray in the graph.
Figure 6 .
Figure 6. a) EHS with cantilevers mounted to the clamp mechanism integrated in a 3D-printed housing. The EHS can be placed either on a shaker for controlled excitation and characterization (left) or fixed to the chassis of the rotary pump via integrated magnets (right). For systematic excitation, the shaker applied a sinusoidal displacement, where the frequency was swept from 0 to 100 Hz and the RMS acceleration ranged from 0 to 15 m s−2. The acceleration was measured with a single-axis accelerometer (not visible in the image). The tip mass allows adjusting the resonance frequency to 49 Hz. b) Power curves of VPENGs operating at a 49 Hz sinusoidal vibration with an RMS acceleration of 4 m s−2, which corresponds to an RMS velocity of 11.5 mm s−1. The stacking configuration and piezoelectric layer thickness are shown on the right. In all cases, L c = 29 mm, W c = 82 mm, M = 0.22 g. The eigenfrequency was tuned by adjusting the tip mass position.
Figure 7 .
Figure 7. Harvesting the vibration energy of a rotary pump at 49.5 Hz. a,b) Capacitor voltage and c,d) stored energy when using a FWR or VD as rectifier circuit (C = 4.7 mF). The VD clearly offers higher output voltages, as expected. For samples Single-2 and Multi-1 with similar thickness of the piezoelectric layer, the saturation voltage is comparable; however, the energy transfer at lower voltage levels is noticeably better with the Multi-1 configuration. The overall energy transfer is significantly higher when using a VD as compared to the FWR. e,f) Transferred power versus voltage according to Equation (5). With the VD, a high transfer power can be maintained over a large voltage range. A Savitzky-Golay filter with a 100-point window was applied to filter the high-frequency ripple in the voltage plot.
Figure 8 .
Figure 8. Power consumption of the SC and the energy stored in the capacitor.During operation, it can measure the temperature, the pressure, and the acceleration on the pump.The chip also has the possibility to measure a voltage level.The capacitor voltage was monitored with an electrometer.An example of a measurement is presented in Video 1, Supporting Information.
Table 1 .
Overview of samples, including their thickness, poling voltage, and remnant polarization.
Indicators related to the rational use of medicines and its associated factors
ABSTRACT OBJECTIVE To evaluate indicators related to the rational use of medicines and its associated factors in Basic Health Units. METHOD This is a cross-sectional study carried out in a representative sample of Brazilian cities included in the Pesquisa Nacional sobre Acesso, Utilização e Promoção do Uso Racional de Medicamentos – Serviços, 2015 (PNAUM – National Survey on Access, Use and Promotion of Rational Use of Medicines – Services, 2015). The data were collected by interviews with users, medicine dispensing professionals, and prescribers, and described by prescription, dispensing, and health services indicators. We analyzed the association between human resources characteristics of pharmaceutical services and dispensing indicators. RESULTS At the national level, the average number of medicines prescribed was 2.4. Among the users, 5.8% had an antibiotic prescription, 74.8% received guidance at the pharmacy on how to use the medicines and, for 45.1% of users, all prescribed medicines were from the national list of essential medicines. All the indicators presented statistically significant differences between the regions of Brazil. Dispensing professionals who reported the presence of a pharmacist in the unit with a workload of 40 hours or more per week had 1.82 times higher odds of providing information on how to use the medicines in the dispensing process. CONCLUSION The analysis of prescription, dispensing, and health services indicators in the Basic Health Units showed an unsatisfactory proportion of essential medicines prescription and limitations in the correct identification of the medicine, the orientation of patients on medicines, and the availability of therapeutic protocols in the health services.
INTRODUCTION
Rational use of medicines requires users to use the appropriate medicine for their clinical condition in doses that meet their individual health needs, during an appropriate period and at the lowest cost to themselves and the community 20 .
The non-rational use of medicines may have a negative impact on population health, including avoidable adverse events and microbial resistance 24 . Adverse medicine events are estimated to account for 3.5% of hospital admissions 6 . According to one study, the occurrence of these events resulted in health services expenditure of about $ 21 million per 100,000 adult population 11 . The evaluation of the activities of Pharmaceutical Services (PS) is fundamental to promote the access and rational use of medicines. To assist in the PS evaluation, the World Health Organization (WHO) has developed indicators that can be used reproducibly so that methods are reliable and comparable across different studies and locations 15 . According to a document published in 2007, PS monitoring and evaluation can be performed at three levels. Level I concerns aspects of structure and organization process of the pharmaceutical industry. Level II targets the results of the national drug policy and is measured in public and private services and in households, in addition to the domains of access, quality, and rational use of medicines. The evaluation is conducted by a survey based on visits to state and municipal pharmacy supply centers; to public health units that perform ambulatory care and dispensing of medicines; and to private pharmacies of retail trade, being adaptable to the type of study that will be conducted. Level III details specific aspects of the organization of the pharmaceutical sector 23 . According to a systematic review involving 900 studies conducted in 104 countries, the analysis of indicators on the rational use of medicines indicated that the inappropriate use of pharmaceuticals remains a public health problem 12 . This review included studies performed in public primary health care services. 
A similar scenario was observed in a multicenter study conducted in Brazil in 2004, which observed 40.1% of antibiotic prescriptions, 6.9% of injectable medicines, and 78.3% of medicines present in the list of essential medicines 9,16 . To evaluate the rational use of medicines in Brazil, current data are needed, from a representative sample of the Brazilian population that uses Unidades Básicas de Saúde (UBS -Basic Health Units).
The Pesquisa Nacional sobre Acesso, Utilização e Promoção do Uso Racional de Medicamentos -Serviços (PNAUM -National Survey on Access, Use and Promotion of Rational Use of Medicines -Services) aimed to characterize the organization of pharmaceutical services in the primary care of the Brazilian Unified Health System (SUS) -to promote the access and rational use of medicines -, as well as to identify and discuss the factors that interfere in the consolidation of pharmaceutical services in the cities.
This study integrates PNAUM -Services and aims to evaluate indicators related to the rational use of medicines in the UBS and its associated factors.
METHODS
This study is part of PNAUM – Services, a cross-sectional, exploratory, evaluative study, composed of a survey of information in a representative sample of cities, primary health care services, users, physicians, and professionals responsible for the dispensing of medicines in the five regions of Brazil. The sampling plan considered the several study populations and estimated the various sample sizes for each of these populations 1. The sample size was estimated by an algebraic expression a. The sample sizes adopted in each region were 120 municipalities, 300 health services, and 1,800 users. The sample was stratified into capitals (26 and the Federal District), biggest cities (the 0.5% biggest cities in each region, totaling 27), and smallest cities (546 cities chosen by lot). From these 120 selected municipalities, 60 were selected per region, totaling 300 in the country, in which the health services were chosen by lot. Health posts, health centers or UBS, and mixed units were included in the lot, according to the Cadastro Nacional de Estabelecimentos de Saúde (CNES – National Register of Health Establishments).

a The sample size was estimated by the algebraic expression, where: P = 0.50 is the proportion of individuals to be estimated, chosen because it leads to the largest sample size; Z = 1.96 is the value in the reduced normal curve for 95% confidence intervals; deff is the design effect; d is the sampling error in percentage points.
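The footnote defines P = 0.50, Z = 1.96, deff, and d, but the algebraic expression itself is not reproduced in the text. A standard design-effect-adjusted sample-size formula using exactly those symbols is n = deff · Z²P(1 − P)/d²; the sketch below uses it as an assumption (it is not necessarily the authors' exact expression), and the deff and d values are purely illustrative.

```python
import math

# Design-effect-adjusted sample size (assumed form; see lead-in).
def sample_size(p, z, deff, d):
    """n = deff * z^2 * p * (1 - p) / d^2, rounded up to a whole subject."""
    return math.ceil(deff * z * z * p * (1 - p) / (d * d))

# P = 0.50 maximizes p*(1-p) and hence the sample size, as the footnote notes.
n = sample_size(p=0.50, z=1.96, deff=1.5, d=0.05)   # illustrative deff and d
```

With these illustrative inputs the formula yields a few hundred subjects per stratum; the actual PNAUM sample sizes quoted in the text were fixed per region after such a calculation.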
Face-to-face interviews were conducted with users, physicians, and those responsible for dispensing of medicines in the primary health care services, as well as telephone interviews with those responsible for pharmaceutical services in the cities, using a structured questionnaire specific to each category. The observation of the facilities of pharmaceutical services and availability of medicines were verified by observation script. A manual and a glossary of technical terms were developed for each research instrument. After the training of the interviewers, a pretest was carried out, involving cities with different population sizes, aiming to validate and improve the instruments. The data were collected from July 2014 to May 2015.
PNAUM considered as dispensers the professionals responsible for delivering the medicines to users, who may be pharmacists, nurses, nursing assistants, pharmacy assistants, or other professional category.
The data were described by prescription, dispensing, and health services indicators, outlined for the study. The list of indicators and the criteria for their calculation were based on those recommended by the WHO to evaluate the rational use of medicines 23 , with adaptations and propositions carried out by the researchers ( Table 1). The option of using indicators based on those recommended by WHO was performed to allow the comparison of PNAUM results with those obtained by national and international studies. Table 1. List of prescription, dispensing, and health services indicators related to the rational use of medicines. National Survey on Access, Use and Promotion of Rational Use of Medicines -Services, 2015.
Indicator Criteria for Calculation
Prescription Average number of prescription medicines We considered the number of medicines used by users who had at least one medicine prescribed by a doctor or dentist for the calculation of the average.
Proportion of users with antibiotic prescription
We considered the number of users who used at least one antibiotic as numerator and the number of users who used at least one medicine prescribed by a doctor or dentist as denominator. We considered as antibiotics the bacteriostatic and bactericidal antibacterial medicines, in systemic and topical use presentations. We considered the following categories in the Anatomical-Therapeutic-Chemical classification level (ATC): D06A-Antibiotics for topical use, D06BA-Sulfonamides, D06C-Antibiotics and chemotherapeutics combinations, J01-Antibacterials for systemic use, and J04-Antimycobacterials.
Proportion of users with injectable prescription
The numerator was the number of users who used at least one injectable medicine and the denominator was the number of users who used at least one medicine prescribed by a doctor or dentist.
Proportion of users with all prescribed medicines present in the national list of essential medicines
The numerator was the number of users whose prescribed medicines were all present in the National List of Essential Medicines (Rename) of 2013, in force during the period of data collection.
The denominator was the number of users who used at least one medicine prescribed by a doctor or dentist.
Dispensing
Percentage of professionals dispensing medicines identified with name and dose We considered the number of professionals who reported dispensing medicines identified with name and dose as numerator and the number of professionals who dispense medicines and answered the questionnaire item on this topic as denominator.
Proportion of users who received guidance on medicines at the pharmacy We considered the number of users who reported receiving guidelines on medicines at the pharmacy as numerator and the number of users who had at least one medicine prescribed by a doctor or dentist as denominator.
Health services
Availability of relevant therapeutic protocols in the medical offices, reported by physicians We considered the number of doctors who reported the presence of relevant therapeutic protocols in the health units as numerator and the number of doctors interviewed as denominator.
Availability of a copy of the local or national list of essential medicines, reported by the dispenser We considered the number of dispensing professionals who reported the availability of a copy of the local or national list of essential medicines in the unit as numerator and the number of dispensing professionals interviewed as denominator.
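As a minimal illustration (not part of the survey itself) of how the prescription indicators in Table 1 are computed, the sketch below derives the average number of prescribed medicines and the proportion of users with an antibiotic prescription from a list of user records; the record fields are hypothetical.

```python
# Minimal sketch of two prescription indicators from Table 1.
# The record fields ('medicines', 'has_antibiotic') are hypothetical.

def prescribing_indicators(users):
    """Each user record: {'medicines': int, 'has_antibiotic': bool}.
    Only users with at least one prescribed medicine enter the calculations,
    mirroring the denominators defined in Table 1."""
    prescribed = [u for u in users if u["medicines"] >= 1]
    if not prescribed:
        return None
    avg_meds = sum(u["medicines"] for u in prescribed) / len(prescribed)
    pct_antibiotic = 100.0 * sum(u["has_antibiotic"] for u in prescribed) / len(prescribed)
    return {"avg_medicines": round(avg_meds, 1),
            "pct_antibiotic": round(pct_antibiotic, 1)}

users = [
    {"medicines": 3, "has_antibiotic": False},
    {"medicines": 1, "has_antibiotic": True},
    {"medicines": 2, "has_antibiotic": False},
    {"medicines": 0, "has_antibiotic": False},  # excluded: no prescribed medicine
]
print(prescribing_indicators(users))  # {'avg_medicines': 2.0, 'pct_antibiotic': 33.3}
```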
A descriptive analysis of the study variables was performed, and the indicators were described by region of Brazil. Indicators were compared between regions using the chi-square test for categorical variables and ANOVA for continuous ones. The association between human resources characteristics of pharmaceutical services and dispensing indicators related to the rational use of medicines was analyzed by logistic regression, with calculation of odds ratios (OR). We adopted a 5% statistical significance level and 95% confidence intervals. SPSS version 22.0 was used for the statistical analyses.
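The odds ratios with 95% confidence intervals reported by these logistic regressions can be illustrated, for the simple unadjusted case, by a Wald-type calculation from a 2x2 table; the counts below are hypothetical, not taken from the study.

```python
import math

# Hedged sketch: unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
# Counts are hypothetical; the study's ORs came from logistic regression.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: exposed group (a events, b non-events),
    unexposed group (c events, d non-events)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

o, lo, hi = odds_ratio_ci(60, 40, 50, 50)
print(f"OR = {o:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```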
Participants signed an informed consent form. PNAUM -Services was approved by the National Research Ethics Committee of the National Health Council (Opinion no. 398.131/2013).

Table 2 shows the values of the prescription, dispensing, and health services indicators related to the rational use of medicines. All indicators presented statistically significant differences between regions. The average number of medicines prescribed in Brazil (2.4) was higher than in the North (1.8), Midwest (2.0), and Northeast (2.2), but lower than in the South (2.9). The lowest proportion of users with an antibiotic prescription was in the Southeast (3.8%) and the highest in the North (10.1%). The percentage of users who received guidance on medicines ranged from 71.4% in the South to 85.3% in the Midwest. The availability of a copy of Rename was reported by 80.5% of dispensing professionals in the North region and by 95.1% in the Southeast. The proportion of professionals who reported always providing information on how to use the medicines at the time of delivery of the pharmaceutical product was 90.9%. Dispensing professionals who had participated in PS training in the two years prior to the interview were 1.49 times more likely to dispense medicines identified with name and dose (OR = 1.49, 95%CI 1.36-1.62, p < 0.01) and less likely to provide guidance on how to use the products (OR = 0.86, 95%CI 0.77-0.96, p = 0.01). Dispensing professionals whose units had a full-time pharmacist were 1.82 times more likely to provide guidance on the use of medicines (OR = 1.82, 95%CI 1.11-2.99, p = 0.02). We observed no statistically significant association between working in units with a full-time pharmacist and dispensing of medicines identified with name and dose (OR = 1.17, 95%CI 0.67-2.03, p = 0.58).
DISCUSSION
The analysis of prescription, dispensing, and health services indicators in the UBS pointed to aspects that should be considered in the consolidation of the Política Nacional de Assistência Farmacêutica (PNAF -National Policy of Pharmaceutical Services) to promote the rational use of medicines in primary health care.
The mean number of prescribed medicines observed in this study (2.4) was similar to that found by a multicenter study conducted in Brazil in 2004 9 (2.3) and higher than the range considered standard for the indicator (less than 2) 7,22 . A systematic review of studies conducted in Africa identified an average of 3.1 prescribed medicines 14 . In Brazil, this indicator presented significant regional variation, from 2.2 in the Northeast to 2.9 in the South, which may be related to socioeconomic differences between regions. This result is in line with another cross-sectional study conducted in a sample of adult and older adult users of UBS, in which the prevalence of access to medicines for continuous use was higher in the South region than in the Northeast 17 . The authors attributed the difference to a higher proportion of users belonging to higher socioeconomic levels in the South 17 .
The frequency of antibiotic prescriptions is analyzed to assess their overuse 23 , which drives microbial resistance in the population 13 . In this study, the proportion of patients with antibiotic prescriptions was 5.8%, lower than the average values of 37% for Latin America 12 and 46.8% for Africa 14 observed in systematic reviews 12,14 . The value of this indicator was also lower than the established standard (less than 30%) 7,22 , suggesting that the proportion of antibiotic prescriptions in the UBS user population is satisfactory. In 2011, control of the dispensing of antimicrobial medicines was strengthened by the publication of RDC no. 20/2011 of the National Sanitary Surveillance Agency, which requires retention of the prescription in establishments dispensing products of this therapeutic class. Although the legislation regulates mainly the dispensing process, we can assume that it has influenced prescriber behavior, increasing caution regarding the prescription of antibiotics. However, this hypothesis, and other possible factors associated with antibiotic use, should be evaluated in a study designed for this purpose. We observed regional variation in the proportion of antibiotic prescriptions, which was highest in the North region. This may reflect the epidemiological profile of the region, with a lower prevalence of chronic diseases compared with regions with more favorable socioeconomic conditions, such as the South and Southeast 3 .
A systematic review of international studies showed a frequency of injectable medicines prescriptions of about 20% 12 . In this study, injectable medicines were prescribed to 6.0% of users, a value similar to the proportion observed by a Brazilian study carried out in three different states 9 . Because of the risk of complications for incorrect administration of parenteral medicines, the prescription of injectable products has been restricted to procedures performed in the UBS itself and to medicines that are not available in the oral form in pharmaceutical market, such as insulin.
The use of standardized lists of medicines in health systems contributes to the promotion of quality of care when products are selected according to criteria of health need, efficacy, safety, quality, and cost 22 . In this study, the proportion of medicines present in Rename was 55.2%, lower than that observed in Latin America (71.4%) 12 , Africa (88.0%) 14 , and in another Brazilian study 9 (78.3%) carried out in 2004. The standard considered ideal for this indicator is 100% 7 , so that the proportion found by PNAUM -Services was unsatisfactory.
We observed that, for 45.1% of users, all prescribed medicines were included in Rename. According to a study carried out in a region of China, physicians' knowledge about medicines was associated with greater prescription of essential medicines 19 . Research on factors associated with the prescription of essential medicines in the Brazilian context is needed to inform continuing education policies for prescribers in SUS. Despite the importance of adopting a list of essential medicines for rational prescribing, the limitations of Rename should be emphasized. A Brazilian study compared Rename's medicines with Global Burden of Disease estimates for Brazil 10 . According to an analysis of the 2012 edition of Rename, some causes of disability-adjusted life years (DALY), such as oral conditions, cancer, and psychiatric diseases, were not fully addressed by the medicines on the list 10 .
The identification of medicines with name and dose at the time of dispensing was reported by 67.4% of dispensing professionals. A Brazilian study pointed out that 95.2% of the medicines offered by SUS had data on name, concentration, manufacturer, batch, and expiration date 9 . A study carried out in primary health care in Botswana found that 74% of medicines dispensed at health posts were identified by name 4 . To ensure that the required number of tablets is provided for the treatment of patients in the UBS, blisters are cut, which creates problems for medicine identification. The absence of identification on the primary packaging of pharmaceutical products observed in this study can lead to medication errors, such as the use of expired medicines or their exchange for another product by the user 5 .
The dispensing of medicines involves patient orientation that contributes to the rational use of medicines, covering how to use them, duration of treatment, major adverse reactions, and interactions with other medicines and food 13 . The transmission of these guidelines is fundamental for treatment adherence and the success of pharmacological therapy 21 . In our study, 74.8% of users reported having received information at the pharmacy on how to use their medicines, a proportion lower than that of a study conducted in a Brazilian city (92.5%) 18 . Professionals who reported the presence of a full-time pharmacist in the UBS were more likely to transmit information to users. Dispensing professionals at Brazilian UBS may be pharmacists or assistants supervised by pharmacists or nurses. A pharmacist present for a full 40-hour weekly workload can guide users directly or train assistants to provide guidance on medicines.
Comparing our results with others is difficult because of methodological differences: PNAUM asked whether dispensers always delivered medicines identified with name and dose to patients. Dispensers who had PS training were more likely to dispense medicines identified by name and dose, showing the importance of continuing health education for professionals involved in dispensing pharmaceuticals. Paradoxically, professionals who participated in training were less likely to give guidance on how to use the medicines. This contradiction suggests that the training may have prioritized administrative procedures of dispensing while insufficiently addressing clinical aspects such as patient orientation. However, this hypothesis must be tested by studies evaluating educational activities on the rational use of medicines with respect to content, methods, and impact.
The availability of a copy of Rename was 89.5%, similar to that found in health facilities in Saudi Arabia (90%) 8 and lower than that observed in Pakistan (100%) 2 . However, the methodology used to estimate this indicator differed between the studies.
The therapeutic protocols are based on scientific evidence and reflect a consensus on the treatment of first choice for several clinical conditions, contributing to the promotion of rational prescription. In this study, 46.2% of physicians reported the availability of therapeutic protocols in the medical offices. This indicator was not evaluated in recent studies of the literature, but a Brazilian study conducted in 2004 indicated the availability of protocols for the treatment of tuberculosis in 43.3% of health units 9 .
Among the limitations of the study, we highlight that data on medicine identification and user orientation during the dispensing process were collected through professionals' self-report rather than direct observation. The methodology used to estimate the indicators in PNAUM -Services differed in some situations from that used in other studies, which made comparison of results difficult. Despite these limitations, our study presents an unprecedented panorama of the rational use of medicines in a representative sample of the UBS user population in Brazil.
Thus, we observed an unsatisfactory proportion of prescription of essential medicines and limitations in the correct identification of the medicine, guidance to patients on medicines, and availability of therapeutic protocols in the health services. The statistically significant difference in the values of the indicators between the regions of Brazil suggests that regional specificities should be considered in the formulation of policies aimed at increasing the rationality of the use of pharmaceuticals. The unsatisfactory proportion of prescription of essential medicines in the UBS points out the need for training SUS prescribers on the rational use of medicines.
Regarding the dispensing process, educational activities for professionals of primary health care units and their supervision by full-time pharmacists may help users to use the right medicine for their clinical condition and have access to guidelines on their pharmacological treatment.
Measures that qualify health, prescription, and dispensing services are needed to promote the rational use of medicines, which is one of the main goals of PNAF.
"year": 2017,
"sha1": "637b083197819b36caa2a3d24b52c7760c4eab94",
"oa_license": "CCBY",
"oa_url": "https://www.revistas.usp.br/rsp/article/download/139771/135047",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "637b083197819b36caa2a3d24b52c7760c4eab94",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Burn Injury Alters the Intestinal Microbiome and Increases Gut Permeability and Bacterial Translocation
Sepsis remains one of the leading causes of death in burn patients who survive the initial insult of injury. Disruption of the intestinal epithelial barrier has been shown after burn injury; this can lead to the translocation of bacteria or their products (e.g., endotoxin) from the intestinal lumen to the circulation, thereby increasing the risk for sepsis in immunocompromised individuals. Since the maintenance of the epithelial barrier is largely dependent on the intestinal microbiota, we examined the diversity of the intestinal microbiome of severely burned patients and a controlled mouse model of burn injury. We show that burn injury induces a dramatic dysbiosis of the intestinal microbiome of both humans and mice and allows for similar overgrowths of Gram-negative aerobic bacteria. Furthermore, we show that the bacteria increasing in abundance have the potential to translocate to extra-intestinal sites. This study provides an insight into how the diversity of the intestinal microbiome changes after burn injury and some of the consequences these gut bacteria can have in the host.
Introduction
The gastrointestinal tract contains over 100 trillion microbes, termed the microbiota, that provide numerous benefits for the host such as metabolism and de novo synthesis of nutrients, protection against pathogenic microbes, and immune development and function [1]. Feedback between these organisms and the immune system is necessary for establishing tolerance along mucosal surfaces and maintaining the gut epithelial barrier [2]. Dysbiosis of the healthy intestinal microbiome is associated with numerous disease states: inflammatory bowel disease (IBD), autism, obesity, rheumatoid arthritis, and diabetes [3]. In IBD, it is suggested that alterations of the healthy microbiome activate the mucosal immune response, which increases intestinal permeability and allows for the translocation of microbes or microbial products into the circulation, thereby adversely impacting the host [4].
Sepsis is the leading cause of death in patients that suffer from severe trauma. It is hypothesized that sepsis stems from bacterial infections, toxins, or metabolic products that activate pattern recognition receptors and lead to a systemic inflammatory response in immunocompromised individuals [5]. Conversely, the healthy intestinal microbiome acts as a physiological microbial barrier which keeps commensal opportunistic pathogens in check by resisting microbial colonization. Therefore, it is important to understand how this microbiome is altered following injury and the role these commensal bacteria play in potentiating gut barrier dysfunction, bacterial translocation, and ultimately sepsis after injury.
Burn injury is one of the most common forms of trauma, and in patients with severe burns, 75% of all deaths are related to sepsis or infectious complications arising from injury [6]. Following insult, there is an immediate systemic inflammatory response that spreads throughout the body and affects secondary organs [7]. In addition to the skin, there is reported inflammation in the lungs, liver, and intestines after burn [8]. In the context of the gut, previous research has shown that burn injury leads to a mesenteric vasoconstriction and produces a hypoxic environment for the gut [9]. Subsequent, reperfusion of blood to the tissue produces drastic fluctuations of oxygen levels exacerbating cell stress, cell death, and ultimately leading to a breakdown of the epithelial barrier marked by increased intestinal permeability and bacterial translocation to mesenteric lymph nodes (MLN) [10]. The translocation of bacteria from the gut to MLN has been previously shown to correlate with sepsis [11]. Furthermore, there are numerous studies which suggest that Gram-negative bacterial infections play an important role in potentiating sepsis [12,13].
Therefore, we asked whether burn injury alters the homeostatic environment of the gut which allows for changes in the intestinal microbiome that favors the overgrowth of Gramnegative aerobic bacteria. This overgrowth of gut bacteria in combination with increased intestinal permeability may allow for the translocation of these bacteria to extra-intestinal sites increasing the risk of bacterial infections and predisposing patients to sepsis.
Ethics Statement
Patient Samples. Loyola University Chicago Health Sciences Division Institutional Review Board (IRB) approved these studies and informed written consent was obtained from all subjects (burn patients and controls) except burn patients with Fecal Management System (FMS). Samples from burn patients with FMS did not require a consent as the IRB waived the need for consent from the group of patients with FMS and all patient data was de-identified prior to analysis.
Feces samples were obtained from 4 burn patients admitted to Loyola University Medical Center, Maywood, IL, from December 2010 to November 2011; these patients sustained 25%, 32%, 44%, and 57% total body surface area (TBSA) burns, and samples were obtained 5-17 days post injury. The median age of the burn patients was 49 ± 9.7 years (range 36-59). One female and three males were included in this study. A fecal management system, routinely emplaced for burned patients, was used to collect fecal samples.
Patients were selected who met the following criteria: adult male or female over 18 years of age who sustained a full-thickness burn injury >20% TBSA; without pre-existing clinical infections; without historical evidence of gastrointestinal diseases such as ulcerative colitis, Crohn's disease, or celiac disease; without historical evidence of gastrointestinal Clostridium difficile infection; no antibiotic use (other than surgical prophylaxis); and without peritonitis, AIDS, immunosuppressing medications, or metastasized cancer.
Control Group. Patients with physiologically insignificant burns, i.e., superficial burns of less than 10% of total body surface area (TBSA), were designated as controls. The median age of the control group was 39.6 ± 16.84 years (range 23-74). The average burned surface area of the control group was 4.77 ± 2.44% TBSA (range 1-8%). The control group included one female and seven males. A single fecal sample was obtained from each of the 8 control patients and used for comparison with those with significant burn injury. These patients did not require the use of an FMS and were subject to the same inclusion and exclusion criteria described above.
Animals. Male C57BL/6 mice, 8-9 weeks old and weighing 22-25 g, were obtained from Charles River Laboratories. All experiments were conducted in accordance with the guidelines set forth by the Animal Welfare Act and were approved by the Institutional Animal Care and Use Committee at the Loyola University Chicago Health Sciences Division. The identification number assigned to our animal care and use protocol is IACUC 2012067. Animals were euthanized by CO2 asphyxiation.
Burn Injury Procedure
Mice were anesthetized with a xylazine (80 mg/kg) and ketamine (1.25 mg/kg) cocktail and their dorsal surfaces shaved. Anesthetized mice were placed in a template exposing ~20% TBSA, as calculated by the Meeh formula [14]. The mice were divided into two treatment groups, receiving either burn or sham injuries. The burn group was submerged in a water bath set to ~85°C for ~9 seconds, while the sham group was submerged in a water bath set to 37°C. Following the burn or sham procedure, all animals were resuscitated with 1 ml of saline i.p. This procedure models a ~20% TBSA full-thickness third-degree burn with ~15-20% mortality within 24-48 hours after injury. The burn injury procedure described here has been widely used in previous studies [15][16][17]; it is performed under full anesthesia and has been histologically proven to incur a full-thickness, insensate lesion [18]. The entire thickness of the dermis, including peripheral sensory endings, is destroyed [18]. The health of the mice was monitored constantly for four hours after the procedure to ensure recovery from anesthesia. Mice were then returned to the animal care facility, given food and water ad libitum, and monitored for postoperative complications twice a day until the experiment was completed. Humane endpoints were defined by overt signs and symptoms of sepsis (piloerection, squeaking, sensitivity to touch, tearing). No animals met these criteria; therefore, no animals were euthanized prior to the experimental endpoints (one or three days post burn). Ten of 66 mice died following burn injury before exhibiting these signs of sepsis. Mice were sacrificed on days one and three following injury.
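The Meeh formula used to size the burn template can be sketched as follows. It estimates total body surface area as A = k * W^(2/3) with W in grams; the constant k = 9.82 used here is a commonly cited value for mice and is an assumption, since the text does not state the value used.

```python
# Sketch of the Meeh formula behind the ~20% TBSA template.
# k = 9.82 is an assumed constant for mice (not stated in the study).

def meeh_surface_area_cm2(weight_g, k=9.82):
    """Total body surface area in cm^2: A = k * W^(2/3), W in grams."""
    return k * weight_g ** (2.0 / 3.0)

def template_area_cm2(weight_g, pct_tbsa=20.0, k=9.82):
    """Template opening needed to expose pct_tbsa of the body surface."""
    return meeh_surface_area_cm2(weight_g, k) * pct_tbsa / 100.0

# A 24 g mouse has roughly 81.7 cm^2 of surface area under these assumptions,
# so a 20% TBSA burn corresponds to a ~16.3 cm^2 template opening.
print(round(meeh_surface_area_cm2(24.0), 1), round(template_area_cm2(24.0), 1))
```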
DNA and RNA Purification
One and three days after injury, the intestines of the mice were surgically removed and opened, and luminal contents were collected from the distal 5 cm of the small intestine and from the whole large intestine beginning at the cecum. RNA was purified from this region of the small intestine and from the large intestine tissue using the RNeasy Mini Kit in combination with DNase digestion, according to the manufacturer's protocol (Qiagen, Valencia, CA, USA). For the human patient samples, the FMS was used to flush the bowel and collect feces from the burn patients. Control patients defecated normally, and samples from this group were collected directly into sterilized cups. Genomic bacterial DNA was purified from mouse and human fecal samples using the Qiagen DNA Stool Mini Kit, with an initial brief sonication step in lysis buffer ASL and a high-temperature 95°C incubation step to improve bacterial cell lysis.
Microbial Community Structure Analysis
Genomic DNA (gDNA) from the feces of the small and large intestine of mice, and from human stool samples, was PCR amplified and prepared for next-generation sequencing (NGS) using a modified two-step targeted amplicon sequencing approach, similar to that described previously [19,20]. Genomic DNA was initially amplified with primers 27F and 534R, targeting the V1-V3 variable regions of bacterial small subunit (SSU) ribosomal RNA (rRNA) genes. The primers contained 5' common sequence tags (known as common sequence 1 and 2, CS1 and CS2) as described previously [21]. The forward primer CS1_27YF (ACACTGACGACATGGTTCTACAAGAGTTTGATCCTGGCTCAG) and the reverse primer CS2_534R (TACGGTAGCAGAGACTTGGTCTATTACCGCGGCTGCTGG) were synthesized by Integrated DNA Technologies (IDT; Coralville, Iowa) as standard oligonucleotides; in each sequence, the common sequence tag precedes the locus-specific primer. PCR reactions were performed according to the Human Microbiome Project (HMP) 16S 454 sequencing protocol [22], with some modifications. PCR amplifications were performed in 10 microliter reactions in 96-well plates. A mastermix for the entire plate was made using the 2X AccuPrime SuperMix II (Life Technologies, Gaithersburg, MD). The final concentration of primers was 500 nM. Between 10 and 50 ng of genomic DNA was added to each PCR reaction. Cycling conditions were as follows: 95°C for 5 minutes, followed by 28 cycles of 95°C for 30", 56°C for 30" and 68°C for 5'. A final, 7-minute elongation step was performed at 68°C. Reactions were verified to contain visible amplification using agarose gel electrophoresis, in addition to no visible amplification in the no-template control, prior to the second stage of PCR amplification.
A second PCR amplification was performed in 10 microliter reactions in a 96-well plate to incorporate Illumina sequencing adapters and sample-specific barcodes into amplicon pools. A mastermix for the entire plate was made using the 2X AccuPrime SuperMix II. Each well received a separate primer pair, obtained from the Access Array Barcode Library for Illumina Sequencers. The final concentration of each primer was 400 nM, and each well received a separate primer set with a unique 10-base barcode (Fluidigm, South San Francisco, CA; Item# 100-4876). Separate reactions with unique barcodes were included for a positive control, a no-template control (reaction 1), and a second no-template control reaction containing only Access Array Barcode library primers. Cycling conditions were as follows: 95°C for 5 minutes, followed by 8 cycles of 95°C for 30", 60°C for 30" and 68°C for 30". A final, 7-minute elongation step was performed at 68°C. PCR yield of positive and negative controls and select samples was validated with Qubit fluorometric quantitation on the Qubit 2.0 fluorometer (Life Technologies) and with sizing and quantification on an Agilent TapeStation 2200 device with D1000 ScreenTape (Agilent Technologies, Santa Clara, California). After confirming no amplification in the negative controls, samples were pooled in equal volume and purified using solid phase reversible immobilization (SPRI) cleanup, implemented with AMPure XP beads at a ratio of 0.6X (v:v) SPRI solution to sample. This ratio removes DNA fragments shorter than 300 bp from the pooled libraries. Final quality control was performed using TapeStation 2200 and Qubit analysis, prior to dilution to 4 pM for sequencing on an Illumina MiSeq. The pool was loaded on a MiSeq v3 flow cell at a concentration of 5.5 pM and sequenced in 2x300bp paired end format using a 600 cycle MiSeq v3 reagent cartridge.
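Diluting a pooled library to a picomolar loading concentration rests on the standard dsDNA mass-to-molarity conversion, nM = (ng/uL * 1e6) / (660 g/mol per bp * mean fragment length in bp). A small sketch, with hypothetical concentration and fragment-length values rather than the pool's actual numbers:

```python
# Sketch of the dsDNA molarity conversion behind loading a MiSeq pool at a
# given pM concentration. Input values below are hypothetical.

def library_molarity_nM(conc_ng_per_ul, mean_fragment_bp):
    """nM = ng/uL * 1e6 / (660 g/mol per bp * mean fragment length)."""
    return conc_ng_per_ul * 1e6 / (660.0 * mean_fragment_bp)

def dilution_factor(conc_ng_per_ul, mean_fragment_bp, target_pM):
    """Fold dilution needed to reach the target loading molarity."""
    return library_molarity_nM(conc_ng_per_ul, mean_fragment_bp) * 1000.0 / target_pM

# A 2 ng/uL pool with ~600 bp fragments is ~5.05 nM, so reaching a 5.5 pM
# load requires roughly a 918-fold dilution.
print(round(library_molarity_nM(2.0, 600), 2), round(dilution_factor(2.0, 600, 5.5)))
```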
Library preparation was performed at the DNA services (DNAS) facility, within the Research Resources Center (RRC) at the University of Illinois at Chicago (UIC). Library sequencing was performed at the Michigan State University (MSU) Research Technology Support Facility (RTSF).
Raw sequence data were imported into the software package CLC genomics workbench (v7.0; CLC Bio, Qiagen, Boston, MA). Sequences were quality trimmed (Q20) and reads shorter than 200 bases were removed. Due to amplicon size and quality trimming, forward and reverse reads could not be consistently merged. Therefore, only the forward read was used for community analyses. The trimmed sequences were exported as FASTA files. Subsequently, FASTA files were processed through the software package QIIME. Briefly, sequences were screened for chimeras using the usearch61 algorithm [23], and putative chimeric sequences were removed from the dataset. Subsequently, each sample sequence set was sub-sampled to the smallest sample size to avoid analytical issues associated with variable library size [24]. Sub-sampled data were pooled and renamed, and clustered into operational taxonomic units (OTU) at 97% similarity. Representative sequences from each OTU were extracted, and these sequences were classified using the "assign_taxonomy" algorithm implementing the RDP classifier, with the Greengenes reference OTU build [25,26]. A biological observation matrix (BIOM; [27]) was generated at taxonomic levels from phylum to genus using the "make_OTU_table" algorithm. The BIOMs were imported into the software package Primer6 for statistical analysis and visualization using group-average clustering, non-metric multidimensional scaling (NMDS), and analysis of similarity (ANOSIM), as described previously [28,29]. Differences in the relative abundance of individual taxa between a priori defined groups (e.g., control and burn patients) were tested for significance using the "group_significance" algorithm, implemented within QIIME. Tests were performed using the non-parametric Kruskal-Wallis one-way analysis of variance, generating a Benjamini-Hochberg false-discovery rate (FDR) corrected p-value. Taxa with an average abundance of <1% across the entire sample set were removed from such analyses.
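The per-taxon Kruskal-Wallis p-values above are corrected with the Benjamini-Hochberg false-discovery-rate procedure. As a minimal illustration of that step (the p-values below are hypothetical), the step-up correction can be sketched as:

```python
# Sketch of the Benjamini-Hochberg step-up FDR correction applied to
# per-taxon p-values. Example p-values are hypothetical.

def benjamini_hochberg(pvals):
    """Return BH FDR-adjusted q-values in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    q = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(reversed(order)):           # walk from largest p down
        rank = m - k                                  # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min                            # enforce monotonicity
    return q

qvals = benjamini_hochberg([0.01, 0.04, 0.03, 0.20])
print([round(x, 4) for x in qvals])  # [0.04, 0.0533, 0.0533, 0.2]
```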
Quantitative Analyses of Fecal Microbiome
Real time quantitative PCR (qPCR) was used to quantify bacterial SSU (16S) rRNA gene abundance, as described previously [30]. Primer sets targeting SSU rRNA genes of microorganisms at the domain level (i.e., Bacteria) and at the family level (i.e., Enterobacteriaceae) were used. Primers included 340F (ACTCCTACGGGAGGCAGCAGT) and 514R (ATTACCGCGGCTGCTGGC) for domain-level analyses and 515F (GTGCCAGCMGCCGCGGTAA) and 826R (GCCTCAAGGGCACAACCTCCAAG) for Enterobacteriaceae analyses. Primers were synthesized by Invitrogen. qPCR master mixes contained 1X iTaq Universal SYBR Green Supermix (Bio-Rad) and 300 nM forward and reverse primers. For standards, 10-fold dilutions were made from purified genomic DNA from reference bacteria as described previously [30]. Reactions were run at 95°C for 3', followed by 40 cycles of 95°C for 15" and annealing/extension at 63°C (Bacteria) or 67°C (Enterobacteriaceae) for 60". Reactions were performed using a Step One Plus qPCR instrument (Applied Biosystems).
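Quantification against the 10-fold standard dilutions amounts to fitting Ct = slope * log10(copies) + intercept over the standards and inverting it for unknowns. A sketch with hypothetical Ct values (a perfectly efficient assay has a slope near -3.32):

```python
# Sketch of absolute quantification from a qPCR standard curve.
# Standard and unknown Ct values below are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert Ct = slope*log10(copies) + intercept for an unknown sample."""
    return 10 ** ((ct - intercept) / slope)

log10_copies = [6, 5, 4, 3]                # 10-fold standard dilution series
cts          = [15.0, 18.32, 21.64, 24.96]
slope, intercept = fit_line(log10_copies, cts)
print(round(slope, 2))                      # -3.32 -> ~100% efficiency
print(copies_from_ct(20.0, slope, intercept))  # ~3.1e4 copies for Ct = 20
```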
Histology
Small, 3-5 mm sections of tissue were taken from the ileocecal wall and fixed in Carnoy solution overnight. Paraffin blocks were prepared by the Loyola University Health Sciences Division Processing Core, 5 μm sections were cut, and one slide from each animal was H&E stained for tissue pathology. Fluorescent in situ hybridization staining was performed as described previously, with minor adjustments [31]. Slides were deparaffinized, dried, and incubated with the indicated probes at a final concentration of 1 ng/μl in hybridization buffer (0.9 M NaCl, 20 mM Tris-HCl, pH 7.5, 0.1% SDS) overnight at 50°C in a dark, humidified container. The probe sequences, purchased from Invitrogen, were as follows [30,[32][33][34][35]:
Universal bacterial probe EUB338 (Alexa 555): 5'-GCTGCCTCCCGTAGGAGT-3'
Enterobacteriaceae probe ENTBAC 183 (Alexa 488): 5'-CTCTTTGGTCTTGCGACG-3'
Following the incubation, the slides were washed three times for 15 min in prewarmed wash buffer (0.9 M NaCl, 20 mM Tris-HCl, pH 7.5, 0.1% SDS) at 50°C. The slides were air dried, counterstained, and mounted using ProLong Gold Antifade Reagent with DAPI (Molecular Probes). The sections were imaged using a Zeiss Axiovert 200m fluorescent microscope and images were processed with Axiovision software.
Intestinal Permeability
One day after the burn or sham injury procedure, the mice were gavaged with 0.4 ml of 22 mg/ml FITC-dextran in PBS. After 3 hours, blood was drawn and the mice were sacrificed. The blood was centrifuged to collect the plasma, which was read fluorometrically at 480 nm excitation and 520 nm emission wavelengths. The concentration of FITC-dextran in the plasma was determined by relating its fluorescence to a standard curve of known FITC-dextran concentrations.
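The standard-curve conversion described here can be sketched as follows. This is a generic illustration: the standard concentrations and fluorescence readings are hypothetical, not the study's data, and the curve is assumed linear over the measured range.

```python
# Hypothetical standard curve: known FITC-dextran concentrations (ug/ml)
# and their fluorescence readings (480 nm excitation / 520 nm emission).
std_conc = [0.0, 1.0, 2.0, 4.0, 8.0]
std_fluor = [5.0, 105.0, 205.0, 405.0, 805.0]

n = len(std_conc)
mean_x = sum(std_conc) / n
mean_y = sum(std_fluor) / n
# Ordinary least-squares fit: fluorescence = slope * concentration + intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(std_conc, std_fluor)) \
        / sum((x - mean_x) ** 2 for x in std_conc)
intercept = mean_y - slope * mean_x

def fluor_to_conc(fluorescence):
    """Invert the standard curve to estimate plasma FITC-dextran concentration."""
    return (fluorescence - intercept) / slope

plasma_conc = fluor_to_conc(305.0)  # ~3.0 ug/ml on this synthetic curve
```

In practice a plate reader exports many wells at once, and the same inversion is applied to each sample reading.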
Intestinal Expression of Claudins 4 and 8. RNA from the distal small intestine tissue and large intestine was purified as described above and reverse transcribed to cDNA using the High Capacity cDNA Reverse Transcription Kit (Life Technologies). Expression levels of claudins 4 and 8 were quantified by qPCR using TaqMan primer probes and TaqMan Fast Advanced Master Mix (Life Technologies), and ΔCt calculations were conducted using the endogenous control gene Gapdh.
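The ΔCt normalization against Gapdh can be sketched with the standard 2^-ΔΔCt calculation. The Ct values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical Ct values for claudin 4, with Gapdh as the endogenous control
ct_target_sham, ct_gapdh_sham = 26.0, 20.0   # sham sample
ct_target_burn, ct_gapdh_burn = 27.3, 20.0   # burn-injured sample

# Delta-Ct: target gene normalized to Gapdh within each sample
dct_sham = ct_target_sham - ct_gapdh_sham
dct_burn = ct_target_burn - ct_gapdh_burn

# Delta-delta-Ct and fold change relative to sham (2^-ddCt)
ddct = dct_burn - dct_sham
fold_change = 2 ** -ddct   # ~0.41 here, i.e. roughly a 60% reduction vs. sham
```

A fold change below 1 indicates reduced expression relative to the control group, as reported for the claudins in this study.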
Cultivation of Microorganisms. The mesenteric lymph nodes were aseptically removed, weighed, and homogenized in PBS to achieve a 50 mg/ml (MLN wt/vol) concentration. Equal amounts of homogenate were plated on Tryptic soy agar plates with 5% sheep blood and MacConkey agar to grow total and Gram-negative bacteria, respectively. The plates were cultured aerobically in a 37°C incubator with 5% CO2 for 24 hours.
Statistical Analysis
Data are expressed as mean ± standard error of the mean (SEM). Differences between groups were determined by ANOVA with Tukey's post hoc test or Student's t-test using GraphPad InStat. P<0.05 was considered statistically significant.
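The one-way ANOVA underlying these group comparisons can be sketched in a few lines. This is a generic illustration on synthetic data (the study itself used GraphPad InStat, and Tukey's post hoc test would follow a significant F):

```python
def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (each group mean vs. the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (each value vs. its own group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic example: three groups (e.g., sham, day-1 burn, day-3 burn)
F = one_way_anova_F([[1.0, 1.2, 0.9], [2.0, 2.1, 1.9], [1.1, 1.0, 1.2]])
```

The F statistic is then compared to the F distribution with (k-1, n-k) degrees of freedom; statistics packages report the corresponding p-value directly.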
Data Access
The amplicon sequence data from this study have been submitted to the NCBI Sequence Read Archive (SRA; http://www.ncbi.nlm.nih.gov/Traces/sra/sra.cgi) under BioProject PRJNA273295, accession number SRP052710. Sequences derived from mice were uploaded as two independent FASTQ files representing forward and reverse reads from each sample. For sequences derived from human feces, sequence reads were imported into the software package CLC Genomics Workbench and mapped against the Hg19 human genome reference. Reads mapping to the human genome (<0.05%) were removed from the dataset, and single FASTQ files containing both forward and reverse reads were provided to the SRA.
Burn Injury and the Structure of the Human Intestinal Microbiome
To examine the structure of the intestinal microbiome after burn injury, deep sequencing of bacterial SSU rRNA genes (V1-V3 region) was performed using a PCR-NGS approach. A minimum of 40,000 raw sequences was generated per sample. After chimera removal and subsampling, a biological observation matrix (BIOM) was generated using 25,000 sequences per sample. The fecal microbial community structure of control and burn injury patients was analyzed, revealing a substantial and significant effect of burn injury (Fig 1). An analysis of similarity (ANOSIM) demonstrated a significant difference between control and burn injury patients (Global R = 0.632; p = 0.2%, 999 permutations; control (N = 8 individuals and 8 total samples) and burn injury patients (N = 4 individuals and 10 total samples)). Fecal microbial community richness at the family level was significantly higher for control patients relative to burn injury patients (an average of 32.63 families vs. 27.60 families; p < 0.02, two-tailed t-test, unequal variance); no other calculated indices (i.e., Pielou's evenness or the Shannon index) were significantly different (S1 and S2 Tables).
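The diversity indices referenced here, the Shannon index and Pielou's evenness, can be computed from taxon counts as follows. The counts are synthetic, not the study's data:

```python
from math import log

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)

def pielou_evenness(counts):
    """Pielou's evenness J' = H' / ln(S), where S is the observed richness."""
    richness = sum(1 for c in counts if c > 0)
    return shannon(counts) / log(richness)

# Synthetic family-level counts for one sample
counts = [500, 300, 150, 50]
H = shannon(counts)          # ~1.14
J = pielou_evenness(counts)  # ~0.82 (J' = 1 would mean all taxa equally abundant)
```

Richness is simply the number of taxa with nonzero counts; evenness separates "how many taxa" from "how equally they are represented", which is why the two can diverge as reported above.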
The gut microbial communities of control patients clustered together, and were divergent from all fecal samples of burn injury patients, regardless of sampling times. The two patients with the greatest TBSA had the most similar microbial community structure, regardless of sampling time, with Bray-Curtis similarity of >60% (Fig 1). Initially, the two patients with lower TBSA had distinct fecal microbial communities from those of patients with 44 and 57% TBSA. However, 11 days after injury, the fecal microbiome of the patient with 32% TBSA shifted towards those of the patients with 44% and 57% TBSA (Fig 1). The patients with 32%, 44%, and 57% TBSA died from sepsis, while the patient with 25% TBSA survived. Fecal microbial communities of control patients were dominated by bacteria from the families Bacteroidaceae, Lachnospiraceae, and Ruminococcaceae (Fig 1B), confirming earlier reports of the dominant intestinal bacteria [36][37][38]. The fecal microbiome of burn patients was significantly different from those of control individuals, and bacteria from the families Bacteroidaceae, Enterobacteriaceae, and Lachnospiraceae were the most abundant taxa in the fecal microbiome of burn injury patients (Fig 1B). Dramatic and significant differences in the relative abundance of these families were observed in fecal microbiome of control and burn patients (Table 1). In particular, the relative average abundance of bacteria from the family Enterobacteriaceae was higher in burn injury patients relative to control patients (average 31.9% to 0.5%). Conversely, significant decreases in the relative abundance of bacteria from the families Bacteroidaceae, and Ruminococcaceae were observed ( Fig 1B; Table 1).
The dramatic increase in the relative abundance of bacteria from the family Enterobacteriaceae was confirmed using quantitative PCR. Quantitative analyses of 16S rRNA genes of Enterobacteriaceae revealed a 37-fold increase in the relative abundance of Enterobacteriaceae in feces from burn injury patients relative to those from control patients (Fig 1C). Most, but not all, of the sequences assigned to the family Enterobacteriaceae could not be classified to the genus level; however, bacteria from the genera Citrobacter, Enterobacter, Erwinia, Escherichia, Klebsiella, Proteus, Serratia, and Trabulsiella were detected. The most abundant taxon (OTU) detected in burn patients had a 16S rRNA gene sequence that was highly similar (>99.5%; 279/280 matching bases) to that of the adherent invasive E. coli strain O83:H1. The representative gene sequence of this taxon was 100% identical to a number of strains of bacteria from the genera Enterobacter and Escherichia. This single taxon represented nearly 60% of all Enterobacteriaceae sequences recovered in all samples.
The Effect of Burn Injury on the Mouse Intestinal Microbiome
The effect of burn injury on the gut microbial community was examined in a mouse model experimental system. These studies were performed to determine (a) if the shift in gut microbial community structure observed in human patients was reproducible in mice; (b) if similar microorganisms developed in the gut of burn injury mice as in humans; and (c) if differences in community structure were observed in multiple locations in the gastrointestinal tract. Microbial community structure was assessed in the large and small intestines of mice, one and three days after burn or sham burn treatment. Genomic DNA extracts were processed as described for human fecal samples, and a biological observation matrix (BIOM) was generated using 25,000 sequences per sample (Fig 2). Significant differences in microbial community structure between large and small intestine were observed, independent of treatment or date (ANOSIM, Global R = 0.619, p < 0.002, 999 permutations). The effect of burn injury on the microbial community structure in the large intestine was smaller than that in the small intestine. Nonetheless, a moderate, but not significant, shift was observed in large intestine samples (ANOSIM, Global R = 0.218, p = 0.059, 999 permutations) across all time points. When samples from only the first day post-burn were considered, a significant effect was observed (ANOSIM, Global R = 0.872, p = 0.008, 126 permutations). A similar effect was observed in the small intestine samples (ANOSIM, Global R = 0.265, p = 0.02, 999 permutations), particularly when only sham and day 1 samples were compared (Global R = 0.672, p = 0.008, 126 permutations). No significant differences in any calculated diversity index were observed between the small intestine microbiomes of no burn injury (sham) and burn injury mice (S1 and S2 Tables).
[Fig 2 caption: A statistically significant effect of burn injury was observed between sham and one-day burn injury mice in the small intestine (n = 9 sham, 7 burn mice, two-tailed t-test, **, p < 0.01), and between one-day and three-day burn injury mice (n = 9 sham, 8 burn mice, two-tailed t-test, *, p < 0.05). A statistically significant effect of burn injury was observed for the large intestine between one-day and three-day burn injury mice (n = 8 animals per group, two-tailed t-test, *, p < 0.05). The most abundant bacterial families in sham and burn injury mice (small intestine) are indicated in pie charts (C), and taxa which were significantly different by Kruskal-Wallis one-way analysis of variance (*, FDR-P < 0.05). doi:10.1371/journal.pone.0129996.g002]

In the large intestine microbiome, the evenness and diversity of the burn injury mice at 1 day were slightly, but significantly, different from those of sham mice or burn injury mice at 3 days (e.g., Shannon index of 2.32 vs. 2.13 or 2.10; p < 0.004; two-tailed t-test, unequal variance; S1 and S2 Tables). The relative abundance of bacteria from the family Enterobacteriaceae in the gut of mice experiencing burn injury substantially increased one day after burn injury, relative to the sham control, and on average decreased three days after burn injury (Fig 2). The effect was significant in the microbial communities from the small intestine after one day (Fig 2; Table 1). In the small and large intestine, the relative abundance of Enterobacteriaceae decreased significantly from day one to day three, but was not significantly different at day three from the sham (Fig 2; Table 1). In addition, the relative abundance of other microbial families was significantly altered between treatments and time points.
For example, in the analysis of small intestine microbial communities, the average relative abundance of SSU rRNA genes of bacteria from the "S24-7" group of the Bacteroidales and bacteria from the family Bacteroidaceae was significantly lower in burn injury mice at one day (Fig 2; Table 1).
The effect of burn injury on the microbial community in the large intestine was different from that observed for the small intestine (Fig 2). The abundance of bacteria from the family Enterobacteriaceae was generally much lower in the large intestine than in the small intestine (on average, less than 1% of all bacterial sequences), regardless of condition (Table 1). Nonetheless, shifts in the relative abundance of bacteria from the family Enterobacteriaceae were observed, and the effect was significant by sequence analysis, though not by qPCR (Fig 2; Table 1). The average relative abundances of bacteria from the families Bacteroidaceae, Porphyromonadaceae, Erysipelotrichaceae, and Alcaligenaceae were all significantly higher in burn injury mice at day 1, though these taxa were of moderate or low overall relative abundance in the large intestine microbiome (Table 1). Abundant taxa in the large intestine, such as the "S24-7" group and the families Lachnospiraceae, Prevotellaceae, Rikenellaceae, and Ruminococcaceae, did not differ significantly in abundance between sham and burn injury mice.
Bacterial Translocation of Enterobacteriaceae
Bacteria were identified in the small intestine using fluorescence in-situ hybridization (FISH) analysis, employing family-level (Enterobacteriaceae) and domain-level (Bacteria) oligonucleotide probes targeting the SSU rRNAs. These analyses were used to visualize the proximity of bacteria to the small intestinal villi. In the sham mice, Enterobacteriaceae were present in low relative abundance and were rarely attached to the intestinal villi (Fig 3). After burn injury, bacteria from the family Enterobacteriaceae were observed adhering to or adjacent to the small intestinal villi (Fig 3B).
The abundance of Enterobacteriaceae in the MLN was measured using qPCR of genomic DNA extracted from the MLN, and through bacterial cultivation. qPCR analyses detected Enterobacteriaceae in the MLN one day after injury (Fig 4A). To determine if these bacteria were viable, MLN homogenates were cultured aerobically for 24 hours on Tryptic Soy Agar (TSA) with blood to identify total aerobic bacteria, and on MacConkey agar to identify Gram-negative aerobic bacteria, including Enterobacteriaceae. Colonies developed on TSA and MacConkey plates from all burn-injured animals one day after injury, while no colonies were observed on the plates inoculated with homogenate from sham animals (Fig 4B). Three days after burn, some colonies were detected on the TSA plates, but no colonies were detected on the MacConkey agar.
Burn Injury Increases Intestinal Permeability
Increased gut leakiness can result in bacterial translocation from the gut to the lymph nodes. Intestinal permeability was measured in vivo one and three days after burn with a FITC-dextran permeability assay. Sham and burn-injured mice were gavaged with FITC-dextran one and three days after burn. Three hours later, the concentration of this dye was determined fluorometrically in the plasma. An increase in the concentration of FITC-dextran was observed in mice one day after burn, and no change was observed three days after injury relative to the sham animals (Fig 5A). In addition, gene expression of two tight junction proteins, claudins 4 and 8, was measured in the small and large intestine of sham and burn injury mice. Gene expression levels of claudins 4 and 8 decreased by ~40% in the small intestine one day after injury (Fig 5B). A smaller, and not significant, change was observed in the large intestine (Fig 5C).
Discussion
In this study, we show that burn injury alters the structure of the intestinal microbiome, promoting the overgrowth of specific Gram-negative aerobic bacteria within the context of fairly limited effects on overall microbial diversity. The overgrowth of Enterobacteriaceae, coupled with the increase in intestinal permeability seen one day after burn, allows for the translocation of these bacteria to the mesenteric lymph nodes. This provides evidence that the gut may be a source of bacterial infections after burn injury, and a potential cause of sepsis.
Examining the structure of the intestinal microbiome of severely burned patients, we found that injury promotes the overgrowth of many normally underrepresented taxa while reducing the overall healthy diversity of bacteria. This shift in the microbiome is similar to that seen in other inflammatory conditions, such as IBD [39], and consequently may also have profound implications for the treatment of infection and immune modulation in trauma patients. The most profound changes in the microbiome were dramatic increases in the abundance of γ-Proteobacteria, particularly those within the family Enterobacteriaceae. This family contains many opportunistic pathogenic bacteria, including those from the genera Escherichia, Klebsiella, Proteus, and Citrobacter, which are common in septic patients [11]. Bacteria from the family Enterobacteriaceae are potentially proinflammatory and have been shown to induce spontaneous colitis when transferred to wild type mice [34]. More research is needed to determine which strains of these bacteria elicit systemic inflammation after burn injury. Additional sequencing efforts, including assembly of full-length SSU rRNA gene amplicons and deep shotgun metagenome sequencing, will be instrumental in more accurately identifying the burn injury-associated Enterobacteriaceae, and in determining the specific physiological capabilities enabling their dramatic overgrowth after burn injury.
In addition to overgrowth of potentially pathogenic bacteria, we observed reductions in potentially protective bacteria. The Lachnospiraceae are a Gram-positive family of bacteria within the phylum Firmicutes, and include bacteria from the genus Clostridium. Various species of spore-forming bacteria in this cluster have been shown to ferment carbohydrates to produce butyrate, induce Tregs, and prevent inflammation in models of colitis [40][41][42][43]. Reductions of this family of bacteria have been observed in IBD, and it is possible that these species of bacteria are also protective in maintaining gut barrier integrity after trauma [39,44]. If so, reconstitution of these strains through probiotic supplementation may prove to be beneficial.

More research is needed to identify the cause of the dramatic shifts in bacterial community structure associated with burn injury. Two potential mechanisms are increased intestinal inflammation and reductions in antimicrobial peptides. Previous research has shown that intestinal inflammation alters the intestinal microbiome and allows for an overgrowth of Enterobacteriaceae. Bacteria from the family Enterobacteriaceae have been shown to outcompete other resident bacteria and reduce total bacterial numbers [45]. Another study demonstrated that host-generated nitrate, produced as a by-product of the inflammatory response, can boost the growth of E. coli in the inflamed gut [46]. In addition, α-defensins and C-type lectins, two classes of host-produced antimicrobial peptides, have been implicated in the establishment and regulation of the intestinal microbiota. Recent studies have shown that a reduction in α-defensins promotes shifts in microbial communities, leading to the overgrowth of pathogenic bacteria and intestinal inflammation in Crohn's disease [32,47,48].
C-type lectins are another class of antimicrobials which protect against intestinal inflammation and colitis by segregating the commensal bacteria from the intestinal epithelium [49]. Therefore, a potential decrease in these antimicrobials may help explain the shifts in bacterial abundance.
Our findings further demonstrate that burn injury leads to an increase in gut leakiness which allows for bacterial translocation to the MLN. Tight junction (TJ) proteins, such as claudins, are indispensable in maintaining the permeability of the intestine. In diseases where TJ protein expression is altered, this alteration has been shown to correlate with the translocation of bacterial products to the circulation [50,51]. There was a significant decrease in claudins 4 and 8, which accompanied the increase in Enterobacteriaceae seen in the small intestine following burn injury. Reports have shown that various Proteobacteria have the potential to modulate claudin 4 expression and permeability in the intestine [52,53]. Reduced claudin 8 expression has been observed in diseases of intestinal barrier dysfunction such as Crohn's disease, and in autism models where dysbiosis is also evident [50,51]. There seems to be a mutual relationship between dysbiosis of the microbiome and altered TJ proteins. However, it is not well established whether dysbiosis precedes and causes alterations in intestinal permeability, or whether altered permeability can directly change the microbiome.
To our knowledge, this is the first study to investigate the structure of the intestinal microbiome in severely burned patients. The relatively few patient samples, their individual antibiotic regimens, and the timing of fecal management system use in the clinical care of the patients are all confounding factors in this study. Nevertheless, comparison of the burn patients' intestinal microbiome with that of our mouse model revealed many similar trends, providing strong evidence that trauma modifies the intestinal homeostatic environment, thereby resulting in alterations in the intestinal microbiome and overgrowth of Enterobacteriaceae. Translocation of Enterobacteriaceae to the MLN and systemic Gram-negative bacteremia can lead to sepsis and multiple organ failure in burn patients.
Supporting Information
Factors Associated with Functional Constipation among Students of a Chinese University: A Cross-Sectional Study
Functional constipation (FC) is prevalent worldwide and is an increasingly prominent problem among university students. However, there is a paucity of research on FC in university students. This study aimed to assess the prevalence of FC among Chinese university students by the Rome III criteria and investigate its associated factors. This cross-sectional study was conducted by online questionnaires among 929 university students at a Chinese university. Food consumption was assessed with the Semi-Quantitative Food Frequency Questionnaire (SQFFQ) and dietary patterns were analyzed using factor analysis. A binary logistic regression model was applied to clarify FC-associated factors. The prevalence of FC among university students was 5.1%. Interestingly, among university students, the prevalence of FC with “complex” dietary pattern was significantly higher than those with “vegetable, fruit, egg and milk-based” and “livestock and aquatic product-based” dietary pattern (9.9% vs. 3.1% vs. 2.8%, p < 0.001). The prevalence of FC was significantly higher among university students with moderate to severe sleep disorders than those with the other sleep status (χ2 = 18.100, p < 0.001). Furthermore, after adjusting the covariates, “complex” dietary pattern (OR = 4.023, p < 0.001), moderate to severe sleep disorders (OR = 3.003, p = 0.006), overeating (OR = 2.502, p = 0.032), long mealtime (>30 min) (OR = 6.001, p = 0.007), and poor defecation habits (OR = 3.069, p = 0.042) were positively associated with FC among university students. Based on the above-associated factors for FC, improving dietary patterns and sleep status and developing good bowel and dietary habits are essential to prevent and alleviate university students’ FC.
Introduction
Functional constipation (FC) is a clinically common functional gastrointestinal disorder that appears in children and adults worldwide [1,2]. FC, also known as chronic idiopathic constipation, refers to chronic constipation arising from dysfunction or disturbance of the physiological processes of defecation, excluding irritable bowel syndrome (IBS) and without organic lesions or structural abnormalities that would explain the difficulty in defecation [3][4][5]. Meta-analyses showed that FC was endemic across countries and that its prevalence varied among cross-sectional surveys in different regions [6]. According to domestic and international surveys, the average prevalence of constipation worldwide was 16% (between 0.7% and 79%) [7], the prevalence of FC diagnosed according to the Rome III criteria was 10.1% [6], and the prevalence among university students in China was 9.37% to 27.17% [8]. University students are one of the main groups affected by FC. If they remain constipated for a long time, it will not only cause facial acne and irritability, but can also lead to hemorrhoids, fissures, intestinal obstruction, and other diseases, which can affect their studies and life [9][10][11].
Existing studies have shown that a sedentary lifestyle, dietary habits and types such as low vegetable and fruit intake (low dietary fiber), inadequate water intake, and low levels of education all contribute to an increased prevalence of FC [7,12,13]. In addition to the known factors described above, the prevalence of FC may be associated with sleep status. A survey by the World Health Organization revealed that 27% of the world's people had sleep problems [14]. Based on the "2020 China University Students Health Survey Report", 77% of university students had experienced sleep disturbances in the past year. Sleep problems directly affect people's lives. Current studies have found that people with gastrointestinal diseases or symptoms had high levels of sleep problems [15]. It has also been found that both sleep and the circadian cycle affected gastrointestinal function [16]. Gastrointestinal function or disease is closely related to sleep, and the relationship between FC and sleep will be explored in our study. Rajindrajith et al. found that children or adolescents with FC had more emotional and behavioral problems [17,18]. University students are in a special stage of transition between campus life and social life, as well as an important stage of physical and mental development. They will inevitably encounter some events in their lives that are affected by their emotions in different ways, which are called life events, including positive life events and negative life events (also known as stressful life events) [19]. Some stressful life events, such as unsatisfactory examination results, disputes with close friends, loss of love, and prolonged absence from family, are more likely to lead to the emergence of negative emotions and behaviors in university students [20], which may lead to gastrointestinal dysfunction, making FC more likely to occur.
In recent years, there has been an increasing number of FC studies on different populations in China and abroad, but there are fewer studies on FC and its associated factors in university students. The treatment of FC is extremely challenging [21]. Therefore, this study aimed to investigate the prevalence of FC among Chinese university students using the Rome III criteria and to explore its associated factors, in order to prevent FC or improve its management.
Ethical Approval
This study has been approved by the Ethics Committee of the Xiangya School of Public Health of Central South University (XYGW-2019-032).
Study Design and Participant Eligibility
From April to July 2019, a cross-sectional study was conducted by online questionnaires among university students at Central South University in Changsha, Hunan, south-central China.
Participant inclusion criteria were as follows: freshman to fifth year undergraduates aged ≥18 who voluntarily participated and signed informed consent forms. The exclusion criteria were as follows: (1) those who have digestive system diseases, hematologic diseases, and chronic cardiovascular system diseases; (2) those with serious lesions in other organs.
Sample Size Estimation
According to previous studies, the prevalence of FC among university students in China ranged from 9.37% to 27.17% [8]. Using the sample size formula for a cross-sectional study with a tolerance error of 0.2p, the required sample size was between 257 and 929. Considering a 10% nonresponse rate and human, material, and financial constraints, a total of 1000 questionnaires were distributed in this survey. A total of 950 university students participated in the survey and 935 completed the questionnaire, of which 929 (effective recovery rate of 97.8%) were the final valid samples (see Figure 1).
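The quoted range is consistent with the standard single-proportion sample size formula n = Z² p(1−p)/d² with tolerance d = 0.2p. A quick check (Z = 1.96 for α = 0.05 is assumed here, since the paper does not state it):

```python
def sample_size(p, z=1.96, rel_error=0.2):
    """Sample size for estimating proportion p with absolute tolerance d = rel_error * p."""
    d = rel_error * p
    return z ** 2 * p * (1 - p) / d ** 2

n_low = sample_size(0.2717)   # ~257, using the higher prevalence estimate
n_high = sample_size(0.0937)  # ~929, using the lower prevalence estimate
```

Note the inverse relationship: the lower the assumed prevalence, the larger the required sample for the same relative tolerance, which is why the 9.37% estimate drives the 929 figure.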
Figure 1. Participants enrolled according to inclusion and exclusion criteria and questionnaire completion.
Data Collection
The online questionnaires were completed by university students using Questionnaire Star through various electronic devices under the guidance of uniformly trained investigators. Questionnaire Star is an online questionnaire tool for creating, submitting, and collecting questionnaire information.
•
Demographic Information: University students' gender, age, grade, height, and weight. • Diagnostic Criteria for FC: According to "the Diagnosis and Treatment of Functional Constipation" [22], the Rome III criteria were used to determine whether university students had FC or not, which is used for presenting the symptoms at least 6 months before diagnosis and meets the criteria in the past 3 months. • Information on Lifestyle Habits: University students' poor defecation habits (playing with mobile phones and reading books during defecation), common means of transportation, dietary habits (overeating, mealtime), and drinking water (daily water intake, active drinking water). • Physical Activity Evaluation Criteria: The International Physical Activity Questionnaire (IPAQ) was used to investigate the physical activity levels of university students over the past week, which has already shown good reliability and validity in Chinese college students [23]. The IPAQ short-scale consists of 7 question entries. According to the IPAQ short-scale scoring criteria, the physical activity level of university students was divided into three levels, i.e., light, moderate, and vigorous physical activity.
•
Sleep Status Evaluation Criteria: The Self-Rating Scale of Sleep (SRSS) [24] was used to evaluate the sleep status of university students in the past month. The scale has good reliability (Cronbach's alpha coefficient = 0.6418) and validity (r = 0.5625) [24]. According to the 10 items of SRSS, the total score ranges from 10 to 50 points, and the higher the total score, the more the sleep problem and the worse the sleep status. In this study, 10-19 is classified as good sleep status; 20-21 as fair sleep status; 22-25 as mild sleep disorders; and 26-50 as moderate to severe sleep disorders. • Evaluation Criteria for Stressful Life Events: The Adolescent Self-Rating Life Events Check List (ASLEC) [25] was used to assess the frequency and intensity of stressful life events in university students in the past 12 months, which is composed of 27 items and can be classified into 6 factors: interpersonal relationships, learning stress, punishment, loss, health adaptation, and others. The scale has good reliability (Cronbach's alpha coefficient = 0.92) [25]. A 6-level score was used, from "not occurring = 0 points" to "extremely heavy impact = 5 points". The higher the score, the greater the impact of stressful life events, and the higher the degree of psychological stress. • Dietary Pattern Analysis: The Semi-Quantitative Food Frequency Questionnaire (SQFFQ) [26] was used to collect information on the frequency and consumption of various food by university students over the past six months. Through the presurvey, we aimed to understand the most common food varieties eaten by university students; some of the same types of food varieties were clustered, and the 22 kinds of foods obtained formed the food list of the SQFFQ. Factor analysis was used to evaluate and classify the dietary patterns of university students. According to Kaiser standards, the extracted principal factors were those with eigenvalues greater than one. 
Varimax orthogonal rotation was used to ensure that the factor structure was practically meaningful.
  a. The dietary frequency of each food or food group was converted into times per week, e.g., 3 times a day or more = 21 times/week, 2 times a day = 14 times/week, and 1 time a day = 7 times/week.
  b. Intake of each type of food per occasion: (i) solid foods: 250 g, 200 g, 150 g, 100 g, or 50 g; (ii) liquid foods: 250 mL, 200 mL, 150 mL, 100 mL, or 50 mL.
• The Dietary Diversity Score (DDS) was assessed by trained researchers with a nutritional background using the Food Frequency Questionnaire (FFQ) [27]. Based on the dietary structure and habits of university students, their daily diet was classified into nine food groups: cereals and potatoes, starchy staples, vegetables, fruits, livestock meat, aquatic products, eggs and milk, beans and nuts, and mushrooms. For each food group consumed at least once a week, a score of 1 was registered, yielding a total DDS ranging from 0 to 9. The higher the DDS, the more diverse the diet. In this study, a DDS below 5 was defined as low.
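The frequency conversion and DDS scoring described above can be sketched as follows. The nine food-group names follow the list in the text, while the conversion helper and the sample student's weekly intake are hypothetical illustrations, not the study's data.

```python
# Minimal sketch of the SQFFQ frequency conversion and DDS scoring described
# above. The nine food groups follow the text; the sample intake is made up.

def times_per_week(times_per_day: int) -> int:
    """Convert a daily eating frequency to times/week (e.g., 1/day -> 7/week)."""
    return times_per_day * 7

FOOD_GROUPS = [
    "cereals and potatoes", "starchy staples", "vegetables", "fruits",
    "livestock meat", "aquatic products", "eggs and milk", "beans and nuts",
    "mushrooms",
]

def dietary_diversity_score(weekly_intake: dict) -> int:
    """Score 1 for each food group consumed at least once per week (DDS 0-9)."""
    return sum(1 for group in FOOD_GROUPS if weekly_intake.get(group, 0) >= 1)

def is_low_dds(score: int) -> bool:
    """In this study, a DDS below 5 is classified as low."""
    return score < 5

# A hypothetical student who eats from only four of the nine groups each week:
intake = {"cereals and potatoes": times_per_week(2), "vegetables": 7,
          "fruits": 3, "eggs and milk": 7}
score = dietary_diversity_score(intake)
```

Here `score` is 4, so this hypothetical student would fall into the low-DDS group.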
Statistical Analyses
The questionnaires were double-entered using EpiData 3.0 software (The EpiData Association, Odense, Denmark), and statistical analyses were performed with IBM SPSS 26.0 software (IBM Corp., Armonk, NY, USA). The significance level was set at α = 0.05, with p < 0.05 considered statistically significant. The data met the assumptions of normality and homogeneity of variance; means and standard deviations or numbers and percentages were used to describe basic characteristics. Student's t-tests and chi-square tests were used to analyze the relationships between FC and single associated factors. Factor analysis and binary logistic regression models were used to determine dietary patterns and analyze associated factors, respectively.
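As a hedged illustration of the single-factor analysis, a chi-square test of FC prevalence across a three-level factor can be run as below. The contingency counts are invented for the example and are not the study's data.

```python
# Illustrative chi-square test of association between FC status and a
# three-level lifestyle factor. Counts are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

#        level 1  level 2  level 3  (e.g., overeating: never/sometimes/often)
table = [[10,     18,      19],     # students with FC
         [280,    290,     180]]    # students without FC

chi2, p, dof, expected = chi2_contingency(table)
```

With a 2 × 3 table the test has (2 − 1)(3 − 1) = 2 degrees of freedom, and p < 0.05 would indicate that FC prevalence differs across the factor levels.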
Participant Characteristics
A total of 929 university students were enrolled in this study. Of them, 355 (38.2%) were male and 574 (61.8%) were female, mainly freshmen and sophomores aged 18 to 21 years, with mostly normal BMI (65.0%). There were no significant differences between male and female university students across grades and age groups (p > 0.05). The prevalence of FC among university students was 5.1%; of those with FC, 70.2% were female and 29.8% were male (see Table 1).
Study on the Association between FC and Lifestyle Habits, Physical Activity, Sleep Status, and Stressful Life Events among University Students
As shown in Table 2, significant differences in the prevalence of FC were found across the dietary habits of university students. The higher the frequency of overeating, the higher the prevalence of FC (3.4% vs. 5.9% vs. 9.6%, p = 0.018). The prevalence of FC was also higher among university students with long meal times (2.6% vs. 6.1% vs. 13.3%, p = 0.007). There were no significant differences in the prevalence of FC in terms of poor defecation habits (playing on mobile phones and reading books during defecation), common means of transportation, drinking water, or physical activity (p > 0.05). The SRSS showed that the prevalence of FC was significantly higher among university students with moderate to severe sleep disorders than among those with other sleep statuses (χ2 = 18.100, p < 0.001). The ASLEC showed significant differences in the health adaptation factor score (p < 0.05) and other factor scores (p < 0.05) between university students with and without FC (see Table 2).
Dietary Pattern Analysis of University Students
The suitability tests for factor analysis yielded KMO = 0.950 and Bartlett's test of sphericity χ2 = 10,200.415, p < 0.001. The factor analysis and the resulting scree plot showed that the eigenvalues of the first three principal components were 9.273, 1.544, and 1.352, explaining 42.152%, 7.017%, and 6.144% of the total variance, respectively, after varimax orthogonal rotation. The cumulative contribution of the first three principal components was therefore 55.313%.
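The Kaiser criterion step can be sketched with NumPy: retain the principal components of the 22-item correlation matrix whose eigenvalues exceed one, and read off the share of variance each explains. The response matrix below is synthetic, generated from three latent "patterns", and is not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_foods = 929, 22

# Synthetic responses driven by three latent "dietary patterns" plus noise.
latent = rng.normal(size=(n_students, 3))
loadings = rng.normal(size=(3, n_foods))
X = latent @ loadings + rng.normal(scale=2.0, size=(n_students, n_foods))

R = np.corrcoef(X, rowvar=False)            # 22 x 22 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
retained = eigvals[eigvals > 1.0]           # Kaiser criterion: eigenvalue > 1
explained = retained / n_foods              # each factor's share of total variance
```

In the study, the first three retained components explained 42.152%, 7.017%, and 6.144% of the variance; varimax rotation would then be applied to the retained factors before interpreting the loadings.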
Foods or food groups with absolute factor loadings ≥0.48 on a principal component were considered well represented on that component. The first dietary factor had high loadings for aquatic products, wine, porridge, flours, potatoes, coarse grains, stuffing, processed meat products, soy products, mushrooms, fried foods, sweets, nuts, and sweetened beverages; because of its wide variety of foods, it was named the "complex" dietary pattern. The second dietary factor had high loadings for dark and light vegetables, fruits, dairy products, and eggs, with high dietary fiber and high protein but lacking staple foods and meat, and was named the "vegetable, fruit, egg and milk-based" dietary pattern. In the third dietary factor, the loadings for aquatic products, red meat, poultry, and rice were high, with aquatic products and meat as the main food items, so it was named the "livestock and aquatic product-based" dietary pattern. The three dietary patterns are therefore named "complex", "vegetable, fruit, egg and milk-based", and "livestock and aquatic product-based" (see Table 3).
Association between Dietary Patterns and FC among University Students
Among the 929 university students, 284 (30.6%) followed the "complex" pattern, 292 (31.4%) the "vegetable, fruit, egg and milk-based" pattern, and 353 (38.0%) the "livestock and aquatic product-based" pattern. FC occurred in 28 of the 284 university students on the "complex" dietary pattern (9.9%) and in 9 of the 292 on the "vegetable, fruit, egg and milk-based" dietary pattern (3.1%). Among the 353 university students on the "livestock and aquatic product-based" dietary pattern, 10 had FC (2.8%). The prevalence of FC in the "complex" dietary pattern was significantly higher than in the "vegetable, fruit, egg and milk-based" and "livestock and aquatic product-based" dietary patterns (p < 0.001) (see Table 4). *** p < 0.001. DDS: low DDS score < 5; high DDS score ≥ 5. Dietary patterns: obtained by factor analysis, including "complex", "vegetable, fruit, egg and milk-based", and "livestock and aquatic product-based".
Of the 929 university students surveyed, 19 (2.0%) were judged to have low DDS and 910 (98.0%) high DDS. Students with high DDS thus made up the large majority, and although the prevalence of FC was slightly higher among those with low DDS than among those with high DDS, the difference was not statistically significant (p > 0.05) (see Table 4).
Discussion
In this study, we found that the prevalence of FC among university students in Changsha, China, was 5.1%. It is noteworthy that the prevalence of FC among university students was associated with dietary patterns, eating behaviors and habits, defecation habits, sleep status, and stressful life events. This study will provide clues and a theoretical basis for the prevention and improvement of FC among university students.
Numerous epidemiological investigations have shown an association between diet and constipation, focusing mainly on the effects of individual nutrients or of foods and food groups [28,29]. According to our survey, 98% of university students had high DDS, indicating that the majority consumed a wide variety of foods and had a diverse diet. At the same time, we found that the dietary patterns of university students could be classified into three categories using factor analysis: the "complex", "vegetable, fruit, egg and milk-based", and "livestock and aquatic product-based" dietary patterns. Current evidence from a large body of research suggests that dietary diversity is not necessarily beneficial for health or indicative of optimal dietary patterns, and can also be associated with higher energy intake and suboptimal diets [30][31][32]. Our study showed that the prevalence of FC among university students with the "complex" dietary pattern was significantly higher than with the "vegetable, fruit, egg and milk-based" and "livestock and aquatic product-based" dietary patterns. This may be due to the variety of foods in the "complex" dietary pattern, which includes not only aquatic products, coarse grains, mushrooms, soy products, porridge, and nuts, but also high-fat, high-energy snacks such as fried foods and processed products, wine, and sugary products such as desserts and sweetened beverages, all with high loadings. Several studies have shown that individuals who prefer high-fat foods, junk snacks, fried foods, or coffee, alcohol, and spices have a higher prevalence of gastrointestinal symptoms [33,34]. Rollet et al. showed a positive correlation between the occurrence of constipation and the intake of sugary products and higher energy intake [29]. When high-fat, high-energy foods, as well as sugary foods and alcohol, are consumed at high levels, a high prevalence of FC is more likely to occur, which is in line with our study results.
Furthermore, numerous studies have shown that increasing dietary fiber intake can significantly increase the frequency of stools and relieve constipation in patients with constipation, which is beneficial for gastrointestinal health [29,[35][36][37][38]. Other observational studies have also reported that dairy products such as cheese and milk, as well as foods such as meat and eggs, had beneficial effects on constipation [39,40]. Rollet et al. also found that the occurrence of constipation was inversely associated with lipid and total fat intake [29]. Similar results were found in our study among university students, namely, that university students with the "vegetable, fruit, egg and milk-based" dietary pattern of high-fiber foods such as vegetables and fruits, and quality protein foods such as eggs and milk, and the "livestock and aquatic product-based" dietary pattern based on fish and meat rich in unsaturated fatty acids had a lower prevalence of FC. The above reflects the complexity of the effect of dietary patterns on FC, and the effect of dietary patterns on FC in university students can be further evaluated in the future.
In addition, we found that university students who overeat had a higher prevalence of FC. Overeating is a poor eating habit referring to the uncontrolled, rapid swallowing of large amounts of food in a short period [41]. Modern medicine has confirmed that overeating, in addition to causing weight gain, places great pressure directly on the gastrointestinal digestive system, resulting in gastrointestinal dysfunction and leading to a series of gastrointestinal diseases [42], which agrees with our findings. The higher prevalence of FC among university students with longer meal times may be due to inattentiveness during meals, such as playing on mobile phones or playing and chatting with people around them, which can lead to digestive disorders of the gastrointestinal tract. It is also possible that university students with FC have a poor appetite and therefore prolonged meal times. Based on these results, it is recommended that university students, especially those with FC, modify their dietary patterns or structures through nutritional interventions, concentrate during meals, chew and swallow slowly, and avoid overeating, in order to prevent and improve gastrointestinal problems.
Internet terms such as "night owl" and "senior stay-up party" have gradually become synonymous with some university students, and the sleep of university students has attracted much attention. Sleep, as a necessary process for living organisms, plays an important role in maintaining the body's physiological functions and is an indispensable part of health [14,43]. Around 350 BC, Aristotle argued in his work on sleep and insomnia that sleep is caused by hot steam produced by the stomach during digestion [44]. Some studies have shown a two-way link between chronic constipation and sleep quality: sleep disturbances may affect bowel function and increase the risk of gastrointestinal disorders, and constipation may in turn affect sleep quality [45,46]. In our study, the same pattern was found in university students, i.e., there was an association between FC and sleep status, with those suffering from FC experiencing poorer sleep. This may be due to the use of electronic devices such as mobile phones for various online entertainment activities before bedtime, such as chatting online, playing games, and shopping online, which increases screen time (ST) and takes up sleep time [47]. Moreover, various pressures arising from academics, employment, and interpersonal interactions may also result in shortened sleep duration, decreased sleep quality, and sleep disorders [48], which may lead to intestinal dysfunction and a predisposition to FC. Conversely, suffering from FC may itself shorten sleep duration and reduce sleep quality, leading to a vicious cycle. University students are encouraged to reduce the use of electronic devices at bedtime, for example by switching their mobile phones to sleep mode or turning them off. They can also alleviate bedtime anxiety by soaking their feet, listening to light music, or reading, thereby improving sleep status and preventing FC.
In this study, we also found that FC among university students was associated with stressful life events that had occurred within the past 12 months. Their life stresses mainly stemmed from health adaptation events such as significant changes in their life schedules, poor physical condition, some frustrating events, and other events such as boredom with school, lost love, and arguments or fights with others. These negative life events tend to lead to higher levels of psychological stress and emotional abnormalities among university students, which can adversely affect mental health [49]. Studies in the United States and Korea have shown that negative emotions like anxiety and depression were associated with the occurrence of gastrointestinal disorders, and an association between emotions and specific defecation habits has been suggested [50,51], which is consistent with the findings of our study. In addition, university students with FC may also experience negative emotions such as anxiety due to FC. Therefore, paying attention to the life stress and spiritual and psychological health of university students is one of the important measures to prevent and treat FC.
This study highlights the important associations between FC and diet, lifestyle behavioral habits, sleep status, and negative life events among university students. The study was conducted at Central South University, located in south-central China. With more than 34,000 students, it is one of the universities with the largest student populations in China and, as a typical Chinese comprehensive university, is reasonably representative. However, this was a cross-sectional study, and further longitudinal studies will be needed to establish causal relationships between FC and these factors. In addition, this study was conducted in Changsha, Hunan, and found the prevalence of FC among university students to be 5.1%, whereas other studies have found the prevalence of FC among university students to be 27.17% in Fujian [52], 5.45% in Shandong [53], and 11.6% in Tunisia [54]. The differences in prevalence across countries and regions may be due to the influence of factors such as regional diet and lifestyle changes, which limits the extrapolation of these findings; the differences need to be elucidated through further research.
University students are beginning to manage their own lives and health independently, and they differ from the general population in age, dietary conditions (such as school canteens and takeaways), living environment, work and rest schedules, and the pressures of study, life, and employment, making them a unique group. The present study investigated the prevalence of FC among university students and further explored FC-related factors, with the aim of improving the current situation through nutritional intervention, lifestyle modification, and psychological support, and thereby enhancing the learning and quality of life of university students.
Conclusions
In this study, we found that the prevalence of FC among university students in Hunan, China, was 5.1%. "Complex" dietary pattern, moderate to severe sleep disorders, overeating, long meal times, and poor defecation habits (playing with mobile phones and reading during defecation) were positively associated with the prevalence of FC among university students. Attention to the dietary patterns and habits, sleep status, life stress, and mental health of university students is crucial to preventing and improving their FC. Furthermore, prospective studies are needed to verify the causal relationship between FC among university students and its associated factors.
"year": 2022,
"sha1": "3d258ccb2514f1dd58c6534e3575995469b7e720",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/14/21/4590/pdf?version=1667285499",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "82a38d9dcfba941b15bcfd4eb4af68b8a51178b4",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Associations between the oxytocin receptor gene (OXTR) rs53576 polymorphism and emotional processing of social and nonsocial cues: an event-related potential (ERP) study
Background: Oxytocin receptor (OXTR) gene polymorphisms are related to individual differences in emotional processing of social cues. However, whether OXTR polymorphisms affect emotional processing of nonsocial cues remains unclear. The present study investigated the relationship between the OXTR rs53576 polymorphism and emotional processing of social cues and nonsocial cues. Methods: Event-related potentials were recorded from 88 male participants while images of humans and images of objects were presented as social cues and nonsocial cues, respectively. Results: First, the results showed that GG carriers of OXTR rs53576 showed more negative N1 (50–200 ms) than AA carriers in response to images of both humans and objects. Second, GG carriers showed more negative N2 (200–320 ms) than AA carriers in response to images of humans but not in response to images of objects. Third, GG carriers showed more negative N2 in response to images of humans than images of objects, whereas AA carriers showed the opposite pattern. Fourth, we observed no difference in late positive potential (600–1000 ms) to images of humans or objects that depended on the OXTR rs53576 polymorphism. Conclusions: These results suggest that the OXTR rs53576 polymorphism affects emotional processing of not only social cues but also nonsocial cues in the very early stage (reflected in N1); however, the data also suggest that the OXTR rs53576 polymorphism is related specifically to increased emotional processing of social cues in the middle stage (reflected in N2).
Background
Social behaviors refer to "the reciprocal interactions of two or more animals and the resulting modification of the individual's action system" [1]. Oxytocin, a neuroactive hormone produced in the hypothalamus, is closely related to human social behaviors (reviewed in [2,3]). For example, intranasal administration of oxytocin improves the ability to infer the mental state of others [4] and increases gazing toward the eye region of human faces [5]. Moreover, genetic variations in the gene for the oxytocin receptor (OXTR) are related to individual differences in responses to social cues. In particular, recent studies have found that rs53576, a single nucleotide polymorphism (SNP) in OXTR, is related to such individual differences. Behavioral studies have indicated that homozygous carriers of the G allele (GG carriers) show higher trait empathy [6,7], prosocial behavior [8], trust behavior [9], and lower social loneliness [10] than those with the A allele (AA/GA carriers). These findings are also supported by physiological results indicating that GG carriers show facilitated brain activity to human faces [11,12] and increased blood pressure and cortisol levels in response to social rejection [13] than AA/GA carriers. These studies suggested that the G allele of the OXTR rs53576 polymorphism is related to a higher sensitivity to social cues.
Some previous studies have examined whether administration of oxytocin affects responses to nonsocial cues [14,15]. For instance, administration of oxytocin improves recognition memory for images including human figures (e.g., images of human faces) but not for images not including human figures (e.g., images of houses, art sculptures, and landscapes) [14]. Meanwhile, a recent study reported that administration of oxytocin enhances the social meaning of images of objects [15]. Although the relationship between the OXTR rs53576 polymorphism and the oxytocin level is still unclear (reviewed in [16]), a later study [15] suggests a possibility that genetic variations in OXTR may be related not only to individual differences in the response to social cues but also to differences in the response to nonsocial cues. However, the association between the rs53576 polymorphism and the response to nonsocial cues such as images of objects remains unclear, because most reported studies on this OXTR polymorphism have focused on responses to social cues such as human faces [11,12] and social situations [6,8,9,13]. Thus, the present study focused on a possible association between the OXTR rs53576 polymorphism and the response to nonsocial cues.
Event-related potential (ERP), the electroencephalogram (EEG) response to specific events such as presentation of emotional stimuli, reflects the time course of information processing in the brain due to its high temporal resolution (reviewed in [17,18]). Studies of passive image viewing tasks have reported that mainly the following three ERP components are sensitive to emotional content: N1, N2/early posterior negativity (EPN), and late positive potential (LPP) [19][20][21]. N1 is a negative peak observed at around 130 ms, N2/EPN is a negativity observed at around 250 ms, and LPP is a sustained positivity that becomes evident 300 ms after stimulus onset. N2 is usually analyzed when a mastoid electrode reference is used, whereas EPN is usually analyzed when an average electrode reference is used (reviewed in [17]). The N1, N2/EPN, and LPP components are greater (more negative for N1 and N2/EPN; more positive for LPP) in response to unpleasant images than emotionally neutral images [19,20,22,23]. Moreover, some studies have suggested that N2 reflects individual differences in responses to social and nonsocial cues [21,24,25]. Taken together, the N1, N2/EPN, and LPP components are thought to reflect the time course of emotional processing.
In the present study, we aimed to investigate associations between the OXTR rs53576 polymorphism and the time course of emotional processing of social and nonsocial cues by measuring ERP responses. To do so, we analyzed the N1, N2, and LPP components of ERP responses from 88 young male individuals while images of humans and images of objects were presented as social cues and nonsocial cues, respectively. Given that previous studies showed a higher sensitivity to social cues in GG carriers [6][7][8][9][10][11][12][13], we hypothesized that GG carriers would show a greater ERP response (more negative N1 and N2; more positive LPP) than GA or AA carriers in response to images of humans. More importantly, if an OXTR polymorphism affects emotional processing of not only social cues, but also nonsocial cues, we would expect to see differences in ERP responses between the OXTR rs53576 genotype groups in response to images of objects.
Participants
Ninety-two male Japanese undergraduate or graduate students (age range 19-25 years) participated in this study. In the present study, we recruited only male participants because previous studies reported that males show clearer differences in brain structures [10] and emotional traits [11] according to the rs53576 polymorphism than females. Participants reported that they had no psychiatric disorders. Eighty-eight participants were included in the final analysis, because the quality of the EEG was poor for four participants (for specific details, refer to "ERP measurements and analysis" section). After receiving an explanation of the details of the study, participants provided written informed consent prior to participation.
Genotyping
Genomic DNA was extracted from the saliva of participants using a Saliva DNA Isolation Kit (Norgen Biotek Corporation, Thorold, Ontario, Canada). Genotyping for the rs53576 polymorphism was then performed using TaqMan SNP Genotyping Assays (Applied Biosystems, Foster City, CA, USA). PCR amplification was carried out in a LightCycler Nano real-time PCR system (Roche Diagnostics, Mannheim, Germany). All samples were run twice, and they all provided consistent results. The genotype distribution (10 GG, 46 GA, and 32 AA carriers) was in Hardy-Weinberg equilibrium (p = 0.280) and was in line with previous studies showing that AA carriers are more common than GG carriers in Asian populations [26][27][28][29].
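The Hardy-Weinberg equilibrium check can be reproduced from the reported genotype counts: a chi-square goodness-of-fit test against the expected genotype frequencies under equilibrium recovers the quoted p = 0.280.

```python
# HWE check from the observed genotype counts reported above (10 GG, 46 GA, 32 AA).
from scipy.stats import chi2

obs = {"GG": 10, "GA": 46, "AA": 32}
n = sum(obs.values())                          # 88 genotyped participants
p_g = (2 * obs["GG"] + obs["GA"]) / (2 * n)    # G allele frequency
p_a = 1 - p_g                                  # A allele frequency

exp = {"GG": n * p_g**2, "GA": 2 * n * p_g * p_a, "AA": n * p_a**2}
stat = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
p_value = chi2.sf(stat, df=1)                  # df = 3 classes - 1 - 1 estimated allele freq
```

This gives p ≈ 0.280, matching the value reported in the text, so the genotype distribution is consistent with Hardy-Weinberg equilibrium.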
Stimuli
A total of 270 images were selected from the International Affective Picture System (IAPS) [30]. The images consisted of three content categories: objects, humans, and animals. The images of objects and images of animals did not include figures of humans, whereas the images of humans included figures of more than one person. The images of animals were presented as fillers to buffer against possible habituation, and thus, the results from these were not included in the analysis. Within each content category, the images were further subdivided into the following three valence categories: neutral, pleasant, and unpleasant. Examples of each category of images include a tissue box (neutral image of objects), flowers (pleasant image of objects), a dirty toilet (unpleasant image of objects), a man with an emotionally neutral face (neutral image of humans), a man with a baby (pleasant image of humans), an injured person (unpleasant image of humans), a fox (neutral image of animals), puppies (pleasant image of animals), and cockroaches (unpleasant image of animals). The category of pleasant images of humans did not include erotic images, because balancing arousal levels between pleasant images of humans and pleasant images of objects was difficult. Specific IAPS picture identification numbers [30] are presented in the Appendix.
Procedures
Participants were seated approximately 80 cm from a screen (20-in. monitor). They were asked to focus on the screen and to look at the images presented. During EEG recording, three blocks of image presentations were shown. In each block, 90 images (10 images for each category) were presented three times for a total of 270 trials. In each trial, a white fixation cross was presented on a black screen for 500 ms, and then an image was presented for 1000 ms. The inter-trial interval was 1250-1750 ms, and the order of the trials was random. The images presented were different among the three blocks.
After EEG recording, each participant filled out a subjective assessment. They once again observed the images presented during the EEG recording and judged the valence and arousal of each image based on a 9-point Likert scale (for valence, "very pleasant" was assigned 9 points, whereas "very unpleasant" was assigned 1 point; for arousal, "very arousing" was assigned 9 points, whereas "very relaxing" was assigned 1 point).
ERP measurements and analysis
The EEG was recorded using a 64-channel Geodesic Sensor Net (Electrical Geodesics, Inc., Eugene, OR, USA) based on the 10/20 system and was amplified by a highinput impedance (200 MΩ) amplifier (Net Amps 200 Amplifier, Electrical Geodesics, Inc.). During recording, EEG signals were recorded at electrode site Cz as a reference with a sampling frequency of 500 Hz. Electrode impedances were maintained below 50 kΩ.
After recording, EEG data were re-referenced offline to the average of the left and right mastoids and bandpass filtered with cutoffs of 0.1 and 30 Hz using EMSE software (Source Signal Imaging Inc., San Diego, CA, USA). The trials of image presentation were averaged over the time window from −200 to 1000 ms for each category of images. Trials including artifacts (eye blinks, muscle artifacts, and body movements) exceeding ±100 μV were rejected. Four participants were excluded from the final analysis because their mean number of trials for each image category was less than 30. We calculated three ERP components as follows: N1 (average amplitude between 50 and 200 ms), N2 (average amplitude between 200 and 320 ms), and LPP (average amplitude between 600 and 1000 ms). N1, N2, and LPP were all averaged from centro-parietal sites (CP1/2, P1/2, Pz, and POz) based on previous findings that the effect of emotional content on these ERP components is generally maximal in centro-parietal areas (for example, [19,20] for N1 and LPP and [25,31] for N2).
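A minimal NumPy sketch of the pipeline just described: epochs exceeding ±100 μV are rejected, surviving trials are averaged, and each component is taken as the mean amplitude over its latency window. The epochs below are synthetic single-channel data, not the study's recordings.

```python
import numpy as np

fs = 500                                   # sampling rate (Hz), as in the study
t = np.arange(-0.2, 1.0, 1 / fs)           # epoch from -200 to 1000 ms
rng = np.random.default_rng(1)
epochs = rng.normal(scale=10.0, size=(40, t.size))   # 40 synthetic trials (μV)
epochs[3] += 150.0                          # contaminate one trial with an artifact

clean = epochs[np.abs(epochs).max(axis=1) <= 100.0]  # reject trials beyond ±100 μV
erp = clean.mean(axis=0)                             # trial-averaged waveform

def window_mean(erp, t, start_ms, end_ms):
    """Average amplitude over a latency window given in milliseconds."""
    mask = (t >= start_ms / 1000.0) & (t < end_ms / 1000.0)
    return erp[mask].mean()

n1 = window_mean(erp, t, 50, 200)     # N1: 50-200 ms
n2 = window_mean(erp, t, 200, 320)    # N2: 200-320 ms
lpp = window_mean(erp, t, 600, 1000)  # LPP: 600-1000 ms
```

In the study these component values would additionally be averaged across the centro-parietal electrodes (CP1/2, P1/2, Pz, POz).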
Statistical analysis
For the ERP responses (N1, N2, and LPP) and the subjective ratings (valence and arousal ratings), we conducted a generalized linear mixed model with OXTR (GG vs. GA vs. AA), content (object vs. human), and valence (neutral vs. pleasant vs. unpleasant) as fixed factors, and participant (a personal code assigned to each subject) as a random factor. In case of significant main effects or interactions, a post hoc t test was performed with Bonferroni corrections. All statistical analyses were conducted using SPSS software (version 23, IBM, Chicago, IL, USA), and statistical significance was set at p < 0.05.
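The Bonferroni step used throughout the post hoc tests is simply a division of the alpha level by the number of comparisons; a small sketch:

```python
# Bonferroni correction as used in the post hoc tests above: with three
# pairwise genotype comparisons, the critical p value is 0.05 / 3 ~= 0.017.
alpha = 0.05
n_comparisons = 3                     # GG vs GA, GG vs AA, GA vs AA
critical_p = alpha / n_comparisons    # reported as 0.017 in the text

def bonferroni_significant(p_values, alpha=0.05):
    """Flag each p value against the Bonferroni-corrected threshold."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]
```

For example, `bonferroni_significant([0.001, 0.02, 0.4])` returns `[True, False, False]`: the 0.02 comparison, nominally significant at 0.05, fails the corrected threshold.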
ERP responses
Grand averaged ERP waveforms in the centro-parietal area are shown in Fig. 1. A summary of the results of the generalized linear mixed model for the ERP responses is shown in Table 1, and the mean scores of the ERP responses are shown in Table 2.
For N1, we observed a significant main effect of OXTR (Table 1). Post hoc analysis (independent samples t test, critical p value = 0.017 for three comparisons) revealed that GG carriers showed significantly more negative N1 than GA (t(334) = −3.90, p < 0.001) and AA carriers (t(250) = −5.51, p < 0.001), and that GA carriers showed more negative N1 than AA carriers (Fig. 2). We also observed a significant main effect of content for N1 (Table 1), indicating that N1 was significantly more negative in response to images of objects (M = −0.72 μV, standard error (SE) = 0.11) than images of humans (M = −0.26 μV, SE = 0.11). For N2, we observed a reliable interaction of OXTR × content (Table 1). As shown in Fig. 3, post hoc analysis between OXTR genotypes within each content (independent samples t test, critical p value = 0.017 for three comparisons) revealed that GG carriers showed significantly more negative N2 than AA carriers in response to images of humans (t(41.0) = −2.50, p = 0.016), whereas we found no significant difference in N2 among GG, GA, and AA carriers in response to images of objects (all p > 0.017) (Fig. 3). Post hoc analysis between content within each genotype (paired samples t test) was also conducted; GG carriers showed significantly more negative N2 in response to images of humans than images of objects (t(29) = 2.15, p = 0.040). GA carriers showed no significant difference in N2 between images of objects and images of humans (t(137) = −1.40, p = 0.165), and AA carriers showed significantly more negative N2 in response to images of objects than images of humans (t(95) = −2.19, p = 0.031) (Fig. 3). We also observed a main effect of valence for N2 (Table 1). For LPP, we found no main effect of OXTR, and the related interactions were not significant (Table 1). We observed a significant main effect of content for LPP (Table 1), indicating that LPP was significantly more positive in response to images of humans (M = 3.46 μV, SE = 0.13) than images of objects (M = 2.51 μV, SE = 0.12).
We also observed a significant main effect of valence for LPP (Table 1).
Subjective ratings
Table 3 summarizes the results of the generalized linear mixed model for the subjective ratings, and Table 4 shows the mean scores of subjective ratings.
For valence ratings, we observed a reliable interaction of OXTR × valence (Table 3). Post hoc analysis between OXTR within each valence (independent samples t test, critical p value = 0.017 for three comparisons) was conducted; however, we found no significant differences in valence rating among GG, GA, and AA carriers for neutral, pleasant, or unpleasant images (all p > 0.017). We identified main effects of content and valence, and a reliable interaction of content × valence for valence ratings (Table 3). Post hoc analysis between valence within each content (paired samples t test, critical p value = 0.017 for three comparisons) indicated that participants reported pleasant images to be more pleasant than neutral images and unpleasant images to be more unpleasant than neutral images for both images of objects and images of humans (all p < 0.001).
For arousal ratings, we observed a reliable interaction of OXTR × content (Table 3). Post hoc analysis between OXTR within each content (independent samples t test, critical p value = 0.017 for three comparisons) was conducted; however, we found no significant differences in the arousal rating among GG, GA, and AA carriers for either images of objects or images of humans (all p > 0.017). Post hoc analysis between content within each OXTR (paired samples t test) was also conducted; GG, GA, and AA carriers all reported that images of humans were significantly more arousing than images of objects (all p < 0.001). We also observed a reliable interaction of OXTR × valence for arousal rating (Table 3). Post hoc analysis between OXTR within each valence (independent samples t test, critical p value = 0.017 for three comparisons) was conducted; however, we found no significant differences in arousal rating among GG, GA, and AA carriers for neutral, pleasant, or unpleasant images (all p > 0.017). The main effect of content was also significant for arousal ratings ( Table 3), indicating that participants reported images of humans (M = 5.26, SE = 0.07) to be significantly more arousing than images of objects (M = 4.63, SE = 0.67). The main effect of valence was also significant for arousal ratings (Table 3). Post hoc analysis (paired samples t test, critical p value = 0.017 for three comparisons) revealed that participants reported unpleasant images (M = 5.68, SE = 0.07) to be more arousing than pleasant images (M = 4.81, SE = 0.08) and pleasant images to be more arousing than neutral images (M = 4.36, SE = 0.08) (all p < 0.001).
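The post hoc convention used throughout these analyses (a critical p value of 0.05/3 ≈ 0.017 for each family of three comparisons) is a Bonferroni correction. A minimal sketch of the decision rule (the function name is ours, not from the study's analysis code):

```python
def bonferroni_decisions(p_values, alpha=0.05):
    """Bonferroni correction: a comparison is significant only if its
    p value is below alpha divided by the number of comparisons."""
    crit = alpha / len(p_values)  # 0.05 / 3 ~= 0.017 for three comparisons
    return [p < crit for p in p_values]

# e.g. three pairwise genotype comparisons for one ERP measure
decisions = bonferroni_decisions([0.001, 0.016, 0.040])
```

With three comparisons the corrected threshold is 0.0166..., which the paper rounds to 0.017; a pairwise p = 0.016 (as for the N2 contrast on images of humans) therefore counts as significant, while p = 0.040 does not.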
Discussion
The present study investigated whether the OXTR rs53576 polymorphism affects emotional processing of social and nonsocial cues. To do so, we compared the ERP responses (N1, N2, and LPP) to emotional images of humans (social cues) and objects (nonsocial cues) among GG, GA, and AA carriers of OXTR rs53576.
Association between the OXTR rs53576 polymorphism and the ERP responses
In the present study, N1 was more negative in GG carriers of OXTR rs53576 than in AA carriers, and intermediate in GA carriers, regardless of whether the response was to images of objects or humans. Previous studies have reported that N1 is sensitive to highly emotional stimuli [19,20]. Although the effect of the valence of images on N1 did not reach a significant level in the present study (Table 1), the present result suggests that GG carriers of OXTR rs53576 show enhanced emotional processing compared to GA and AA carriers in the very early stage (50-200 ms) in response to both social and nonsocial cues. Regarding social cues, this result supports previous findings of a higher sensitivity to social cues in GG carriers than in AA/GA carriers [6][7][8][9][10][11][12][13]. In particular, the present result replicated the previous ERP study [12] showing that the effect of OXTR rs53576 on emotional processing of social cues occurs from the very early stage (reflected in N1). Moreover, given that Peltola et al. [12] adopted a task requiring discrimination of facial expressions and the present study adopted a passive picture viewing task, we suggest that the association between OXTR rs53576 and early processing of social cues is evident regardless of whether active attention to social cues is required. Regarding nonsocial cues, the present result for N1 provides new evidence that the OXTR rs53576 polymorphism affects emotional processing of nonsocial cues. One previous study [15] found that administration of oxytocin improves the subjective rating of emotional intensity of images of objects that include a social context (touch between objects) but not of images of objects without a social context (no touch between objects). In the present study, although the images of objects did not include a specific social context such as touching, N1 for images of objects differed depending on the OXTR rs53576 polymorphism.
One study reported no difference in the oxytocin level between GG/GA carriers and AA carriers of the OXTR rs53576 polymorphism [32]; however, the relationship between the rs53576 polymorphism and the oxytocin level is still unclear (reviewed in [16]). Thus, interpretation of the mechanism of how the rs53576 polymorphism modulates responses to nonsocial cues remains difficult. Future studies are needed to examine the relationships among the OXTR rs53576 polymorphism, oxytocin levels, and emotional processing of nonsocial cues.
For N2, we found that GG carriers of OXTR rs53576 showed more negative N2 than AA carriers in response to images of humans but not in response to images of objects. Moreover, GG carriers showed a greater N2 in response to images of humans than to images of objects, whereas AA carriers showed a greater N2 in response to images of objects than to images of humans. In the present study, N2 was more negative in response to negative images than to pleasant images, supporting previous results showing that N2 is sensitive to highly emotional stimuli [19,20,22]. Thus, we suggest that GG carriers and AA carriers of OXTR rs53576 show opposite patterns regarding emotional processing of social cues and nonsocial cues in the middle stage (200-320 ms);
GG carriers may show enhanced emotional processing of social cues compared to nonsocial cues, whereas AA carriers may show enhanced emotional processing of nonsocial cues compared to social cues. Similarly, Proverbio et al. [24,25] reported that N2 is more negative in response to images portraying persons than images portraying landscapes in women, but not in men, suggesting that this result is caused by a greater interest in social stimuli in women than in men. On this interpretation, the association between OXTR rs53576 and N2 shown in the present study may also be explained by the idea that GG carriers have a greater interest in social cues than AA carriers. This supports the previous findings of a higher sensitivity to social cues in GG carriers than AA/GA carriers [6][7][8][9][10][11][12][13].
LPP did not show any differences related to OXTR rs53576, unlike N1 and N2. Thus, the present results for LPP suggest that OXTR rs53576 does not affect the processing of emotional stimuli in the relatively late stage (600-1000 ms), regardless of the existence of social content in the stimuli. The observation of no association between OXTR rs53576 and late processing of social cues is in line with the previous result by Peltola et al. [12], who reported an association between OXTR rs53576 and ERP responses to human faces in N1, but not in LPP. Taken together, we suggest that the association between OXTR rs53576 and emotional processing may be more evident in the early stage than in the late stage. However, because the present study adopted a passive image viewing task and the previous study adopted a relatively simple cognitive task [12], future studies are needed to examine possible effects of OXTR rs53576 on emotional processing in the late stage during complex cognitive tasks such as the memory task that was adopted in Rimmele et al. [14].
Association between the OXTR rs53576 polymorphism and the subjective ratings
We found no difference in subjective ratings for valence and arousal of images among GG, GA, and AA carriers of the OXTR rs53576 polymorphism, although the results of our ERP response indicated differences in emotional processing of social cues and nonsocial cues among the different carriers of the OXTR rs53576 polymorphism. Although some studies (for example, [33]) suggest that physiological responses are more direct indices of responses than subjective ratings, future studies are needed to examine the association between the OXTR rs53576 polymorphism and subjective rating of nonsocial cues.
Implications for anthropology
The distribution of the OXTR rs53576 genotype is different between Asians and European Americans; more GG carriers than AA carriers are found among European Americans, whereas more AA carriers than GG carriers are found among Asians [26,27,34,35]. The present study also replicated previous results showing that AA carriers are more common than GG carriers in Asian populations (10 GG, 46 GA, and 32 AA carriers). This difference in the distribution of OXTR rs53576 carriers seems to be related to cultural differences in human behavior and emotion. For instance, one study [34] found that the frequency of the A allele of OXTR rs53576 is related to collectivistic cultural values. Another study on the distribution of OXTR rs53576 in Africa, Asia, and South Europe [35] suggested that the A allele of OXTR rs53576 may be related to favoritism toward sons. As one factor affecting the interaction between genes and culture, future studies should investigate the evolutionary route that resulted in the difference in the distribution of OXTR rs53576 among regions.
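Allele frequencies follow from genotype counts by simple counting, since each individual carries two alleles. A sketch using the sample reported above (10 GG, 46 GA, and 32 AA carriers; the helper name is ours):

```python
def allele_frequencies(n_gg, n_ga, n_aa):
    """Compute G- and A-allele frequencies from genotype counts:
    each GG individual contributes two G alleles, each GA one of each."""
    total = 2 * (n_gg + n_ga + n_aa)
    freq_g = (2 * n_gg + n_ga) / total
    freq_a = (2 * n_aa + n_ga) / total
    return freq_g, freq_a

# the present sample of n = 88 Japanese participants
freq_g, freq_a = allele_frequencies(10, 46, 32)
```

In this sample the A allele is the majority allele (frequency 110/176 = 0.625), consistent with the reported predominance of A carriers in Asian populations.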
Limitations and future directions
The present study has some important limitations. First, our sample size was small (n = 88) compared with most previous studies on associations between genotype and behavior or brain activity (for example, N = 94 [12], N = 108 [9], N = 179 [7], N = 228 [11], N = 285 [10]). Furthermore, the number of GG carriers in the present study was small (N = 10). As mentioned above, the distribution of the OXTR rs53576 genotype is different between Asians and European Americans. For this reason, several previous studies combined GA and AA carriers in their analyses [6,7,12], whereas other previous studies combined GG and GA carriers [26,29,32,36]. However, other previous studies did not combine genotypes and compared GG, GA, and AA carriers [11,13,27,28]. These different methods of grouping participants make the comparison of the findings between the present study and previous studies somewhat difficult.
Second, we examined only one OXTR SNP, rs53576. Although rs53576 is the most widely investigated SNP regarding the association between OXTR polymorphisms and variations in social behavior, previous studies have shown that other OXTR SNPs, such as rs7632287 [37], rs401015 [38], and rs2254298 [10,35], also affect human social behavior. For instance, the OXTR rs7632287 polymorphism is related to individual differences in pair-bonding behavior [37]. Thus, future studies need to be conducted with analyses of other OXTR SNPs.
Third, our participants were all male. Thus, we could not investigate the possible interaction among gender, the OXTR rs53576 polymorphism, and emotional processing that has been suggested in previous studies [10,11]. Future studies adopting the same methods as the present study with the inclusion of female participants are necessary.
Conclusion
The present study investigated an association between the OXTR rs53576 polymorphism and the time course of emotional processing of social and nonsocial cues by measuring the ERP response. From the present results, we suggest that the OXTR rs53576 polymorphism affects emotional processing of not only social cues but also nonsocial cues in the very early stage (before 200 ms); however, we also suggest that the OXTR rs53576 polymorphism is related specifically to increased emotional processing of social cues in the middle stage (200-320 ms).
Symmetric Assembly Puzzles are Hard, Beyond a Few Pieces
We study the complexity of symmetric assembly puzzles: given a collection of simple polygons, can we translate, rotate, and possibly flip them so that their interior-disjoint union is line symmetric? On the negative side, we show that the problem is strongly NP-complete even if the pieces are all polyominos. On the positive side, we show that the problem can be solved in polynomial time if the number of pieces is a fixed constant.
Introduction
The goal of a 2D assembly puzzle is to arrange a given set of pieces so that they do not overlap and form a target silhouette. The most famous example is the Tangram puzzle, shown in Figure 1. Its earliest printed reference is from 1813 in China, but by whom or exactly when it was invented remains a mystery [5]. There are over 2,000 Tangram assembly puzzles [5], and many more similar 2D assembly puzzles [3]. A recent trend in the puzzle world is a relatively new type of 2D assembly puzzle which we call symmetric assembly puzzles. In these puzzles the target shape is not specified. Instead, the objective is to arrange the pieces so that they form a symmetric silhouette without overlap.
The first symmetric assembly puzzle, "Symmetrix", was designed in 2003 by Japanese puzzle designer Tadao Kitazawa and was distributed by Naoyuki Iwase as his exchange puzzle at the 2004 International Puzzle Party (IPP) in Tokyo [4]. In this paper, we aim for arrangements that are line symmetric (reflection through a line), but other symmetries such as rotational symmetry could also be considered. The lack of a specified target shape makes these puzzles quite difficult to solve.
We study the computational complexity of symmetric assembly puzzles in their general form. We define a symmetric assembly puzzle or SAP to be a (1) (2) (3) Q: Can you make a line symmetric shape from these two pieces? (Two solutions) Figure 1: [Left] The seven Tangram pieces (1) can be assembled into non-simple silhouettes (2) and (3).
[Right] A symmetric assembly puzzle invented by Hiroshi Yamamoto [7]: given the two black pieces (right) from the classic T puzzle (left), make two different line symmetric shape. (Used with permission.) set of k simple polygons P = {P 1 , P 2 , . . . , P k }, with n = |P 1 | + · · · + |P k | the total number of vertices in all pieces. By simple polygon we mean a closed subset of R 2 homeomorphic to a disk bounded by a closed path of straight line segments where nonadjacent edges and vertices do not intersect. A symmetric assembly f : P → R 2 of a SAP P is a planar isometric embedding of the pieces so that their mapped interiors are disjoint and their mapped union forms a simple polygon that is line symmetric. We allow pieces to flip over (reflect), but other variants of the puzzle may disallow this. Given that humans have difficulty solving SAPs with even a few low-complexity pieces, we consider two different generalizations: bounded piece complexity (|P i | = O(1)) and bounded piece number (k = O(1)). In the former case, we prove strong NP-completeness, while in the latter case, we solve the problem in polynomial time (the exponent is linear in k).
Many Pieces
First we show that it is hard to solve symmetric assembly puzzles with a large number of pieces, even if each piece has bounded complexity (|P i | = O(1)).
Theorem 1 Symmetric assembly puzzles are strongly NP-complete even if each piece is a polyomino with at most six vertices and area upper bounded by a polynomial function of the number of pieces.
Proof. If a SAP has a solution, the location and orientation of each piece within a symmetric assembly is a solution certificate of polynomial size checkable in polynomial time, so symmetric assembly puzzles are in NP. We reduce from the Rectangle Packing Puzzle problem, known to be strongly NP-hard [2]. Specifically, it is (strongly) NP-complete to decide whether k given rectangular pieces-sized 1 × x 1 , 1 × x 2 , . . . , 1 × x k , where the x i 's are positive integers bounded above by a polynomial in k-can be exactly packed into a specified rectangular box with given width w and height h and area x 1 +x 2 +· · ·+x k = wh.
Let I = (x_1, . . . , x_k, w, h) be a rectangle packing puzzle. Without loss of generality, we assume that w ≥ h. Now let I′ = (P_1, . . . , P_k, F) be the SAP where P_i is the 1 × x_i rectangle for each i ∈ {1, . . . , k}, and F is the polyomino in Figure 2. We call F the frame piece of I′. We show that I has a rectangle packing if and only if I′ has a symmetric assembly.
Clearly, if I has a rectangle packing, then the pieces P_1, . . . , P_k can be packed into the w × h hole in the frame piece, creating a line symmetric W × H rectangle and solving the SAP. Now we show the reverse implication. Assume that I′ has a symmetric assembly, and let O* be a line symmetric polygon formed by the pieces {P_1, . . . , P_k, F}. We claim that O* must be a W × H rectangle, which will imply that I is a yes-instance of RPP. Fix a placement of the pieces of I′ that forms O*, and let ℓ be one of its lines of symmetry. Assume, without loss of generality, that ℓ is a vertical line. Let F′ be the reflection of F about ℓ.
Observation 1 implies that ℓ passes through an interior point of F. Let B be the line containing the segment of F with length 4w. Let c be the center of the frame piece's bounding box.
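The piece set of the reduction can be generated mechanically. The following sketch builds the rectangular pieces and checks the RPP area precondition; the frame polyomino F of Figure 2 has dimensions we do not reproduce here, so it is left out, and the function name is ours:

```python
def rpp_pieces(xs, w, h):
    """Pieces P_1..P_k of the reduction: one 1-by-x_i rectangle per
    integer x_i. Raises if the instance violates the RPP precondition
    sum(x_i) = w * h (total piece area must fill the hole exactly)."""
    if sum(xs) != w * h:
        raise ValueError("total piece area must equal the w-by-h hole area")
    # each rectangle as a counter-clockwise vertex list
    return [[(0, 0), (x, 0), (x, 1), (0, 1)] for x in xs]

pieces = rpp_pieces([2, 2, 2, 3, 3], w=3, h=4)  # 2+2+2+3+3 = 12 = 3 * 4
```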
Lemma 2 B is either parallel or orthogonal to ℓ.
Proof. Suppose for contradiction that B is neither parallel nor orthogonal to ℓ. Let α be the smaller angle made by B and ℓ. We partition the edges of F crossed by ℓ into two at their intersection points. Let F_L and F_R be the sets of segments on the left and right portions of F, respectively. Consider the set of counter-clockwise angles between ℓ and the lines containing segments of F_L. The assumptions that B and ℓ are neither parallel nor orthogonal, and that F is a polyomino, together imply that the set contains exactly two angles α_L and β_L, where α_L ≤ β_L and α_L + π/2 = β_L. Similarly, let α_R and β_R be the clockwise angles between ℓ and the lines containing segments of F_R, where α_R ≤ β_R and α_R + π/2 = β_R. Since α_L + β_R = π, it holds that α_L + α_R = π/2. Note that α ∈ {α_L, α_R}.
Two distinct pieces of the SAP are connected if their fixed placements forming O* share a non-degenerate line segment on their edges. Let P be the subset of {P_1, . . . , P_k, F} such that each P_i ∈ P can be reached from F by repeatedly following connected pieces in O*.
As before, consider the angles formed by ℓ and the lines containing segments in the left and right parts of P. Since all pieces are polyominoes, these lines cannot make angles other than α_L, β_L, α_R, and β_R with ℓ. Further note that the subset O of O* covered by P must be mirror-symmetric with respect to ℓ, or else O* would not be. This implies that α_L = α_R. Since α_L + α_R = π/2, the only solution in which ℓ is not parallel or orthogonal to B is when α_L = α_R = π/4 and α = π/4. However, if α = π/4, then F ∩ F′ is a subset of an H × H rectangle (see Figure 2), whose area is at most H^2 = 9w^2, contradicting Observation 1.
Figure 3: If ℓ passes through c and is either orthogonal or parallel to B, the symmetric assembly puzzle can only be completed into a rectangle.
So ℓ is either parallel or orthogonal to B. Further, it passes through c (see Figure 3). In either case, F ∪ F′ is a W × H rectangle, and thus O* = F ∪ F′. This implies that O* \ F is a w × h rectangle that must contain the remaining pieces of the SAP. In particular, this placement packing of P_1, . . . , P_k gives a solution to the instance I of RPP, completing the proof of Theorem 1.
We extend the above proof to show that the problem stays strongly NPcomplete even when each piece is a convex quadrilateral.
Theorem 3 Symmetric assembly puzzles are strongly NP-complete even if each piece is a convex quadrilateral with area upper bounded by a polynomial function of the number of pieces.
Proof. We note that the only piece that is not a convex quadrilateral is the frame piece F . Hence, we split this into two convex quadrilateral pieces as shown in Figure 4. We note that due to the dimensions of H and W , the four angles α, β, γ, and δ are all unique. Furthermore, only α + δ and β + γ do not sum up to multiples of 90 degrees. If we show that in any line symmetric solution these four angles have to be aligned as in Figure 4, Theorem 1 completes the proof.
Assume the angles are not matched as in Figure 4. We first show that extending γ or δ by a multiple of 90 degrees is not useful. We focus on γ, but the argument for δ is analogous. If we extend γ using a right angle of the other frame piece, it is easy to verify that the imbalance resulting from the implied line of symmetry cannot be overcome using only the remaining rectangles of combined area wh (see Figure 6). Extending γ using the rectangles also does not lead to a line symmetric polygon, since placing the other frame piece afterwards still leads to an imbalanced shape.
Hence, since the four angles are all unique and the symmetry line can pass through at most two corners of a simple polygon, at least two of these angles have to meet in a point. If α is matched to δ or if β is matched to γ, we note that the created angle is not a multiple of 90 degrees and thus we still have three unique angles. This implies that in this case, both α is matched to δ and β is matched to γ (see Figure 5). Since neither created angle is a multiple of 90 degrees, the only way to construct a line symmetric solution is for the symmetry line to pass through both created angles. However, this implies that in order to make a line symmetric shape, we need to at least add one region of area 3w 2 −wh and one of area 3wh. Hence, the total area required is at least 3w 2 + 2wh, which is more than the wh combined area of the rectangles. Therefore, the four angles have to be aligned as in Figure 4.
This result raises the question of what the simplest shape is for which the problem is strongly NP-complete. We conjecture that the problem is still strongly NP-complete even if each piece is a right triangle.
Conjecture 4 Symmetric assembly puzzles are strongly NP-complete even if each piece is a right triangle with area upper bounded by a polynomial function of the number of pieces.
The idea would be to reduce from the 3-Partition problem: it is (strongly) NP-complete to decide whether a given set of 3k positive integers (each integer is bounded from above by a polynomial in k) can be partitioned into k triples, such that the sum of the integers in each triple is the same.
Let {a_1, ..., a_3k} be the given set of integers in increasing order. We first transform these integers into almost squares of size 1 × (1 + ε_i), such that the (1 + ε_i) sides of each triple sum to the same length. To ensure that ε_i is at most 1/1000 for each square, we transform each a_i into an almost square of size 1 × (1 + a_i/(1000 a_3k)). Note that this changes neither the triples nor the solvability of the 3-Partition instance.
Figure 7: The hole for the 3-Partition instance is shown in gray.
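The scaling step can be checked mechanically. A sketch (the helper name is ours) that maps each integer to its almost-square side length and makes the two key properties explicit:

```python
def almost_square_sides(a):
    """Side lengths 1 + eps_i with eps_i = a_i / (1000 * a_max).
    Every eps_i is at most 1/1000, and a triple of sides sums to
    3 + (a_i + a_j + a_k) / (1000 * a_max), so triples keep equal
    side sums exactly when the integer triples have equal sums."""
    amax = max(a)
    return [1 + ai / (1000 * amax) for ai in a]

# triples {1, 2, 6} and {2, 3, 4} both sum to 9
sides = almost_square_sides([1, 2, 6, 2, 3, 4])
```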
Figure 8: Splitting an almost square.
Next, we create a big square frame with a hole; the area of this hole is equal to the total area of the almost squares. We split the frame into right triangles as shown in Figure 7, while ensuring that any combination of non-right angles is unique.
Finally, we split the 3k almost squares into 24k right triangles. The general idea behind the splits is the same as for the frame: we pick four points close to the middles of the sides of the almost square and split the square as shown in Figure 8. More precisely, when s is the length of a side, we pick a point p that is at most s/2k away from the middle of that side. Again, we require that any combination of non-right angles is unique.
This uniqueness of angles should ensure that the triangles can only be combined into the desired frame and almost squares. Proving this formally, however, turns out to be rather lengthy; hence we leave this as a conjecture instead.
Constant Pieces
Next we analyze symmetric assembly puzzles with a constant number of pieces but many vertices, and show they can be solved in polynomial time.
Theorem 5 Given a symmetric assembly puzzle with a constant number of pieces k containing at most n vertices in total, whether it has a symmetric assembly can be decided in polynomial time with respect to n.
To prove this theorem, we present a brute force algorithm for solving a SAP that runs in polynomial time for constant k. We say two pieces in a symmetric assembly are connected to each other if their intersection in the symmetric assembly contains a non-degenerate line segment, and let the connection between two connected pieces be their intersection not including isolated points. We will call two pieces fully connected if their connection is exactly an edge of one of the pieces, and partially connected otherwise. Call a piece a leaf if it connects to at most one piece, and a branch otherwise. Given a leaf, let its parent be the piece connected to it (if it exists), and let its siblings be all other pieces connected to its parent. An illustration demonstrating these terms can be found in Figure 9.
We will use a few utility functions in our algorithm. Deciding whether a single simple polygon has a line of symmetry can be done in linear time [6]. We will use isSym(P ) to denote this algorithm, returning TRUE if polygon P has a line of symmetry and FALSE otherwise. In addition, we can test congruence of polygons in linear time using cong(P, Q), returning TRUE if P and Q are congruent polygons, and FALSE otherwise.
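For intuition, isSym can be realized by a brute-force sketch, far slower than the linear-time algorithm of [6] but enough to pin down the specification: any symmetry axis of a simple polygon meets its boundary only at vertices or edge midpoints, so it suffices to try every axis through two such candidate points (the names and the O(n^4) approach are ours):

```python
def reflect(p, a, b):
    """Reflect point p across the line through distinct points a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    fx, fy = a[0] + t * dx, a[1] + t * dy  # foot of the perpendicular
    return (2 * fx - p[0], 2 * fy - p[1])

def is_line_symmetric(poly, eps=1e-7):
    """Brute-force isSym: try every axis through two of the 2n candidate
    points (vertices and edge midpoints); the polygon is symmetric iff
    some reflection maps its vertex cycle onto itself reversed."""
    n = len(poly)
    mids = [((poly[i][0] + poly[(i + 1) % n][0]) / 2,
             (poly[i][1] + poly[(i + 1) % n][1]) / 2) for i in range(n)]
    cands = poly + mids
    close = lambda p, q: abs(p[0] - q[0]) < eps and abs(p[1] - q[1]) < eps
    for i in range(len(cands)):
        for j in range(i + 1, len(cands)):
            if close(cands[i], cands[j]):
                continue
            # reflected vertex list, traversed in reverse orientation
            r = [reflect(p, cands[i], cands[j]) for p in poly][::-1]
            if any(all(close(r[(s + k) % n], poly[k]) for k in range(n))
                   for s in range(n)):
                return True
    return False
```

Since a reflection is an isometry, a candidate axis passes this test only if the polygon genuinely maps onto itself, so the sketch gives no false positives; cong(P, Q) can be sketched analogously by comparing edge-length and angle sequences up to cyclic rotation and reversal.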
In addition, we will need to construct simple polygons from provided simple polygons by laying them next to each other along an edge. Let E_P denote the set of directed edges (p_i, p_j) from a vertex p_i to an adjacent vertex p_j of some simple polygon P. Given an edge e ∈ E_P, we denote its length by λ(e). Let e_P = (p_1, p_2) be a directed edge of a polygon P, let e_Q = (q_1, q_2) be a directed edge of a polygon Q, and let d be a nonnegative length strictly less than λ(e_P) + λ(e_Q). Translate Q so that q_1 is incident to the point on the ray from p_1 containing e_P a distance d from p_1; then rotate Q so e_Q is collinear and in the same direction as e_P; and finally possibly reflect Q about e_Q if necessary so that the respective interiors of P and Q incident to e_P and e_Q lie in different half planes. Call these transformations the mapping g : P ∪ Q → R^2. Then we define join(e_P, e_Q, d) to be g(P) ∪ g(Q) if it is a simple polygon and the interior of g(P) ∩ g(Q) is empty (i.e., the pieces form a simple polygon without overlapping), and the empty set otherwise. See Figure 9. If a SAP has a symmetric assembly, let its connection graph be a graph on the pieces with an edge connecting two pieces if they are connected in the symmetric assembly. Because a symmetric assembly is a simple polygon by definition, its connection graph is connected and has a spanning tree; we can then construct the assembly using a concatenation of join procedures in breadth-first-search order from an arbitrary root. Because the parameter d is not discrete, the total solution space of simple polygons constructible from the pieces of a SAP may be uncountable. However, we can exploit the structure of symmetric assemblies to search only a finite set of configurations.
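The rigid-motion part of join can be sketched directly; the final reflection that puts the two interiors in opposite half planes, and the simplicity/overlap test that decides between g(P) ∪ g(Q) and the empty set, are omitted here, and the function name is ours:

```python
import math

def join_transform(e_p, e_q, d):
    """Return the map g restricted to Q: translate Q so q1 lands at the
    point a distance d from p1 along the ray of e_p, then rotate so e_q
    is collinear with and points the same way as e_p."""
    (p1, p2), (q1, q2) = e_p, e_q
    theta = (math.atan2(p2[1] - p1[1], p2[0] - p1[0])
             - math.atan2(q2[1] - q1[1], q2[0] - q1[0]))
    length = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    ax = p1[0] + d * (p2[0] - p1[0]) / length  # image of q1
    ay = p1[1] + d * (p2[1] - p1[1]) / length
    c, s = math.cos(theta), math.sin(theta)
    def g(p):
        x, y = p[0] - q1[0], p[1] - q1[1]
        return (ax + c * x - s * y, ay + s * x + c * y)
    return g

# slide a unit square's bottom edge along the bottom edge of a 2x2 square
g = join_transform(((0, 0), (2, 0)), ((0, 0), (1, 0)), d=0.5)
```

A full join would additionally reflect Q across the shared line whenever both interiors land on the same side (detectable, for counter-clockwise input, from the sign of Q's signed area after mapping) and reject placements whose union is non-simple or whose interiors overlap.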
In order to enumerate possible configurations, we would like to distinguish between three cases of puzzle (see Figure 10). Case 1: the puzzle has a symmetric assembly in which two connected pieces share a vertex on their connection; Case 2: the puzzle has a symmetric assembly not satisfying Case 1 in which the distance between vertices from the connecting edges between two connected pieces has the same length as an edge from a third piece (we say the connection between two pieces constructs the length of another edge); Case 3: the puzzle has a symmetric assembly not satisfying Case 1 or Case 2 where a nonempty set of pieces are symmetric about the line of symmetry of the symmetric assembly, and any remaining pieces are pairs of congruent pieces.
Lemma 6 If a SAP has a symmetric assembly, it can be described by one of the above three cases.
Proof. Suppose for contradiction we have a symmetric assembly f : P → R 2 of a SAP P that does not satisfy any of the above cases, let s : f (P) → f (P) be an automorphism reflecting f (P) across a line of symmetry L, and let µ = s • f , mapping a point p ∈ P to the reflection of f (p) across L.
Consider the connection graph of f (P). Because the symmetric assembly forms a simple polygon and no two connected pieces share a vertex, by exclusion from Case 1 the connection graph is a tree which we call a connection tree, or else the symmetric assembly would not be homeomorphic to a disk. Further, all connections are single non-degenerate line segments.
Let P be a leaf in the symmetric assembly, whose siblings include at most one branch. We claim that either P is a line symmetric polygon, or µ(P ) is itself a piece of the SAP congruent to P contradicting exclusion from Case 3. First, if P has no parent and is the only piece in the symmetric assembly, P must be a line symmetric polygon. Otherwise, let Q be the parent of P with edge e P from E P touching edge e Q from E Q . Let e QP denote the subset of e Q that maps to the intersection f (e P ) ∩ f (e Q ). Segment f (e QP ) cannot lie along L or else one of f (e P ) or f (e Q ) would share a vertex with another piece, contradicting exclusion from Case 1. Alternatively suppose f (e QP ) and µ(e QP ) are the same line segment. As a leaf, P connects to the rest of the symmetric assembly only through f (e QP ), so for the assembly to be symmetric, f (P ) must be the same as µ(P ), and piece P is a line symmetric polygon.
Lastly, suppose f (e QP ) and µ(e QP ) are not the same line segment; we claim µ(P ) is itself a piece of the SAP congruent to P . Suppose for contradiction it were not. Then µ(P ) either (a) contains a piece as a strict subset, (b) does not fully contain a piece but intersects interiors of multiple pieces, or (c) is a strict subset of a single piece (see Figure 11). First suppose (a), so µ(P ) contains some piece S as a strict subset. Root the connection tree at a piece R with the shortest graph distance to S in the connection tree for which f (R) ∩ µ(P ) = ∅ and f (R) \ µ(P ) = ∅ which exists because µ(e P Q ) must intersect some piece. Then a leaf P with a longest root to leaf path that contains S is also fully contained in µ(P ). Let Q be its parent with edge e P from P touching edge e Q from Q . Because R is the piece crossing the boundary of µ(P ) closest to S in the connection tree and P has the longest root to leaf path, e Q connects to at most one branch piece that intersects µ(P ). Segment f (e P ) cannot contain an edge of the symmetric assembly or else it would construct a length equal to an edge of P , contradicting exclusion from Case 2. So every leaf fully contained in µ(P ) connected to e Q is fully connected to Q . Each endpoint of the subset of e Q in µ(P ) has shortest Euclidean distance to the connection of one leaf intersecting µ(P ) connected to e Q . But at least one of these leaves is fully contained in µ(P ) which would construct a length equal to an edge of P , contradicting exclusion from Case 2. So µ(P ) does not fully contain a leaf, contradicting case (a). Now suppose (b), and suppose two connected pieces intersect µ(P ). The edges connecting these two pieces must overlap in µ(P ) to construct a length equal to an edge of P , contradicting exclusion from Case 2. So µ(P ) does not intersect the interior of multiple branch pieces.
Finally suppose (c), and let µ(P ) be the strict subset of some piece Q * . Segment f (e P ) cannot contain an edge of the symmetric assembly or else it would create a length equal to an edge of Q * , contradicting exclusion from Case 2. So P is fully connected. A useful corollary of the preceding three arguments is that the reflection of any partially connected leaf of a symmetric assembly that conforms to neither Case 1 nor Case 2, must itself be a piece congruent to the leaf. We will refer to this property later as partial leaf congruence.
Here we note that none of the arguments so far have required P to be a leaf having at most one branch sibling; we will use that fact in the argument to follow. Let ℓ be the line collinear with segment f (e QP ), and let e ℓ be the subset of Q that maps to the largest connected subset of ℓ ∩ f (Q) containing f (e QP ). Consider the two disconnected sections of the boundary of Q between an endpoint of e P Q and an endpoint of e ℓ , which must each be more than an isolated point or exclusion from Case 1 would be violated. Piece P has at most one branch sibling, so at most one of these sections can be connected to a branch. Let q be an endpoint of e ℓ in a section not connected to a branch.
Consider the boundary of Q between e QP and q. Suppose this boundary were a line segment subset of e Q , implying the internal angle of Q at q is less than π; see Figure 12. Then µ(q) is in f (Q * ) or else Q * would connect to another piece somewhere on the segment between e QP and q and construct an edge of the same length as a leaf connected to e Q , contradicting exclusion from Case 2. If µ(q) is in f (Q * ) and Q does not connect with any other piece at q, then µ(q) must be a vertex of f (Q * ). Alternatively, q partially connects to a leaf through e Q . By partial leaf congruence, the reflection of this leaf must itself be a congruent piece, so µ(q) is a vertex of f (Q * ). In either case, the edge of Q * adjacent to µ(q) contained in µ(e Q ) will have the same length as the subset of e Q between q and a vertex of a leaf, contradicting exclusion from Case 2.
Thus, the boundary of Q between e QP and q is not a line segment, so f (Q) must cross ℓ, and the endpoint q' of e Q in this section is a vertex of Q with internal angle greater than π; see Figure 12. By the same argument as in the preceding paragraph, µ(q') must be in f (Q * ), and if it were a vertex, we would have the same contradiction as before. However, this time µ(q') need not be a vertex of f (Q * ) because f (Q * ) may extend past µ(q'), with Q * connecting to another piece on the other side of e ℓ . However, the connection between these pieces will construct an edge that is the same length as an edge in either Q or a leaf connected to Q, and we have arrived at our final contradiction. So if P is not line symmetric, µ(P ) is itself a piece of the SAP congruent to P .
Thus, our SAP has a leaf that is either a line symmetric piece, symmetric about the line of symmetry, or one of a pair of two leaf pieces that are congruent and symmetric about the line of symmetry. If we remove such an identified leaf piece or pair from the SAP, what remains is a SAP with fewer pieces that also admits a symmetric assembly. Further, removing pieces cannot make the new SAP belong to one of the cases that the original SAP did not belong to before. Repeatedly removing pieces using this process identifies every piece as either symmetric or uniquely paired with a piece congruent to it, contradicting exclusion from Case 3.
Since every symmetric assembly can be classified as one of these cases, we can check each case to decide whether the SAP has a symmetric assembly. Given a SAP that does not satisfy Case 1 or Case 2, by Lemma 6 it must satisfy Case 3 if it has a symmetric assembly. However, satisfying Case 3 is not sufficient to ensure a symmetric assembly. For example, two congruent regular polygons with many sides and a single regular star with many spikes cannot by themselves form a symmetric assembly even though they satisfy Case 3, because no pair of edges can be joined without making the pieces overlap. Thus, given a SAP in Case 3, we must search the configuration space of possible connected arrangements of the pieces for an arrangement that forms a simple polygon.
Recall that the connection graph for a symmetric assembly not in Case 1 must be a tree. For a SAP with k pieces, Cayley's formula says the number of distinct connection trees is k^(k−2) [1]. However, even if two pieces are connected, they could be connected through O(n^2) different pairs of edges, so the number of different edge distinguishing connection trees, connection trees distinguishing between which pairs of edges are connected, can be no more than n^(2k) · k^k = O(n^(2k)) (k is constant). As an instance of Case 3, P consists of one or more symmetric pieces, with the rest being congruent pairs. Let D P and D P ' be maximal disjoint subsets of P such that there exists a matching η : D P → D P ' between pieces in D P and D P ' such that matched pairs are congruent. Let S P be the set of symmetric pieces in P not in D P or D P '. Let S T denote some subset of the symmetric pieces contained in D P , and define a trunk to be a subset of symmetric pieces R T = S P ∪ S T ∪ η(S T ) that can be connected into a simple polygon without overlap while aligning each of their lines of symmetry to a common line L (see Figure 13). Define a half tree T to be an edge distinguishing connection tree on R T ∪ D P such that every piece in D P connected to a piece R in R T connects through an edge of R intersecting the same half-plane bounded by L. We call this half-plane the connecting half-plane, and the other half-plane the free half-plane. The reason we define half trees is that if we can find a point in their configuration space for which pieces do not intersect and for which pieces in D P not in the trunk do not intersect the free half-plane, we can place the remaining congruent pieces, the matches under η of the pieces in D P \ S T , at the mirror images of their respective matched pairs to complete a symmetric assembly. Let T P be the set of possible half trees. Let L T be the set of undirected edges {P, Q} where piece P is connected to piece Q in tree T ∈ T P , and let m = |L T | < k.
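Cayley's formula quoted above can be checked directly for small k by brute-force enumeration. The following sketch (not from the paper) counts labeled spanning trees on k vertices by enumerating all (k−1)-edge subsets of the complete graph:

```python
from itertools import combinations

def count_labeled_trees(k):
    """Count labeled trees on k vertices: enumerate all (k-1)-edge
    subsets of the complete graph and keep the acyclic ones (a set of
    k-1 edges with no cycle on k vertices is a spanning tree)."""
    vertices = range(k)
    all_edges = list(combinations(vertices, 2))
    count = 0
    for edges in combinations(all_edges, k - 1):
        parent = list(vertices)  # union-find forest for cycle detection

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        acyclic = True
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False  # this edge closes a cycle
                break
            parent[ru] = rv
        if acyclic:
            count += 1
    return count

# Cayley's formula: k^(k-2) labeled trees on k vertices.
for k in range(2, 7):
    assert count_labeled_trees(k) == k ** (k - 2)
```

The brute force is exponential in k, which is fine here only because k is tiny; the point is merely to confirm the count k^(k−2) that feeds the O(n^(2k)) bound on edge distinguishing connection trees.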
For a fixed edge distinguishing connection tree, the orientation of each piece is fixed, as pieces may only translate along their specified connection. We want to define a set of intervals I T {P, Q} where we could join e P to e Q while together forming a simple polygon, without overlap between P and Q. For each {P, Q} ∈ L T with e P and e Q the respective connecting edges of P and Q with λ(e P ) ≥ λ(e Q ), let I T {P, Q} be defined as follows. If P and Q are both in R T , let I T {P, Q} be the empty set if join(e P , e Q , d P Q ) is the empty set and {d P Q } otherwise, where we use d P Q to denote |λ(e P ) − λ(e Q )|/2, the distance d would need to be in order to align the midpoints of e P and e Q . Alternatively, if P or Q is not in R T , let I T {P, Q} be the closure of the set

[Algorithm 1: Function hasAssemblyCase3(P). Input: a symmetric assembly puzzle P that satisfies Case 3.]

We now describe the subset of R^m where intersection occurs between two pieces that are not connected in T . If two pieces in a configuration overlap, by continuity there exist two edges e P and e Q from two distinct pieces P and Q that also intersect. The positions of e P and e Q are translations parameterized by a point in C T , and the region in which the two edges intersect is a convex region X T {e P , e Q } ⊂ R^m bounded by four hyperplanes forming the m-dimensional parallelogram representing the intersection of the two edges. For each of the O(n^2) pairs of edges from distinct pieces that are not connected, we can subtract each X T {e P , e Q } from C T to form C' T . If C' T contains any point in its interior, then there exists a symmetric assembly, since it will be a point in the configuration space avoiding overlap between pieces. However, the boundary of C' T may contain configurations that are weakly simple, as the boundaries of each I T not between two pieces in R T and the boundaries of each X T all correspond to configurations containing non-simple touching between pieces.
Thus we require C' T to have a point in its interior, unless all pieces exist in R T , in which case C' T may be a single point corresponding to a symmetric assembly.
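The interior test on C' T can be illustrated in one dimension, where the allowed configuration set is a single interval and each forbidden region X is a closed interval to subtract. This is only a simplified analogue of the m-dimensional construction, with all endpoints hypothetical:

```python
def has_interior_point(allowed, forbidden):
    """allowed: (lo, hi) closed interval of configurations.
    forbidden: list of (lo, hi) closed intervals to subtract.
    Returns True iff allowed minus all forbidden regions still
    contains an open interval, i.e. an interior point survives."""
    lo, hi = allowed
    # Clip forbidden pieces to the allowed interval, drop the rest.
    pieces = sorted((max(lo, a), min(hi, b)) for a, b in forbidden
                    if a < hi and b > lo)
    cursor = lo
    for a, b in pieces:
        if a > cursor:       # uncovered open gap (cursor, a)
            return True
        cursor = max(cursor, b)
    return cursor < hi       # uncovered open tail (cursor, hi)

# Entire interval blocked: no valid interior configuration remains.
assert not has_interior_point((0.0, 1.0), [(-0.5, 0.6), (0.5, 2.0)])
# A gap (0.4, 0.5) survives: an interior configuration exists.
assert has_interior_point((0.0, 1.0), [(0.0, 0.4), (0.5, 1.0)])
```

Note that a configuration surviving only as a single boundary point (for example forbidden regions meeting exactly at one coordinate) is rejected, mirroring the requirement above that boundary points may be only weakly simple.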
Consider the function hasAssemblyCase3 described in Algorithm 1.
Lemma 7
Given symmetric assembly puzzle P that satisfies Case 3, function hasAssemblyCase3(P) returns TRUE if and only if P has a symmetric assembly, and terminates in O(n^(5k)) time.
Proof. If P has a symmetric assembly satisfying Case 3 with nonempty D P , C' T will have a point in its interior for some tree T as argued above; or if D P is empty, C' T will be nonempty. There are O(n^(2k)) elements of T P . There are m = O(k) interval sets I T {P, Q}, each having computational complexity O(n), so we can construct C T naively in O(n^k) time. The union X T of the O(n^2) regions X T {e P , e Q }, which are m-dimensional convex regions, has computational complexity at most O(n^(2m)), so the final computational complexity of C' T = C T \ X T is at most O(n^(3m)) and can be computed in as much time. Thus, the running time of hasAssemblyCase3 is bounded by O(n^(5k)). Our brute force algorithm hasAssembly(P) is described in Algorithm 2.

[Algorithm 2: Function hasAssembly(P). Input: a symmetric assembly puzzle P.]
Lemma 8 Function hasAssembly(P) returns TRUE if and only if P has a symmetric assembly that satisfies either Case 1, Case 2, or Case 3, and terminates in O(n^(5k)) time.
Proof. We prove the claim by induction. For the base case, P consists of only a single piece satisfying Case 3, which drops directly to the last line of the algorithm checking Case 3, which, by Lemma 7, will evaluate correctly. Now suppose hasAssembly returns a correct evaluation for SAPs containing k − 1 pieces. We show that hasAssembly then returns a correct evaluation for SAPs containing k pieces.
The outer for loop of hasAssembly cycles through every pair of directed edges e P = (p 1 , p 2 ) and e Q = (q 1 , q 2 ) taken from different pieces P and Q. For each pair, hasAssembly first checks to see if there exists a symmetric assembly for which e P is connected to e Q with p 1 coincident to q 1 , which would satisfy Case 1. If one exists, then joining P and Q into one piece as described would produce a SAP P with one fewer piece that also has a symmetric assembly. Then evaluating hasAssembly on the smaller instance will return correctly by induction. Since the outer for loop checks every possible pair of edges that could be joined in a symmetric assembly satisfying Case 1, hasAssembly will return TRUE if P satisfies Case 1.
Next, hasAssembly checks to see if there exists a symmetric assembly for which e P is connected to e Q with p 1 and q 1 separated by a distance equal to the length of some other edge e R in P, which would satisfy Case 2. In the same way as with Case 1, both for loops check every possible pair of edges that could be joined at every possible length that could produce a symmetric assembly satisfying Case 2, so hasAssembly will return TRUE if P satisfies Case 2.
Otherwise, no symmetric assembly exists satisfying Case 1 or Case 2. By Lemma 7, hasAssemblyCase3 correctly evaluates whether P is in Case 3, so hasAssembly returns a correct evaluation for SAPs containing k pieces. Let T(k) be the running time of hasAssembly on an instance with k pieces. Then the recurrence relation for hasAssembly is T(k) = O(n^3)T(k − 1) + O(n^(5k)), where O(n^(5k)) is the running time given by Lemma 7. The running time for Case 3 dominates the recurrence relation, so hasAssembly terminates in O(n^(5k)) time. Now we can determine whether a symmetric assembly puzzle with a constant number of pieces has a symmetric assembly in polynomial time.
Proof (of Theorem 5). By Lemma 6, if the SAP has a symmetric assembly, it satisfies either Case 1, Case 2, or Case 3, and by Lemma 8, hasAssembly(P) can correctly determine whether it has a symmetric assembly satisfying one of the cases in polynomial time, proving the claim.
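The claim that the Case 3 term dominates the recurrence T(k) = n^3 T(k − 1) + n^(5k) can be sanity-checked numerically. The sketch below treats the O-terms as exact with constant 1 and assumes a base case T(1) = n^5 (both assumptions are for illustration only):

```python
def T(n, k):
    """Exact recurrence T(k) = n^3 * T(k-1) + n^(5k),
    with illustrative base case T(1) = n^5 (one Case 3 check)."""
    if k == 1:
        return n ** 5
    return n ** 3 * T(n, k - 1) + n ** (5 * k)

# Unrolling gives T(k) = sum_{i=0}^{k-1} n^(5k - 2i), a geometric
# series dominated by its first term, so n^(5k) <= T(k) <= 2*n^(5k)
# for every n >= 2. Check the bound on a few sample values.
for n in (2, 3, 10):
    for k in (2, 3, 5):
        assert n ** (5 * k) <= T(n, k) <= 2 * n ** (5 * k)
```

The exponent drops by 2 at each level of the unrolling because one recursive step trades a factor of n^5 (the Lemma 7 cost on one fewer piece) for a factor of n^3 (the number of recursive calls), which is why the top-level Case 3 cost dominates.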
Open questions include whether SAPs: are hard for simpler shapes (we conjecture SAPs containing only right triangles are still hard), are hard for non-simple target shapes with a constant number of pieces, or are fixed-parameter tractable with respect to the number of pieces (we conjecture W[1]-hardness).
"year": 2017,
"sha1": "4df8c819a29467126746a70412cd3e337d023346",
"oa_license": "CCBYNCSA",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/110865/1/Symmetric%20assembly%20puzzles%20are%20hard,%20beyond%20a%20few%20pieces.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7367f78fef991cb875ee558c2650e29b6acea14c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Proposed Customer Acquisition Strategy for Mobile Banking Application
Digital technology has produced numerous benefits and simplified work. Nonetheless, this exponential growth in digital technology has increased competition, significantly impacting the finance industry. This significant development in banking presented a vast opportunity that had to be carefully analysed and managed, resulting in intense competition among digital banking products such as mobile banking services, which have helped macro, micro, and small enterprises in Indonesia gain access to more convenient financial services. A regional bank owned by one of Indonesia's provinces launched a successful mobile banking app with revolutionary features that made transactions easier. The bank updated its mobile app in 2022 as part of its ongoing implementation of a change from a focus on products to one on customers. In response to user feedback, the new version enhances the consumer experience in terms of UI/UX and other factors. However, once a market-leading application ranked among the top 10 digital banking products in Indonesia, the mobile banking app is presently experiencing a decline in the number of app activations and is being displaced by competitors. Additionally, the bank has difficulty measuring the digital marketing campaigns for its mobile banking product. Beginning with an internal and external analysis, a customer acquisition strategy is proposed for the mobile banking services product, utilizing the RACE planning framework and a digital marketing implementation strategy.
INTRODUCTION
There is persistent and ongoing pressure on businesses in the modern world to adopt digital technologies and adapt their business models to the new reality, particularly among banks and other financial institutions. These companies are beginning to put forth their best effort for business success, even though the industry is still young, unpredictable, and fraught with new challenges. This significant development in banking was a significant chance that had to be cautiously monitored and managed, resulting in digital products such as mobile banking services that are widely received and utilized. BI, the Central Bank of Indonesia, reports that approximately 3.2 billion mobile banking transactions were conducted between the beginning of the year and May 2022. This value increased by 67.87% from the same period last year, which saw 1.90 billion transactions. In addition, between January 1 and May 13, 2022, mobile banking transactions totaled 3,888.09 trillion Rupiah, an increase of 43.76 percent year-over-year from 2,704.61 trillion Rupiah. Payment transactions increased by 57.31 percent year-over-year to a total of 221.56 trillion Rupiah. These figures respond to the desire of consumers to conduct transactions with convenience, efficiency, and adaptability at their fingertips. PT Bank Central Asia Tbk's BCA Mobile leads competitors. BCA's President Director Jahja Setiaatmadja remarked that establishing a unified digital solution is currently the main capital for keeping BCA competitive in banking transaction markets, and that MyBCA will also become an integrated services app. Due to many issues, the bank discontinued its mobile banking services app in 2021. The corporation released the new, enhanced app in 2022. The bank decided to build the new application separately and provide only mobile banking services in it, instead of integrating the popular digital wallet owned by the bank. According to data and an interview with the Head of
Marketing Department, user activations dropped across 2020, 2021, and 2022. Although the bank is in a solid position, its targeted user activations dropped, and offline marketing campaign events drove most user activations in 2022. The 45 activities included KJP distribution, school and university events, city town hall events, and more. The marketing staff promotes savings account products during offline events to promote the mobile banking app. This indicates that the mobile banking application needs a new strategy, such as a customer acquisition strategy, to target specific audiences and communicate via multiple channels (both offline and online) to expand the consumer base, or simply to determine whether the company is targeting the right consumers. The research should suggest a customer acquisition approach for the new version of the mobile banking services application.

1. STRATEGY
The definition of strategy is a collection of integrated commitments, actions and decisions that are used to explore superior competencies in gaining competitive advantage and competitive strategies (Wandebori, 2019). According to Porter (1969), the essence of strategy is choosing to perform activities differently than rivals; it is the creation of a unique and valuable position involving a different set of activities. The objective of strategic management is to increase a company's ability to compete in the market. Typically, strategic management considers the optimal allocation of employees and assets to attain these objectives.
2. CUSTOMER ACQUISITION
Companies attract prospective customers through customer acquisition. An efficient customer acquisition plan helps firms acquire new customers, retain existing ones, and increase revenue. No matter how good your customer retention plan is, customers will leave, thus you must fill the gaps to keep your company going. Gaining new customers involves persuading consumers to buy a company's goods and services, which is what customer acquisition refers to. Businesses that are just getting off the ground, growing, offering low switching costs, or seeing infrequent repeat purchases need to acquire customers. Before selecting how to acquire them, a business must select the ideal clients (Majid, 2020; Sağlam & el MONTASER, 2021; You & Joshi, 2020). Customer acquisition is important even where customer retention is justified as the core strategy; it has been observed that 25% or more of customers may need replacing annually (Sellers, 1989; Hanan, 2003; Buttle, 2004).
3. DIGITAL MARKETING
Digital marketing is described as marketing efforts that include branding and use various web-based media. Social media has become an essential part of many people's lives across the world. Digital and social media technologies and applications have also been widely employed to raise public awareness of government services and political campaigns at a minimal cost (Rowley et al., 2017). The foundation of digital marketing is making the company 'simple' for customers to contact by being present in the media with direct access to customers. This is an example of a horizontal method. Since customers must be served horizontally, when marketers and customers are on the same line, both can reach each other and customer satisfaction with service may be met (Kertajaya, 2009).
4. CUSTOMER ANALYSIS
Consumer analysis is a thorough comprehension of the consumer base. It enables businesses to optimize their strategic marketing process to create targeted advertisements, customize and prioritize specific features during product development, and adjust their current business plan to meet the ever-changing requirements of their customers. Three analyses are used. STP analysis stands for the segmenting, targeting and positioning initials that summarize the core of strategic messaging (Wilson, 2005). Value Proposition analysis is used to create value and the best value proposition; the Value Proposition Canvas was created by Dr. Alexander Osterwalder as a framework to ensure that the product and the market are compatible, and it is a precision modelling instrument for the connection between the two components, the customer segment and the value proposition. The existing 7P marketing mix is the collection of marketing instruments that can be utilized to create a comprehensive marketing strategy: product, price, promotion, place, people, process, and physical evidence.
RESEARCH METHODS
The research design is a strategic plan of action that connects the research questions to the research execution or implementation. Each step of the research for this final project is shown in the corresponding figure. External analysis, internal analysis, and customer analysis are used alongside theories to resolve the business difficulties. Following the analysis, the next stage is to do a SWOT analysis and a TOWS matrix analysis. After that, the strategy may be determined, along with an explanation of how it will be implemented for the bank's mobile banking application services offering, which should aid in the resolution of the business issue. The research incorporates primary and secondary data. For the primary data, the author interviewed bank employees who are familiar with the daily business process of the mobile banking application. The author obtained the secondary data for this project from the company's internal data, papers, books, and previous theses. Qualitative methods include text, visual, and design aspects and distinct data analysis steps. This strategy is chosen to describe field circumstances more precisely, clearly, and comprehensively.
Categories and interpretations of user needs:

Lifestyle: Budget-minded professionals. In the interviews, busy professionals wanted fast transactions and seamless interaction with other apps to manage their time, while the budget-conscious wanted the mobile banking product's discounts and cashbacks.

Worker: Self-employed professionals. The interviewees in this category included a construction company owner and a business property owner. Both said they have personal and company bank accounts and need the mobile banking app's expenditure monitoring, invoicing, and 'fast menu'; they also said the app's tax services feature helps their enterprises.

Setting: Urbanites. These people want QR code payment for online and in-store purchases.
Understanding mobile banking users' needs in these areas helps banks design products with better experiences, features, and marketing. Addressing functional, social, and emotional demands may help banks engage and satisfy customers. The author analyses potential consumers' buyer's readiness stage, which goes from awareness to consideration to decision making; this increases the relevance of marketing messages, making them more likely to resonate with potential customers.
PROPOSED CUSTOMER ACQUISITION
To reach more customers in this competitive market, the bank must come up with an appropriate marketing plan to penetrate the market via numerous channels. Based on the data analysis results presented in the previous chapter, Bank DKI should concentrate on promoting and introducing the application to the Gen Z market. This research proposes a customer acquisition strategy as a business solution for the mobile banking application product using the RACE Planning Framework. RACE comprises four phases or marketing activities aimed at assisting brands in engaging their customers throughout the customer lifecycle.
A. Reach
Reach means raising brand awareness in areas beyond the reach of your company but not beyond your influence. Potential customers are at the first stage of the marketing funnel, looking for the best option. Pay-per-click (PPC) advertising can be used at this stage, together with display advertising that appears on videotron screens, pops up on ATM machines, and is printed in the form of banners. Social media marketing also fits here, because the benefit of social media is that it connects everyone from B2B to B2C. Some of the major concerns in using social media marketing include sharing instructive and useful information, highlighting mobile banking features, leveraging user-generated content, and readily monitoring and analysing results.
B. Act
Act, which is short for "interact", is where the bank starts bringing in business leads. It's all about giving prospects something of value to help get the relationship off to a good start. The future conversion rate will depend on how many leads are brought in and how good the relationship is. Content marketing is crucial for acquiring new users and retaining current customers, and marketing automation software can be used to customize the content and align the message.
C. Convert
The conversion to sale might take place both online and offline. It requires persuading individuals to take the essential next step of becoming paying customers, whether the payment is made through online e-commerce or offline channels. Public relations can be used to soft-sell the company's products and services. Partnership/multichannel selling could take the form of collaboration with e-commerce platforms to reach a wide user base that frequently transacts online, collaboration with retailers and merchants that allows the bank to offer incentives and reward offers inside the mobile banking application, and collaboration with providers of digital wallets, which are garnering significant popularity.
D. Engage
When it comes to re-engaging users of the mobile banking app, the bank can utilize multiple effective channels, namely email marketing, in-app notifications, SMS, and CRM, to facilitate customer comprehension and eventually strengthen customer relationships and loyalty by leveraging CRM capabilities.
CONCLUSION AND SUGGESTION
The bank is experiencing a decline in the number of app activations. Once a market-leading application, it is now gradually being displaced by competitors. This happened due to the continuously evolving market situation, influenced by various factors such as regulatory changes, market competition and, most importantly, customer preference. According to the marketing team members who were interviewed, the bank is still having trouble measuring the effectiveness of its marketing campaigns, particularly its digital campaigns; because the product is a digital product, this reliance on offline channels is not aligned with it. This suggests that the mobile banking application requires a new strategy, such as a customer acquisition strategy, to target specific audiences and communicate via multiple channels (both offline and online) with the aim of expanding the consumer base, rather than just determining whether the company is targeting the appropriate consumers. In order to acquire more customers, the JakOne Mobile team should increase their digital marketing presence, since, according to the interview results, they are presently only successful in offline event activation. This research therefore proposes a customer acquisition strategy emphasizing digital marketing initiatives, utilizing the RACE framework and its KPIs, to concentrate more on acquiring the appropriate customers. Bank DKI needs to focus on developing display, pay-per-click, social media advertising, referral, content marketing, marketing automation, public relations, partnership/multichannel selling, re-engagement channels, and CRM. A recommendation for further researchers seeking insight into studies on mobile banking services products is to expand the findings and literature in relation to customer acquisition strategies, particularly for digital banking products.
For the bank: focus on the digital marketing campaign and identify the appropriate instruments for measuring the effectiveness of each digital marketing channel. If the available resources for the mobile banking product's digital marketing initiatives are insufficient, find a third party with expertise in the field, such as a creative agency.
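Measuring the effectiveness of each channel can start with two standard metrics: conversion rate and customer acquisition cost (CAC). The sketch below uses invented channel figures purely for illustration; none of the numbers come from the bank:

```python
# Hypothetical per-channel data: spend in IDR, people reached,
# and app activations attributed to the channel. All figures invented.
channels = {
    "social_media":  {"spend": 50_000_000, "reach": 200_000, "activations": 1_500},
    "ppc":           {"spend": 30_000_000, "reach": 80_000,  "activations": 600},
    "offline_event": {"spend": 90_000_000, "reach": 40_000,  "activations": 2_000},
}

def channel_metrics(data):
    """Return conversion rate (activations / reach) and CAC
    (spend / activations) for each channel."""
    report = {}
    for name, d in data.items():
        report[name] = {
            "conversion_rate": d["activations"] / d["reach"],
            "cac": d["spend"] / d["activations"],
        }
    return report

report = channel_metrics(channels)
# In this invented data, offline events convert best but also cost
# the most per acquired customer, which is exactly the trade-off
# the bank would need to quantify channel by channel.
assert report["offline_event"]["cac"] == 45_000.0
assert report["social_media"]["conversion_rate"] == 0.0075
```

In practice the attribution of activations to channels is the hard part; the arithmetic above only becomes meaningful once tracking (UTM tags, referral codes, event registration lists) ties each activation to a channel.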
Sixteen participant interviews focused on the needs and expectations of mobile banking application users (both users and non-users of the mobile banking app). This allowed the author to comprehend the market's view of mobile banking product use and the buyer's readiness stage for the mobile banking app: Awareness, Knowledge, Liking, Preference, Conviction. Based on the interviews, the author categorized several needs of mobile banking usage.
3. VALUE CHAIN ANALYSIS
PRIMARY ACTIVITIES
- Inbound Logistics: IP (Intellectual Property). The application uses no raw resources; thus, inbound logistics before launching the application include licensing and patenting and, as the new version focuses primarily on mobile banking capabilities, hiring professionals with mobile banking experience.
- Operations: Research, Product Management, Development.
- Outbound Logistics: Available on iOS and Android, and can be downloaded via the App Store and Google Play.
- Marketing and Sales: Social media advertising on various platforms; printed ads; offline acquisition from events.
- Services: 24/7 service and assistance for customers, using multiple sites for application reviews and a chatbot AI for complaints.
SUPPORT ACTIVITIES
- Firm infrastructure: Includes all the administrative, financial, legal and operational divisions responsible for the mobile banking application product.
- HR Management: Due to the bank's employment procedure, all current employees have a fair opportunity to experience job rotation within a specific time frame.
- Technology development: The bank's most recent application removes the e-wallet payment functions and focuses on mobile banking. Due to Indonesia's dynamic mobile banking market, the bank will constantly innovate to improve its app. The app's "stiff" UI and UX have recently been criticized in comparison with other mobile banking apps; the team noted the criticism and is immediately using it to redesign the app's user interface and experience.
- E-procurement: Contrary to conventional procurement, the bank uses technology to make its procurement processes more efficient, transparent, and quick. E-sourcing, e-tendering, and e-catalogue are utilized by the bank.
"year": 2023,
"sha1": "16e60d8d2ccbb3ac127c10678ff36f53f97e919c",
"oa_license": "CCBY",
"oa_url": "https://ijcsrr.org/wp-content/uploads/2023/07/145-31-2023.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e210cf069b001859804285904274cb39c973ef89",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Challenges of Statin Therapy in Clinical Practice (According to Outpatient Register «PROFILE» Data)
Aim. To identify the main problems of statin therapy in patients with high and very high cardiovascular (CV) risk in real clinical practice. Material and methods. The study was based on data from 2,457 patients who were included in the register before November 30, 2020: 1,250 men (50.9%) and 1,207 women (49.1%). A more detailed analysis was performed for groups of patients with high and very high CV risk who had indications for statin treatment at the time of inclusion in the register: of the 2,457 patients, 1,166 had very high CV risk and 395 were at high CV risk (a total of 1,561 people; the average age of these patients was 64.4±11.0 years). Results. Information on the lipid profile parameters, the levels of total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C), was available for 1,918 (78.1%) and 1,546 (62.9%) patients, respectively. Of the 1,561 patients with high and very high CV risk, TC and LDL-C levels were analyzed in 1,221 (78.2%) and 956 (61.2%) cases, statistically significantly more often in patients with high CV risk (p<0.05). Statins were recommended to only 823 (52.7%) patients with high and very high CV risk. Patients with very high CV risk received such prescriptions 4 times more often than patients with high CV risk: odds ratio (OR) 4.2; 95% confidence interval (CI) 3.2-5.3 (p<0.001). Doctors preferred atorvastatin in prescriptions (n=456, 55.4%), with rosuvastatin (n=244, 29.7%) and simvastatin (n=121, 14.7%) in second and third places. The target level of LDL-C was achieved 2 times more often in patients with high CV risk than in patients with very high CV risk: OR 2.0, 95% CI 1.4-3.0 (p<0.001). Conclusion.
The main problems of statin treatment in real clinical practice remain the non-assignment of these drugs to patients who have indications for such therapy and the failure to achieve the target levels of lipidogram indicators, which may probably be due to the clinical inertia of doctors regarding titration of statin doses, and in some cases caused by the choice of drugs that are not the most effective in reducing LDL cholesterol. Patients with very high CV risk are 4 times more likely to receive a recommendation to take statins compared to patients with high CV risk, but the target level of LDL cholesterol is reached in them 2 times less often.
Challenges of statin therapy
Introduction Atherosclerotic cardiovascular diseases continue to lead among the main causes of death in the population of most countries of the world [1,2]. The first place among drugs with a pathogenetic effect on the atherosclerotic process belongs to statins (HMG-CoA reductase inhibitors). These are the most studied lipid-modifying agents that improve the prognosis of the disease and of life in patients with atherosclerotic cardiovascular diseases. Statins are recommended for most patients with high and very high risk of cardiovascular complications and are the basis of lipid-lowering therapy in such patients [1][2][3][4][5][6]. Nevertheless, despite improvements in statin prescribing, the achievement of target cholesterol levels (total cholesterol and low-density lipoprotein cholesterol [LDL-C]) remains far from optimal according to large studies (EUROASPIRE IV, EUROASPIRE V, ARGO, etc.) [7][8][9]. Researchers also note difficulties in finding and implementing a universal strategy for the most effective use of statins in clinical practice [10].
The aim of this study is to determine the features and main problems of statin therapy, as well as to assess the possibility of achieving the target level of lipid profile parameters in the treatment of patients with high and very high cardiovascular risk (CV risk) with drugs from the statin group in clinical practice.
Materials and methods
This was a cross-sectional cohort study. To achieve the aim of the study, data from the outpatient register of patients with cardiovascular diseases and their risk factors «PROFILE» were analyzed.
The «PROFILE» register is a register of the Department of Preventive Pharmacotherapy of the Federal State Budgetary Institution «National Medical Research Center for Therapy and Preventive Medicine» of the Ministry of Health of Russia, which includes all patients who applied for a consultation about cardiovascular diseases or for an assessment of their possible participation in clinical trials. The initial consultation with a cardiologist is the first visit/inclusion of the patient in the registry. The register database has been formed since 2014.
Analysis of patient data was performed at the time of inclusion in the registry, since this information best reflects the state of the statin treatment problem in clinical practice. The target value of LDL-C was a level of <1.8 mmol/l for patients with very high risk and <2.5 mmol/l for patients with high cardiovascular risk (target values according to the clinical guidelines in force during the main period of inclusion of patients in the study [4,11]). The study's general information was based on 2,457 patients (1,250 men [50.9%] and 1,207 women [49.1%]) enrolled up to November 30, 2020. The average age of the patients was 61.4±10.5 years. A more detailed analysis was performed for the groups of patients with high and very high cardiovascular risk (that is, those who had indications for statin treatment) at the time of inclusion in the register.
Statistical data processing was performed using the SPSS 23.0 program (IBM Statistics, USA). The normality of the distribution of quantitative variables was assessed using the Shapiro-Wilk test. The variables analyzed in the study are presented as means (M) and standard deviations (SD) or as medians (Me) with interquartile range [25%; 75%]. Qualitative variables are presented as absolute and percentage values. Student's t-test and Pearson's chi-square test were used for comparative analysis and for the determination of odds ratios (OR) and 95% confidence intervals (CI) from 2×2 contingency tables. Differences were considered statistically significant at p<0.05.
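As a hedged illustration of the odds-ratio calculation from a 2×2 contingency table described above, the following minimal sketch computes an OR with a Woolf (log-based) 95% confidence interval. The function name, table layout and example counts are our own assumptions for illustration; the study itself used SPSS 23.0.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio with a Woolf (log-based) 95% CI for a 2x2 table.

    Hypothetical table layout (counts):
                 exposed  unexposed
      outcome+      a         b
      outcome-      c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    z = 1.96  # normal quantile for a 95% interval
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, a table with counts (10, 5, 5, 10) yields OR = 4.0 with a CI spanning roughly 0.88 to 18.3; an interval that excludes 1 would correspond to p<0.05, as in the paper's comparisons.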
Results
Information about the total cholesterol level at the time of inclusion in the register was available for only 1,918 (78.1%) patients: 948 men (50.6%) and 970 women (49.4%). Determination of LDL-C was performed in 1,546 patients (62.9%): 753 (48.7%) men and 793 (51.3%) women.
The frequency of ordering total cholesterol analyses was 1 analysis in 15 [11; 26]
At the time of inclusion in the outpatient registry, among the 1,561 patients with high and very high CV risk, the lipid profile parameters (total cholesterol and LDL-C) were determined in 1,221 (78.2%) and 956 (61.2%) cases, respectively. Comparative analysis revealed that, at the time of inclusion in the register, the lipid profile parameters were more often determined in patients with high cardiovascular risk than in patients with very high cardiovascular risk (Table 2).
Despite the presence of indications for statin therapy, these drugs were recommended to only 823 (52.7%) of the patients with high and very high CV risk. Patients with very high CV risk received such prescriptions 4 times more often than patients with high CV risk (OR=4.2; 95% CI=3.2-5.3; p<0.001; Fig. 1).
We also note that doctors preferred atorvastatin in their prescriptions (55.4%; n=456); rosuvastatin (29.7%; n=244) and simvastatin (14.7%; n=121) were in second and third place. Pitavastatin and lovastatin were recommended to 2 patients. Figure 2 provides information on the statins (by international nonproprietary name) that were prescribed to patients with high and very high CV risk before inclusion in the outpatient «PROFILE» register. The frequency of prescribing the different statins did not differ between the compared groups (p=0.7).
At the time of inclusion in the «PROFILE» registry, lipid profiles were present in 674 (57.8%) of 1,166 patients with very high CV risk and in 282 (71.4%) of 395 patients with high CV risk. We analyzed in what percentage of cases the LDL-C level was <1.8 mmol/l in patients with very high CV risk, and <2.5 mmol/l in patients with high CV risk (Fig. 3).
The frequency of achieving the LDL-C target level was statistically significantly higher in the group of patients with high risk than in patients with very high cardiovascular risk (OR=2.0; 95% CI=1.4-3.0; p<0.001).
Discussion
The prescription of statins to patients with high and very high cardiovascular risk is recognized and reflected in the clinical guidelines (for the treatment of such patients) of many professional communities worldwide [1][2][3][4][5][6][11][12][13]. Moreover, this recommendation carries the highest class and level of evidence. Nevertheless, the results of many observational and cross-sectional studies demonstrate that the non-prescription of statins when patients have indications for such treatment is one of the most significant problems of statin therapy in clinical practice [9,[14][15][16][17][18]. However, there is certainly a positive trend in the dynamics of statin prescriptions for patients with high or very high CV risk over the past 10-15 years. For example, according to a comparative analysis of two hospital registries (the Lyubertsy mortality studies LMS-1 and LMS-3), the frequency of statin prescription to patients with acute coronary syndrome (ACS) at the outpatient stage prior to a coronary event increased from 2.0% to 4.9% between 2005 and 2015, while remaining extremely low. Accepting that ACS could be the onset of CHD, the authors of this comparative analysis estimated the frequency of statin prescription in patients with a history of CHD: only 12.5% of such patients received this therapy. Notably, all of them survived the ACS, and persons taking statins were not among the patients who died [14].
The data obtained in our study, according to which only half of the patients with high and very high CV risk received recommendations for taking statins, are fully consistent with the results of the cross-sectional multicenter cohort study ARGO, in which statins were not prescribed to 45.2% of 18,273 patients with high and very high CV risk [9]. According to the results of the observational study SANTORINI, conducted in Europe, almost one in five of 9,606 patients with high or very high risk of cardiovascular complications (18.6%) did not receive lipid-lowering therapy [15].
According to the prospective observational multicenter study PRIORITET, 37.6% of high and very high-risk patients who had no absolute contraindications to statin treatment were not prescribed these drugs in an outpatient setting [16]. Nevertheless, brief training of the attending doctors on the main provisions of the current clinical guidelines, together with regular monitoring of patients, significantly improved both statin prescription and patient adherence to this treatment [17].
An even more difficult issue in the treatment of patients with high and very high cardiovascular risk with statins is the achievement of the target values of lipid profile parameters, primarily the LDL-C level. The tendency in the clinical guidelines of the last decade to lower the target level of this indicator (the maximum possible reduction), especially for patients with very high cardiovascular risk, means that it is achieved in clinical practice in an ever smaller percentage of cases [18,19]. Medical inertia in prescribing high-intensity lipid-lowering therapy plays an important role in this.
Of course, an integral part of the problems associated with statin therapy and the effectiveness of this treatment are the issues of patient adherence to the prescribed treatment [17,20], which were not studied here. However, we note that the issue of patient adherence is always secondary to physician adherence to the clinical guidelines stating that statin therapy should be initiated in patients with high and very high CV risk and that the target lipid profile levels should be achieved. According to the results of our analysis, atorvastatin and rosuvastatin, noted in the clinical guidelines as the statins with the maximum lipid-lowering efficacy, prevailed in medical prescriptions. However, simvastatin was recommended to one in seven patients in our study.
Study limitations. The PROFILE outpatient registry is prospective, but the study performed is a cross-sectional, single-center cohort study, the results of which reflect the characteristics of statin treatment in high and very high-risk patients in clinical practice.
Conclusion
The main problems of statin treatment in clinical practice are the non-prescription of these drugs to patients with indications for such therapy and the failure to achieve the target levels of lipid profile parameters, which may be due to the clinical inertia of doctors regarding titration of statin doses, and in some cases may be associated with the choice of drugs that are not the most effective in reducing the LDL-C level. Patients with very high CV risk are 4 times more likely to receive a statin recommendation compared to patients with high CV risk, but they reach their target LDL-C level 2 times less often.
Relationships and Activities. None. Funding. The PROFILE registry is maintained by the National Medical Research Center for Therapy and Preventive Medicine. The export of anonymized patient data (n=2457) on dyslipidemia from the «PROFILE» registry database, required for this research and its statistical processing, was performed with the sponsorship of EGIS-RUS LLC (Hungary), which did not in any way affect the results, conclusions, or the authors' own opinions.
Simulation of irregular waves over submerged obstacle on a NURBS potential numerical wave tank
In this paper, a fully non-linear three-dimensional Numerical Wave Tank (NWT) is developed for studying the propagation and scattering of non-linear random sea waves over bottom-mounted submerged bars. The simulation of the fully non-linear free surface is based on the Non-Uniform Rational B-Spline (NURBS) formulation as a novel approach and on the Mixed Eulerian-Lagrangian (MEL) method. A high-order boundary integral equation is used to solve the Laplace equation in the Eulerian frame. To update the free surface, a time-marching approach including the material node method and a fourth-order Runge-Kutta time integration scheme is used. To obtain appropriate numerical solutions for the wave propagation problem, a damping zone is set at the downstream end. The NURBS approximation is also employed to evaluate the velocity of the free-surface particles. The propagation of regular and irregular waves in the NWT is investigated and compared with the available experimental and numerical data. The transmission of random sea waves over submerged bars is also compared with experimental and prior numerical studies.
Latin American Journal of Solids and Structures 11 (2014) 2308-2332

1 INTRODUCTION The description of the free surface in the numerical simulation of free-surface flow is important for obtaining accurate solutions. Numerous remedies have been developed to find an accurate approximation of the free surface; they are sometimes complicated to implement and some of them are time consuming. Polynomial interpolation functions have been widely used as shape functions to define marine structures and boundary geometry. Eight-node quadrilateral elements were used by Ning and Teng (2006) and Ning et al. (2009) to simulate the free surface with biquadratic shape functions in a time-marching scheme. To achieve more accurate results, higher-order shape functions can be used with more nodal points. However, the numerical procedure becomes time consuming as the number of nodal points increases, and with decreasing mesh size numerical instability may occur. Also, the influence coefficient matrix of the boundary integral may become nearly singular when a fine mesh is used.
A two-dimensional potential wave tank was developed by Tang and Huang (2008) for the propagation of second-order waves and the simulation of the Bragg bottom effect on the free-surface evolution. They employed the linear boundary element method to solve the non-linear problem. A third-order shape function with twelve-node quadrilateral curvilinear elements was employed by Shao (2010) in weakly non-linear wave-body interaction. The flow field around an offshore monopile was computed using the FEM by Li et al. (2011), where eight-node quadrilateral elements with second-order shape functions were used to describe the hull boundary geometry.
In the past two decades, the Non-Uniform Rational B-Spline (NURBS) surface has been widely applied to define the complex shapes of marine structures. The diffraction problem of a floating body was solved by Datta and Sen (2006), in which the hull shape was approximated with NURBS. A review of past studies shows that the use of B-spline surfaces for free-surface modeling is a new approach in time-domain simulation. Desingularized boundary integral equations, in both direct and indirect formulations, were given by Cao et al. (1991). A three-dimensional NURBS indirect Boundary Integral Equation (BIE) was proposed by Gao and Zou (2008). Two-dimensional B-spline boundary element formulations for different degrees of continuity of the geometric boundary and variables were developed by Cabral et al. (1990, 1991). A two-dimensional super-parametric boundary element method was formulated by Damanpack et al. (2013) to solve the Poisson equation for the bending analysis of thin functionally graded plates based on Green's second identity. A two-dimensional potential numerical wave tank based on the NURBS boundary integral equation was developed by Abbasnia and Ghiasi (2014) to simulate the interaction between non-linear waves and a truncated breakwater.
To keep the solutions stable, the free-surface particle velocity has to be evaluated accurately. To obtain the tangential velocity for isoparametric linear elements, a double-node approach was developed by Grilli and Svendsen (1990). A polynomial formulation was presented by Grilli et al. (2001) for biquadratic and higher-order curvilinear elements. For the time-marching scheme, first-order and second-order finite difference formulations in time were employed in Numerical Wave Tanks (NWTs) by Wu et al. (2005) and Xiao et al. (2009) as low-order time integration methods. The fourth-order Runge-Kutta method was used by Koo (2003), and the fifth-order Runge-Kutta-Gil and fourth-order Adams-Bashforth-Moulton methods were used by Zhang et al. (2005) as high-order integration schemes to update the free surface.
Different types of wave-makers on the inflow boundary were reviewed by Tanizawa (2000) and Newman (2010). On the opposite side of the wave-maker, a wave absorber is adopted to prevent wave reflection from the end wall. An artificial damping zone has also been applied by Cointe (1991) on the free-surface boundary to minimize the wall effect on the computational domain. During the free-surface simulation of non-linear waves, non-physical saw-tooth instability may occur. Instabilities may also occur due to variable mesh sizes or the natural singular treatment at the intersection of the wave-maker and the free surface. To treat the so-called saw-tooth instability, different smoothing schemes can be used, such as the Chebyshev five-point smoothing scheme presented by Koo and Kim (2004) and the B-spline smoothing scheme applied by Tanizawa (2000).
This paper is mainly focused on the development of a three-dimensional potential NWT based on the NURBS approximation. A desingularized direct boundary integral equation coupled with the NURBS formulation is developed to solve the boundary value problem in the Eulerian frame. In the material node approach based on the Mixed Eulerian-Lagrangian (MEL) method presented by Longuet-Higgins and Cokelet (1976), nodal points on the free surface are allowed to move freely with the free-surface particles and are traced in the Lagrangian frame. Derivatives of the B-spline basis functions are used to obtain the tangential derivatives of the potential. The fourth-order Runge-Kutta time integration scheme is applied to the fully non-linear free-surface boundary conditions to obtain the instantaneous position of the moving boundary at the next time step. To avoid non-physical saw-tooth instability, the five-point Chebyshev smoothing scheme is applied to the instantaneous variables every few time steps.
A series of tests is performed to verify the present numerical procedure. The superposition of several first-order and second-order regular waves at a certain spot is measured as a focused wave. Three random waves are propagated in the present NWT and their propagation is compared with experimental measurements and available numerical results. The effect of the bottom profile on the free-surface elevation is also investigated: for a submerged bar on the bottom, the wave transformations are examined and compared with experimental data and numerical results.
NUMERICAL MODEL
Consider a three-dimensional numerical wave tank with depth d, width B and length L. A layout of the computational domain is illustrated in Figure 1. A wave generator is placed on the upstream wall of the tank. A right-handed Cartesian coordinate system (Oxyz) is defined on the mean water surface, where the origin is at a corner of the tank, the x-axis lies along the length of the tank and the positive z-axis is directed vertically upwards. An artificial damping zone is defined adjacent to the end wall of the tank.
Governing equation and boundary conditions
It is assumed that the fluid is homogeneous, incompressible and inviscid, the flow is irrotational, and surface tension on the free surface is neglected. Therefore, a velocity potential $\phi(x, y, z, t)$ can be introduced, which satisfies the Laplace equation in the fluid domain $\Omega$:

$$\nabla^2 \phi = 0 \quad \text{in } \Omega$$

The fully non-linear boundary conditions, the Kinematic Free Surface Boundary Condition (KFSBC) and the Dynamic Free Surface Boundary Condition (DFSBC), hold on the free surface $S_f$ and can be written as (Tanizawa, 2000):

$$\frac{\partial \eta}{\partial t} + \frac{\partial \phi}{\partial x}\frac{\partial \eta}{\partial x} + \frac{\partial \phi}{\partial y}\frac{\partial \eta}{\partial y} - \frac{\partial \phi}{\partial z} = 0$$

$$\frac{\partial \phi}{\partial t} + \frac{1}{2}\left|\nabla \phi\right|^2 + g\,\eta = 0$$

where $\eta(x, y, t)$ is the free-surface elevation and $g$ is the gravitational acceleration. On the rigid boundaries the impermeability condition $\partial \phi / \partial n = 0$ holds, where $n$ is the normal vector directed outward from the fluid. The inflow boundary condition can be written as (Tang and Huang, 2008):

$$\frac{\partial \phi}{\partial n} = \frac{\partial \phi_I}{\partial n}$$

where $\phi_I$ is the theoretical input wave potential. For the instantaneous free surface, the initial conditions are defined as $\phi = 0$ and $\eta = 0$ at $t = 0$. The direct boundary integral equation based on Green's second identity is used to solve the boundary value problem in the Eulerian frame (Brebbia and Dominguez, 1992):

$$c(q)\,\phi(q) = \iint_{\Gamma} \left[ \frac{\partial \phi(p)}{\partial n}\, G(q, p) - \phi(p)\, \frac{\partial G(q, p)}{\partial n} \right] d\Gamma$$

where $c(q)$ equals zero when the point is outside the fluid domain; for a point on or inside the boundary domain it is the solid angle, i.e. $2\pi$ for a point on a smooth boundary and $4\pi$ for a point inside the boundary. For three-dimensional problems, $G(q, p)$ is given as (Brebbia and Dominguez, 1992):

$$G(q, p) = \frac{1}{\left| \mathbf{c}_q - \mathbf{c}_p \right|}$$

where $\mathbf{c}_q$ and $\mathbf{c}_p$ are the locations of the source and field points, respectively.
Desingularized NURBS boundary integral equation
An arbitrary nodal point on the curved free surface can be described by a NURBS surface as (Piegl and Tiller, 1996):

$$\mathbf{S}(u, v) = \frac{\sum_{i=0}^{n} \sum_{j=0}^{m} N_{i,p}(u)\, N_{j,q}(v)\, w_{i,j}\, \mathbf{P}_{i,j}}{\sum_{i=0}^{n} \sum_{j=0}^{m} N_{i,p}(u)\, N_{j,q}(v)\, w_{i,j}}$$

where $\mathbf{P}_{i,j}$ are the control points, $w_{i,j}$ the associated weights, and $N_{i,p}(u)$ the B-spline basis functions of degree $p$, defined recursively as (Piegl and Tiller, 1996):

$$N_{i,0}(u) = \begin{cases} 1 & u_i \le u < u_{i+1} \\ 0 & \text{otherwise} \end{cases}$$

$$N_{i,p}(u) = \frac{u - u_i}{u_{i+p} - u_i}\, N_{i,p-1}(u) + \frac{u_{i+p+1} - u}{u_{i+p+1} - u_{i+1}}\, N_{i+1,p-1}(u)$$

where $u_i$ is the knot given by Piegl and Tiller (1996).
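The Cox-de Boor recursion for the B-spline basis functions described above can be sketched as follows. This is a minimal illustration of the standard recursion from Piegl and Tiller (1996), not the authors' implementation; the function name and the convention of returning zero for degenerate knot spans are our own choices.

```python
def bspline_basis(i, p, u, knots):
    """Evaluate the B-spline basis function N_{i,p}(u) by Cox-de Boor recursion.

    knots is the non-decreasing knot vector; repeated knots give zero-width
    spans, whose 0/0 terms are taken as zero by convention.
    """
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right
```

On a clamped knot vector [0, 0, 0, 1, 1, 1] with degree 2 the basis reduces to the Bernstein polynomials, and the values at any interior u form a partition of unity, which is what makes the rational (weighted) surface form above well defined.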
The unit normal vector $\mathbf{n}$ and the tangential vectors $\mathbf{s}$ and $\mathbf{t}$ for an arbitrary point on the surface can be obtained as:

$$\mathbf{s} = \frac{\mathbf{S}_u}{\left|\mathbf{S}_u\right|}, \qquad \mathbf{t} = \frac{\mathbf{S}_v}{\left|\mathbf{S}_v\right|}, \qquad \mathbf{n} = \frac{\mathbf{S}_u \times \mathbf{S}_v}{\left|\mathbf{S}_u \times \mathbf{S}_v\right|}$$

where $\mathbf{S}_u$ and $\mathbf{S}_v$ are the partial derivatives of the NURBS surface in the $u$ and $v$ directions shown in Figure 2. The location of a nodal (field) point $p$ on the surface is expressed through the NURBS surface form (Piegl and Tiller, 1996). While the source point $\mathbf{c}_q$ is held outside of $\Omega$ and $\mathbf{c}_p$ lies on the boundary integration surface, $c(q) = 0$ and the boundary integral equation can be written as:

$$0 = \iint_{\Gamma} \left[ \frac{\partial \phi(p)}{\partial n}\, G(q, p) - \phi(p)\, \frac{\partial G(q, p)}{\partial n} \right] d\Gamma$$

This equation can be rearranged on the boundaries (Cao et al., 1991), where $\Gamma_N$ represents the boundaries on which the normal flux of the potential is known and $\Gamma_D$ indicates the boundaries on which the known potential is applied. Since $\mathbf{c}_q$ and $\mathbf{c}_p$ are not coincident, the integrand singularity is removed. To find the location of $\mathbf{c}_q$, the desingularization distance is used, which is a function of the local grid sizes on the boundary surface as shown in Figure 3. The distance of a source point from the boundary has been proposed as (Cao et al., 1991):

$$L_d = l_d \left( D_m \right)^{\nu}$$

where $l_d$ and $\nu$ are constants equal to 1.0 and 0.5, respectively, and $D_m$ is the square root of the panel area.

The boundary integral is computed by a Gaussian quadrature scheme. Gaussian points are distributed over the boundary surfaces and the field points $\mathbf{c}_p$ are located on each boundary surface. Thus, the locations of the source-point cloud can be obtained by shifting each field point a distance $L_d$ along the outward normal:

$$\mathbf{c}_q = \mathbf{c}_p + L_d\, \mathbf{n}(p)$$

The discretized form of the boundary integral then follows by summing the Gaussian integration weight factors over the segments of each surface. For example, the bottom surface is divided into $M \times N$ segments in the $u$ and $v$ directions, where $M$ and $N$ are the numbers of segments in the two directions and $\phi_n$ is the normal derivative of the potential; the integrals over the remaining boundaries are discretized similarly.
Time marching scheme
The material node approach is used to transform the free-surface conditions from the Eulerian form to the Lagrangian form. Nodal points are allowed to move freely with the particles of the free surface. Then, the free-surface conditions can be written as (Tanizawa, 2000):

$$\frac{D \mathbf{c}}{D t} = \nabla \phi, \qquad \frac{D \phi}{D t} = -g\,z + \frac{1}{2}\left|\nabla \phi\right|^2$$

where $\mathbf{c}$ is the location of the particles on the free surface. The position of the particles is updated by evaluating the particle velocity on the free surface and applying the fourth-order Runge-Kutta time integration approach.
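The fourth-order Runge-Kutta update used in the time-marching scheme can be sketched generically as below. This is the classical RK4 step, not the authors' code; in the MEL setting, y would collect the free-surface node positions and potentials and f would evaluate the right-hand sides of the Lagrangian free-surface conditions.

```python
def rk4_step(f, y, t, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y),
    where y is a list of state values (e.g. node coordinates and potentials)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (k1i + 2 * k2i + 2 * k3i + k4i)
            for yi, k1i, k2i, k3i, k4i in zip(y, k1, k2, k3, k4)]
```

The fourth-order accuracy is what allows the relatively large time steps (a fraction of the wave period) reported in the convergence tests without the phase drift a first- or second-order scheme would accumulate.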
Tangential derivatives
Since the geometry of the free surface is described by NURBS, the normal and tangential unit vector components are provided directly by the surface derivatives. The potential variation on the free surface can also be described by NURBS as:

$$\phi(u, v) = \sum_{i=0}^{n} \sum_{j=0}^{m} R_{i,j}(u, v)\, Q_{i,j}$$

where $R_{i,j}$ are the rational basis functions and $Q_{i,j}$ is the potential on the control points, evaluated from the potential of the nodal points, which do not participate in the basis function. The directional derivatives of $\phi$ with respect to $u$ and $v$ are obtained from the derivatives of the basis functions (Piegl and Tiller, 1996). The particle velocity on the boundary is then reconstructed from the two tangential derivatives of the potential together with the normal derivative $\phi_n$, in which $\phi_n$ is the normal derivative of the potential obtained from the boundary integral solution.
Artificial wave generator
Extreme waves described by the linear and second-order Stokes wave theories are imposed on the inflow boundary. The first-order potential and wave profile are written as:

$$\phi^{(1)} = \frac{g A}{\omega}\, \frac{\cosh k (z + d)}{\cosh k d}\, \sin(k x - \omega t), \qquad \eta^{(1)} = A \cos(k x - \omega t)$$

where $A$ is the wave amplitude, $k$ the wave number and $\omega$ the wave frequency, while $\phi^{(2)}$ and $\eta^{(2)}$ correspond to the second-order Stokes wave properties (Ning et al., 2009). A ramping function is engaged to avoid impulse-like behavior, keep the stability of the solutions and reach the steady state properly. In the present modeling, the ramping function given by Tang and Huang (2008) is used:

$$R_m(t) = \begin{cases} \dfrac{1}{2}\left[1 - \cos\left(\dfrac{\pi t}{T_m}\right)\right] & t \le T_m \\[4pt] 1 & t > T_m \end{cases}$$

where $T_m$ is the modulation time, which equals the incident wave period in the present study.
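The ramped first-order wave input can be sketched as follows. The ramp matches the cosine form above; the Airy elevation formula is the standard linear one, and the function names are our own.

```python
import math

def ramp(t, Tm):
    """Ramping function: 0.5*(1 - cos(pi*t/Tm)) for t <= Tm, 1 afterwards."""
    return 0.5 * (1.0 - math.cos(math.pi * t / Tm)) if t <= Tm else 1.0

def linear_wave_elevation(A, k, omega, x, t, Tm):
    """First-order (Airy) wave elevation A*cos(kx - wt) with the start-up
    ramp applied, as imposed on the inflow boundary."""
    return ramp(t, Tm) * A * math.cos(k * x - omega * t)
```

The ramp grows smoothly from 0 at t = 0 to 1 at t = Tm, so the wave-maker starts from rest without the impulsive transient that would otherwise excite spurious free waves in the tank.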
Artificial damping zones
To obtain proper numerical solutions in a numerical wave tank, an artificial damping zone (sponge layer) is adopted at the end of the wave tank. The energy dissipation scheme consists of adding an artificial damping term to the fully non-linear free-surface boundary conditions over the region of the free surface adjacent to the rigid wall boundaries and the inflow boundary. The modified non-linear free-surface boundary conditions with damping coefficient $\nu(x)$ presented by Cointe (1991) are as follows:

$$\frac{D \mathbf{c}}{D t} = \nabla \phi - \nu(x) \left( \mathbf{c} - \mathbf{c}_e \right)$$

$$\frac{D \phi}{D t} = -g\,z + \frac{1}{2}\left|\nabla \phi\right|^2 - \nu(x) \left( \phi - \phi_e \right)$$

where the subscript $e$ corresponds to the reference configuration for the fluid. The damping coefficient is scaled with a characteristic wave frequency $\omega$ and a characteristic wave number $k$; the parameters $\alpha$ and $\beta$ control the strength and extent of the damping zone, respectively.

If the reference values are set to zero, the damping zone acts as a simple absorber. If a propagating wave is used as the reference value, then the damping zone allows only this wave to pass through. In practice, the damping coefficient equals zero except in the damping zone, over which it is continuous and continuously differentiable.
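A common implementation of such a damping coefficient is a quadratic ramp over the sponge layer, sketched below. This is an assumption: the exact profile of Cointe (1991) used in the paper is not recoverable from the extracted text, but the roles of alpha (strength, scaling the frequency) and beta (extent, in wavelengths) match the description above.

```python
def damping_coefficient(x, x0, alpha, beta, omega, wavelength):
    """Quadratic damping-ramp coefficient nu(x) for a sponge layer starting
    at x0 and extending beta wavelengths downstream. Zero upstream of the
    zone; grows to alpha*omega at (and beyond) the end wall."""
    length = beta * wavelength
    if x < x0:
        return 0.0
    xi = min((x - x0) / length, 1.0)  # normalized position in the zone
    return alpha * omega * xi ** 2
```

The quadratic onset keeps nu(x) continuous and continuously differentiable at the upstream edge of the zone, which is exactly the property the text requires to avoid spurious reflection at the sponge-layer entrance.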
Convergence test and model verification
To perform the numerical simulation, the numerical wave tank of Ning and Teng (2006) is reproduced.

The dimension of the wave tank is taken as 15.45. When the wave is fully developed, the analytical solution and the numerical computations with different numbers of nodal points are tabulated in Table 1. The root mean square (RMS) of the wave elevation error over a complete wave period at the numerical probe is defined as:

$$\mathrm{RMS} = \sqrt{\frac{1}{K} \sum_{k=1}^{K} \left( \eta_k^{\mathrm{num}} - \eta_k^{\mathrm{ana}} \right)^2}$$

where $K$ is the number of time steps needed to complete a wave period with time step $T/40$. The NURBS NWT is run on a desktop PC (Intel Core 2 Quad CPU, 2.66 GHz, and 2 GB of RAM).

It is shown that to reduce the RMS of the $4 \times 20$ nodal-point case by about half, more than twice the CPU time per time step is required with $4 \times 48$ nodal points. The accuracy of the present model for different wave slopes and wave frequencies of the second-order Stokes wave is examined and given in Table 2: three wave slopes are chosen and the RMS of the wave elevations is determined for $4 \times 30$ nodal points. To obtain proper solutions in an NWT, damping zones should be adopted. A part of the wave energy returns to the computational domain from the downstream boundary if the damping of the wave energy is too weak; on the other hand, if the absorbing strength is too powerful, the damping zone acts as a solid boundary and the waves reflect from the outflow boundary. The length and strength of the damping zone control its performance. Therefore, the wave elevation of the free surface is compared for different parameters $\alpha$ and $\beta$ in Figure 6 and Figure 7, respectively. In Figure 6, the free-surface oscillations along the numerical tank ($x/d$) due to the input second-order Stokes wave are shown for $\beta = 2$ and different $\alpha$. When the strength of the damping zone is increased, the zone acts like a rigid wall and some portion of the incident wave is reflected: for $\alpha = 3$, an increment of the wave height occurs due to wave reflection, while for $\alpha = 1$ and $\alpha = 1.5$ the reflection from the wave absorber is reduced. For different lengths of the damping zone, the wave elevation for $\alpha = 1$ is given along the numerical wave tank in Figure 7. For $\beta = 1$, the wave absorber does not dissipate the incident wave and, for long simulations, severe wave reflection occurs. For $\beta = 2$ and $\beta = 3$, the incident wave is fully dissipated within the wave absorber and the open water condition is maintained.
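The RMS error measure used in the convergence test above can be sketched directly; the function name and list-based interface are our own, with the samples understood to span one wave period at the probe.

```python
import math

def rms_error(eta_num, eta_ana):
    """Root-mean-square deviation of the computed wave elevation from the
    analytical elevation over K samples spanning one wave period."""
    K = len(eta_num)
    return math.sqrt(sum((n - a) ** 2 for n, a in zip(eta_num, eta_ana)) / K)
```

With a time step of T/40 this uses K = 40 samples per period, so halving the RMS is a direct, probe-local measure of the gain from refining the free-surface discretization.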
Focused wave generation and propagation
In the experimental work, the measured non-linear wave data are transformed into the Fourier domain as presented by Ning et al. (2009) and the wave spectrum $S(f)$ is obtained. It represents wave components with different amplitudes and periods which interact with each other and consequently produce an extreme wave. The amplitude of each wave component $a_i$ is obtained as:

$$a_i = A\, \frac{S(f_i)\, \Delta f}{\sum_{j} S(f_j)\, \Delta f}$$

where $A$ is the overall linear focused wave amplitude given by Ning et al. (2009). Gaussian points are adopted on the free surface and a rational B-spline basis function of order $3 \times 3$ is used to simulate the free-surface elevation. The other boundaries are flat and described by isoparametric linear quadrilateral elements, and a fixed time step is used in the high-order time integration scheme. The maximum linear and non-linear wave elevations $\eta_{\max}$ for the cases given in Table 3 are computed and compared with the physical measurements and numerical results of Ning et al. (2009) and Westphalen et al. (2012); the comparisons are presented in Table 4. The numerical results include computations with a three-dimensional potential numerical wave tank based on the boundary integral equation with isoparametric quadratic elements developed by Ning et al. (2009), as well as numerical wave tank calculations based on commercial CFD packages with both FV and CV-FE solvers conducted by Westphalen et al.
(2012). For case 2, the highest crest of the first-order wave input computed with the NURBS NWT is higher than the evaluations with the CV-FE and FV solvers. For the second-order wave components, the NURBS NWT predicts the maximum wave elevation closer to the experimental measurement than the FV solver. For case 3, the trend is similar. For case 4, the numerical results show good agreement with the experimental data; it should be mentioned that in this case the focused wave almost broke in the real wave tank and non-linearity dominates the simulation. Time series of the wave elevations at the focal point for cases 2-4 are depicted in Figure 9 and Figure 10, respectively, comparing the computed crests and troughs of the focused waves using linear and non-linear theory with the experimental results. For case 2, the computed wave crest at the center of the wave group reaches the experimental measurement with the NURBS NWT. When the input wave includes the second-order wave components, the prediction of the central wave group trough coincides with the physical wave tank measurement, as shown in Figure 10. The results for the surrounding crests and troughs are generally improved from the first order to the second order. For both input wave cases, the surrounding trough elevations are slightly higher than in the physical experiments relative to the surrounding crests, and the NURBS NWT somewhat decreases these small differences.
For case 3, the numerical evaluations are closer to the physical experiments than in case 2, and the slight differences in the surrounding troughs and crests are reduced. This shows that moving from linear input wave components to second-order wave components improves the trough elevations.
In the physical measurement, wave breaking almost occurs owing to the steepness of the wave in case 4, so severe nonlinearity is present in the simulation. Nevertheless, the computed central crest agrees reasonably with the experimental measurements for both input wave cases. There are substantial differences in the surrounding crests and troughs, as in the wave trends found by Ning et al. (2009). The capability of NURBS to capture sharp variations of the free surface brings the computations into agreement with the physical measurements. Moreover, the symmetry around the maximum crest of the wave group in case 2 is retained by the NURBS NWT, while for the steeper wave it is lost as nonlinearity dominates the numerical modeling. An experimental investigation of the propagation of incident irregular waves over a submerged bar was carried out by Beji and Battjes (1993), and numerical simulations of it based on Boussinesq equations and mild-slope equations were conducted by Hsu et al. (2007) and (2002), respectively. The JONSWAP spectrum of Hsu et al. (2007), given in Equation 35, is chosen to pass over a submerged bar. The dimensions of the numerical wave tank equal those of the experimental wave flume of Beji and Battjes (1993): a length of 37.7 m, a breadth of 0.8 m and a water depth of 0.4 m. The spectrum of the fully propagated incident wave has a significant wave height of H_{1/3} = 0.03 m, with the peak period T_p related to the significant period through the coefficients of Equation 35. The deviation between the theoretical and numerical spectra remains in the higher-frequency range, but the NURBS NWT model appears closer to the proposed spectrum. To measure the irregular wave evolution due to a submerged trapezoidal bar, eight wave probes are arranged along the wave flume as shown in Figure 12.
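The JONSWAP spectral density just mentioned can be sketched as below. Since Equation 35 of Hsu et al. (2007) is not reproduced in the text, the block uses a common H_s/T_p parameterization (Goda's approximation), which may differ from the paper's coefficients; treat it as an illustration rather than the paper's exact form.

```python
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """Approximate JONSWAP spectral density S(f) (f > 0), Goda's form.

    hs: significant wave height, tp: peak period, gamma: peak enhancement.
    The normalization is chosen so that 4*sqrt(m0) roughly recovers hs.
    """
    f = np.asarray(f, dtype=float)
    fp = 1.0 / tp                               # peak frequency
    sigma = np.where(f <= fp, 0.07, 0.09)       # spectral width parameter
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    beta = (0.0624 * (1.094 - 0.01915 * np.log(gamma))
            / (0.230 + 0.0336 * gamma - 0.185 / (1.9 + gamma)))
    return (beta * hs**2 * tp**-4 * f**-5
            * np.exp(-1.25 * (tp * f) ** -4) * gamma**r)

# usage: the H_1/3 = 0.03 m incident wave of the submerged-bar test
# (the peak period here is an illustrative value, not the paper's)
f = np.linspace(0.05, 2.0, 2000)
S = jonswap(f, hs=0.03, tp=2.0)
```

The peak-enhancement factor γ^r sharpens the Pierson-Moskowitz shape around f_p without moving the peak, which is why the spectrum still attains its maximum at the peak frequency 1/T_p.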
It is worth mentioning that wave breaking does not occur during the simulation, and the observation is conducted at the first wave probe. A comparison of the experiments and the numerical results is given in Figure 13. At the peak of the energy density at every wave probe, the NURBS NWT performs better than the other numerical computations: the mild-slope simulation overestimates the peak at wave probes 5-8 and the Boussinesq simulation at wave probes 2 and 4, and neither simulation reaches the measured peak at wave probes 1 and 3. At higher frequencies, substantial deviations occur at wave probes 5-8 in the numerical simulations, and these are partly reduced by the NURBS NWT.
CONCLUSIONS
The development of a fully non-linear 3D NWT for investigating the propagation and scattering of non-linear random sea waves over bottom-mounted submerged bars is considered in this paper. The simulation of fully non-linear waves using NURBS in a potential three-dimensional numerical wave tank was successfully completed.
The MEL method and the desingularized boundary element method are employed for the numerical simulation. To maintain numerical accuracy and avoid instability in MEL, a five-point Chebyshev smoothing scheme was adopted in the time marching. To obtain appropriate numerical solutions for the wave propagation problem in a numerical wave tank, artificial damping zones (sponge layers) are adopted. Perturbation sources are placed on the fixed inflow boundary to set the free surface oscillating during the simulation. Also, NURBS is used to evaluate the velocity of the free-surface particles accurately, which is a novel procedure for calculating the free-surface kinematics.
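The five-point smoothing step can be sketched as below. The weights are those of the classical five-point formula of Longuet-Higgins and Cokelet (1976), commonly used for exactly this purpose in MEL time marching; whether the paper's Chebyshev scheme uses these exact coefficients is an assumption.

```python
import numpy as np

def smooth5(eta):
    """Five-point smoothing of nodal free-surface values.

    Interior nodes are replaced by
        (-f[j-2] + 4 f[j-1] + 10 f[j] + 4 f[j+1] - f[j+2]) / 16,
    which annihilates sawtooth (two-grid-interval) noise while reproducing
    linear variations exactly; the two nodes at each end are left unchanged.
    """
    eta = np.asarray(eta, dtype=float)
    out = eta.copy()
    out[2:-2] = (-eta[:-4] + 4.0 * eta[1:-3] + 10.0 * eta[2:-2]
                 + 4.0 * eta[3:-1] - eta[4:]) / 16.0
    return out

# usage: sawtooth noise superposed on a linear profile is removed
x = np.arange(10, dtype=float)
noisy = x + 0.1 * (-1.0) ** np.arange(10)
print(np.max(np.abs(smooth5(noisy)[2:-2] - x[2:-2])))  # ~0 at interior nodes
```

Because the weights sum to 16/16 = 1 and are symmetric, constant and linear profiles pass through unchanged, which is what makes the filter safe to apply repeatedly during time marching.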
The stability and accuracy of the NURBS NWT in modeling the free surface were examined and verified. It is shown that the present approach gives accurate solutions while saving computation time and reducing the number of computational nodes. Simulations of the propagation of irregular waves over a submerged bar showed good agreement with the experimental measurements.
Figure 1: A definition sketch and coordinates system.
Latin American Journal of Solids and Structures 11 (2014) 2308-2332
the surface, and m and n are the numbers of control points in the u and v directions, respectively; k and l are the orders of the B-spline basis functions.
Figure 2: Partial derivatives and unit normal vector.
Figure 3: Distribution of source points and collocation points. Ox are the Gaussian points located on the bottom surface. For the free-surface boundary, the second integral on the LHS of Equation 16 is discretized using the derivatives of the B-spline basis functions, and the derivatives of the free-surface boundary values in the tangential directions are computed from them. x_0 and x_1 indicate the edges of the damping zones in the horizontal plane on the free surface; the terms f_e and c_e are reference values, set here for the calm-water condition.
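The damping (sponge) zone bounded by x_0 and x_1 can be sketched as below. A quadratic ramp of the damping coefficient from zero at the zone entrance to its full strength at the end wall is a common choice in MEL wave tanks; the exact profile and strength used in the paper are not reproduced in the text, so the form and the parameter names here are assumptions.

```python
import numpy as np

def damping_coefficient(x, x0, x1, alpha, omega):
    """Damping coefficient nu(x) of an end-wall sponge layer (illustrative).

    nu rises quadratically from 0 at the zone entrance x0 to alpha*omega at
    the end wall x1, and is zero before the zone. In the free-surface
    conditions it would multiply the departures of the potential and
    elevation from their calm-water reference values (f_e, c_e).
    """
    x = np.asarray(x, dtype=float)
    s = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)   # 0..1 across the zone
    return alpha * omega * s**2

# usage: a damping zone occupying the last third of a 12 m tank
nu = damping_coefficient(np.array([5.0, 8.0, 11.0, 12.0]), 8.0, 12.0, 1.0, 2.0)
print(nu)  # 0 before the zone, ramping up to alpha*omega at the wall
```

A smooth ramp matters: starting the damping at full strength at x_0 would reflect waves off the zone entrance, defeating the purpose of the absorber.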
Figure 4: Comparison of analytical and numerical modeling of a linear wave, h/A vs. dimensionless time.
The increase in RMS is infinitesimal. The effect of the time step on the numerical solution is given in Figure 5, which compares solutions for an input second-order Stokes wave with kA = 0.104 and different time steps at a water depth of 0.5; the mesh size and order of the B-spline basis functions are the same as for Table 2. For a time step of T/90, full development of the input wave is postponed owing to the coarse time step. For t/T ≥ 10, the wave is fully developed for the three time steps and the numerical solutions are independent of the time step.
S_i(f) is the spectral density of each wave component i and Δf is the frequency step, which depends upon the number of wave components n. The wave spectra for the flume are presented in Figure 8, and the characteristics of each wave spectrum are listed in Table 3, based on Westphalen et al. (2012), where T_p is the peak period of the wave spectrum and l_p is the wavelength corresponding to the peak period. In this paper, an artificial wave generator is located at x = 0 and an artificial damping zone with a length twice that of the wave is placed at the end of the tank. To verify the proposed numerical procedure, a tank with a length of 12 is provided, and the focus point and focus time are prescribed for cases 3 and 4, respectively. The phase angle ε_i is taken equal to zero.
Figure 5: Comparison of numerical modeling of a linear wave for different time steps, h/A vs. dimensionless time.
Figure 6: Performance of the end wall damping zone for different strength of the damping zone.
Figure 7: Performance of the end wall damping zone for different length of the damping zone.
Figure 8: Wave spectrum of focused wave and properties of simulated cases.
Figure 9: Comparison of time history of wave elevation of the first-order focused wave cases 2-4.
Figure 10: Comparison of time history of wave elevation of the second-order focused wave cases 2-4.
Figure 11: Comparison of the JONSWAP spectrum recorded in the physical and numerical wave tanks. There is a trivial mismatch at the spectral peak in the computations of Hsu et al. (2007), which is improved by the present model.
Figure 13: Comparison of irregular wave transformation when propagating over the submerged bar.
Table 1: RMS error and CPU time of linear wave simulation.
Table 2: RMS error and CPU time of linear wave simulation.
Table 3: Properties of wave cases for simulation.
Table 4: Maximum wave elevation of the three focused wave cases.
Caregiver Burden in Alzheimer's Disease: Differential Associations in Adult-Child and Spousal Caregivers in the GERAS Observational Study
Background/Aims To examine factors influencing the caregiver burden in adult-child and spousal caregivers of community-dwelling patients with Alzheimer's disease (AD). Methods Baseline data from the 18-month, prospective, observational GERAS study of 1,497 patients with AD in France, Germany, and the UK were used. Analyses were performed on two groups of caregivers: spouses (n = 985) and adult children (n = 405). General linear models estimated patient and caregiver factors associated with subjective caregiver burden assessed using the Zarit Burden Interview. Results The caregiver burden increased with AD severity. Adult-child caregivers experienced a higher burden than spousal caregivers despite spending less time caring. Worse patient functional ability and more caregiver distress were independently associated with a greater burden in both adult-child and spousal caregivers. Additional factors were differentially associated with a greater caregiver burden in both groups. In adult-child caregivers these were: living with the patient, patient living in an urban location, and patient with a fall in the past 3 months; in spouses the factors were: caregiver gender (female) and age (younger), and more years of patient education. Conclusion The perceived burden differed between adult-child and spousal caregivers, and specific patient and caregiver factors were differentially associated with this burden.
Introduction
Alzheimer's disease (AD), the most common cause of dementia, presents an enormous health, social, economic, and personal challenge given the large and growing number of older people affected by the disease worldwide [1][2][3] . Informal caregivers, typically family members, play a major role in caring for AD patients living at home, and the cost of informal care in AD represents the largest component of societal costs [4,5] . Caregiver burden can be defined as the caregiver's perception of the physical, emotional, economic, and social cost of the caregiving relationship [6] . Identifying patient and caregiver characteristics associated with the caregiver burden is an important step towards determining interventions that may alleviate the burden of caring for patients with AD and may result in cost savings [7] .
Increased caregiver burden has been associated with increased risk of AD patient hospitalization [8] and a faster time to institutionalization and death in AD patients [8][9][10] , as well as early mortality for caregivers themselves [11] . This is despite the positive aspects of caregiving, which include companionship, reward, and enjoyment [6,12] .
The caregiver burden increases with greater AD severity, and both patient and caregiver characteristics have been found to explain greater caregiver burden, with some differences depending on the caregiver-patient relationship [6,13,14] . Although the caregiver relationship to the patient with AD is often reported amongst the potential factors affecting caregiver burden, findings of different studies are conflicting. In some studies, adult children experienced the highest level of burden [15][16][17][18] , in others spouses had a higher burden [19][20][21][22][23] , whereas some other studies reported no significant differences in burden between these caregiver groups [24][25][26] or that the caregiver relationship to the patient was not a determinant of caregiver burden [27][28][29] . Conde-Sala et al. [13] directly compared factors associated with burden in caregivers who were adult children or spouses of AD patients. Using the Zarit Burden Interview (ZBI), a widely used instrument for measuring subjective caregiver burden in AD [30] , a greater level of burden was found in adult-child caregivers than in spousal caregivers [13] . This highlights the caregiver relationship to the patient as an important factor when assessing caregiver burden. However, further studies are required as the Conde-Sala study was conducted in an individual country (Spain) and included relatively small samples of adult-child and spousal/partner caregivers [13] . Differences in the burden perceived by spousal and adult-child caregivers may also be influenced by disease severity [13,31] . Previous studies of community-dwelling AD patients focused mainly on patients with milder forms of the disease or did not directly report burden according to AD severity, but we have recently found in the GERAS study, a large multinational observational study that included a relatively large number of patients with more severe AD, that caregiver burden increased with increasing AD severity [32] . 
Exploring factors influencing caregiver burden separately in adult-child and spousal caregivers of a large population of patients with mild, moderate and more severe AD will identify differential factors and allow better targeting or tailoring of interventions to reduce caregiver burden.
The objectives of this study were to explore the associations between patient and caregiver characteristics and clinical factors, and subjective caregiver burden in adult-child and spousal caregivers participating in the GERAS study.
Study Design and Participants
GERAS is an 18-month prospective, multicenter, naturalistic, observational cohort study reflecting the routine care of patients with AD in France, Germany, and the UK. The main aims of the study were to evaluate costs and resource use associated with AD for patients and caregivers. The study design and methods have been described in detail elsewhere [4] .
Briefly, investigators, mostly from specialist secondary care clinics ('memory clinics'), enrolled community-dwelling patients aged at least 55 years, diagnosed with probable AD according to the National Institute of Neurological and Communicative Disorders and Stroke and Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria [33] , with a Mini-Mental State Examination (MMSE) [34] score of ≤ 26, and who presented within the normal course of care. Patients with other potential causes of dementia were excluded from the study. Enrolled patients were also required to have a primary caregiver who was willing to participate in the study and be responsible for the patient for at least 6 months of the year. 'Primary caregiver' was defined as an informal carer who, according to the family, was the person who normally takes most responsibility for the day-to-day decisions and provision of home care for the patient.
All patients (or their legal representative) and caregivers were required to provide written informed consent before entering the study, which was approved by ethics review boards in each country according to country-specific regulations. Study enrolment was from October 2010 to September 2011. Patients and caregivers were evaluated at baseline and could attend up to three assessment visits at 6-month intervals during the 18-month study period, thus reflecting the routine care of patients with AD. This article reports a post hoc analysis of the baseline data from this study.
Patients were stratified according to disease severity at baseline using MMSE criteria based on UK clinical guidelines [35]: mild AD (MMSE score 21-26), moderate AD (MMSE score 15-20), or moderately severe/severe AD (MMSE score <15). As the study aimed to recruit equal numbers of patients in each of the three severity groups, the sample size was calculated on this basis together with a potential dropout rate of 20% each year during the study, as reported previously [4].
Data Collected
Patient information collected at baseline included age, gender, marital status, number of years of education, time since AD diagnosis, living arrangements, comorbidities and whether or not they had experienced a fall in the past 3 months. Cognitive function was tested using the MMSE [34] and, for patients with mild or moderate disease severity, the cognitive subscale of the Alzheimer's Disease Assessment Scale [36] ; these scales were completed by the investigators. Patient functional ability was assessed by caregivers using the Alzheimer's Disease Co-operative Study of Activities of Daily Living Inventory (ADCS-ADL) [37] . Behavioral symptoms, and caregiver distress caused by these symptoms, were evaluated by caregivers using the Neuropsychiatric Inventory (NPI) [38,39] .
Caregiver information collected at baseline included age, gender, relationship to the patient, marital status, whether they live with the patient (yes/no), whether they are the sole caregiver (yes/no), employment status (working/not working for pay), and medical conditions.
Caregiver burden was assessed using the 22-item ZBI [30] , a self-reported instrument that includes questions on caregiver stress, time available for self, and impact of caring on the caregivers' social life. Responses to each item are recorded on a 5-point scale (0 = never, 4 = nearly always). The ZBI total score has a range of 0-88, with higher scores indicating greater burden.
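The ZBI scoring just described is a plain sum and can be expressed directly; the helper below is a hypothetical illustration (not part of any study instrument) that validates and totals the 22 item responses.

```python
def zbi_total(responses):
    """Total score for the 22-item Zarit Burden Interview.

    Each item is rated 0 (never) to 4 (nearly always); the total is the
    sum over all items, ranging 0-88, with higher scores indicating
    greater caregiver burden.
    """
    if len(responses) != 22:
        raise ValueError("the ZBI has exactly 22 items")
    if any(not (0 <= r <= 4) for r in responses):
        raise ValueError("each item must be rated 0-4")
    return sum(responses)

print(zbi_total([4] * 22))  # -> 88, the maximum possible burden score
```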
Caregivers completed both the caregiver and patient-proxy versions of the EuroQol-5 Dimension Questionnaire (EQ-5D) [40] , providing estimates of both the caregiver's and patient's health-related quality of life (HRQoL).
Caregiver resource use during the month before the baseline visit was assessed using the Resource Utilization in Dementia (RUD) instrument [41] . The RUD was administered by the physician and answered by the caregiver. This included information on the time spent caring for the patient in three distinct areas of care: the number of hours spent on assisting basic activities of daily living (ADL), the number of hours spent on instrumental ADL, and supervision time.
Statistical Analysis
Analyses were performed on two groups of caregivers: adult children and spouses. Caregivers whose relationship to the patient was reported as friend or any other relationship were excluded from this analysis. Relationship status was collected using the RUD questionnaire, and the responses for husbands and wives were combined to form the spousal group (which may have included the patient's partner in some cases as the RUD does not specify how people who are partners or cohabiting with the patient should respond).
Descriptive statistics (means and SDs or frequencies) were used to summarize all variables, including demographic and clinical characteristics, and were based on nonmissing observations. Comparisons between caregiver groups used Cochran-Mantel-Haenszel tests for categorical data, stratified by country and MMSE severity group. For continuous variables, the p values were taken from the type III sum of squares general linear regression model (using PROC GLM in SAS), which assumes the data are normally distributed, and includes the factors caregiver-patient relationship, MMSE severity group and country.
A backward selection method was used for separate multivariate general linear models (GLMs, using PROC GENMOD with the identity link function) for the adult-child and the spouse cohorts to identify which patient and caregiver factors were independently associated with the ZBI total score. This model assumes that the probability distribution is normal. Country and MMSE severity group were forced into the model, while other factors were removed from the model based on an exclusion criterion of p > 0.05. In addition to country and MMSE severity group, the variables included in the initial model prior to the backward selection process were: (1) patient factors: age, gender, living location (urban/rural), ADCS-ADL total score, years of education, duration of disease, experienced a fall in the past 3 months, and number of comorbidities, and (2) caregiver factors: age, gender, lives with patient (yes/no), sole caregiver (yes/no), working for pay (yes/no), number of medical conditions, NPI distress score, time spent caring for basic ADL, instrumental ADL, and supervision time. The deviance R² (R²_DEV) was calculated for each of the GLMs. Sensitivity analyses were used to explore the relationship between cognitive and functional measures and the ZBI. The ADCS-ADL score was removed from the model selection while keeping the MMSE score in the model. For functionality, the relationship between the ADCS-ADL and ZBI was explored in both caregiver groups through alternative GLMs, which replaced the total ADL score with the ADL-instrumental and ADL-basic subdomain scores, and with the scores for the four subdomains of basic activities, household activities, communication, and outdoor activities.
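The backward-selection procedure can be sketched in plain Python. This is an illustrative OLS-based stand-in (a normal-identity GLM is equivalent to ordinary least squares), not the SAS PROC GENMOD code the study used, and the variable names in the usage example are hypothetical.

```python
import numpy as np
from scipy import stats

def backward_select(X, y, names, forced, alpha=0.05):
    """Backward elimination for a normal-identity GLM (equivalent to OLS).

    Repeatedly drops the non-forced predictor with the largest p-value
    above alpha; 'forced' predictors (standing in for country and MMSE
    severity group, which the study always retained) are never removed.
    """
    keep = list(names)
    while True:
        cols = [names.index(n) for n in keep]
        A = np.column_stack([np.ones(len(y)), X[:, cols]])  # intercept + terms
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        dof = len(y) - A.shape[1]
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(A.T @ A)
        t = beta / np.sqrt(np.diag(cov))
        p = 2.0 * stats.t.sf(np.abs(t), dof)
        # candidate for removal: worst non-forced term (p[0] is the intercept)
        cand = [(p[i + 1], n) for i, n in enumerate(keep) if n not in forced]
        if not cand:
            return keep
        worst_p, worst = max(cand)
        if worst_p <= alpha:
            return keep
        keep.remove(worst)

# usage on synthetic data: 'adl' and 'distress' drive the outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
names = ["mmse", "adl", "distress", "extra"]
y = 2.0 * X[:, 1] + 1.5 * X[:, 2] + 0.1 * rng.normal(size=300)
selected = backward_select(X, y, names, forced=["mmse"])
```

Forcing variables into the model, as done here with "mmse", mirrors the study's choice to keep country and severity group regardless of significance so that the remaining estimates are adjusted for them.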
Because caregivers of patients with dementia report high rates of depression [10,42] and depression has been reported as a predictor of caregiver burden as assessed using the ZBI [43] , we wanted to investigate the impact of caregiver depression on burden in our study population. A further GLM included the presence of caregiver depression (yes/no) in the final model. All sensitivity analyses for the adult-child model were run with the 'caregiver lives with the patient' factor included. The different sensitivity analyses were compared using the Akaike information criterion (AIC) model fit statistic from the GLMs.
All data were analyzed using SAS software, version 9.2 (SAS Institute, Cary, N.C., USA). Statistical significance was considered to be p < 0.05.
Results
The study cohort included 1,497 patients with AD, 985 with spousal caregivers and 405 with adult-child caregivers. The remaining 107 patients were excluded from the analysis because their primary caregiver at baseline was reported as being a friend or another person who was not the spouse or adult child of the patient (caregiver-patient relationship status was missing for 3 patients). Thus, a total of 1,390 caregivers were analyzed: 985 (70.9%) spousal caregivers and 405 (29.1%) adult-child caregivers. Table 1 summarizes the patient and caregiver characteristics at baseline by caregiver relationship to the patient. Patients with adult-child caregivers were older, predominantly women (82.5%), mostly widowed (75.8%), had a shorter mean time since AD diagnosis and a shorter duration of education, 49.1% lived alone in their own home, and 18.9% had experienced a fall in the past 3 months. The adult-child caregivers were mostly women (74.6%), 58.3% were working for pay, and 27.2% were the sole caregiver, whereas the spousal caregivers were older, fewer were women (58.8%), 72.0% were the sole caregiver, 91.5% were not working for pay, and they had a higher number of medical conditions. Almost all (99.2%) of the spousal caregivers lived with the patient.
In the adult-child caregiver group, there were 133 (32.8%), 136 (33.6%) and 136 (33.6%) patients with mild, moderate and moderately severe/severe AD, respectively; the corresponding numbers in the spousal caregiver group were 400 (40.6%), 298 (30.3%) and 287 (29.1%). The caregiver burden (unadjusted mean ZBI total score) was higher with greater AD severity in both groups of caregivers (fig. 1). A GLM of the ZBI using the MMSE severity group, caregiver-patient relationship and the interaction between the two as factors showed that both the MMSE severity group (p < 0.001; i.e., greater burden with higher AD severity) and caregiver-patient relationship (p < 0.001; i.e., adult-child caregivers have a higher burden than spousal caregivers) were statistically significantly associated with ZBI total score, but that the interaction between MMSE severity and caregiver-patient relationship was not significant (p = 0.23). Thus, there is no evidence that ZBI total score behaves differently across AD severity groups depending on the type of caregiver. Table 2 reports the least square mean differences and associated p values for the caregiver-patient relationship factor in the GLMs for baseline clinical characteristics. The models show some differences in clinical characteristics between patients cared for by adult-child caregivers versus spousal caregivers.
Table 1 notes: b Marital status is not reported for the group of patients who were cared for by their spouse because by definition their marital status has to be married. Relationship status was collected in the RUD questionnaire, which does not specify how people who are partners/cohabiting with the patient should respond, and they may be included in the spousal group with husbands and wives. c For these patients, some of the 'other' persons they live with will be the adult-child caregiver.
Although the patients with adult-child caregivers had a
shorter duration of disease (table 1) and better functional abilities in instrumental ADL and the subdomain of household activities, they had worse scores in the ADL basic activities subdomain, more behavioral problems (NPI total score; NPI subdomains of psychosis, affective and apathy) and a lower HRQoL (EQ-5D index and visual analog scale scores) than the patients with spousal caregivers. Adult-child caregivers had a greater caregiver burden than the spousal caregivers (unadjusted mean ZBI total score: 31.8 vs. 28.1, respectively; p < 0.001; table 2). The adult-child caregivers spent less time on all aspects of caregiving (basic ADL, instrumental ADL, and supervision) and had a better HRQoL compared with the spousal caregivers (table 2). The results of the final GLMs (table 3) indicated that a lower caregiver burden (ZBI total score) was independently associated with better patient functional ability (higher ADCS-ADL total score) in both groups of caregivers. Disease severity (MMSE) was not independently associated with caregiver burden once other variables were taken into account.
Table 2 notes: Data are presented as means ± SD. p values in bold are significant. ADAS-cog = cognitive subscale of the Alzheimer's Disease Assessment Scale; VAS = visual analog scale. Reduced functioning is indicated by lower scores for MMSE and ADCS-ADL and higher scores for ADAS-cog; increased impairment is indicated by higher scores for NPI; reduced quality of life is indicated by lower scores for EQ-5D; greater caregiver burden is indicated by higher ZBI scores. a Least square mean differences and p values from the type III sum of squares of the GLM, which included the factors caregiver-patient relationship, MMSE severity and country. b Model includes only the factors caregiver-patient relationship and country. c For mild and moderate MMSE severity groups only.
Also, a higher caregiver NPI distress score was independently associated with increased caregiver burden for both adult-child and spousal caregivers. For adult-child caregivers, a higher burden was associated with living with the patient, patient living in an urban location, and whether the patient had experienced a fall in the past 3 months. For spousal caregivers, a higher burden was associated with the caregiver being female, being younger, and when the patient had spent more years in education.
Sensitivity Analyses
When the ADCS-ADL total score was excluded from the final model, AIC statistics showed a slightly poorer model fit. Without the ADCS-ADL total score in the spousal model, the MMSE severity group was significant, and caregiver time for basic ADL and instrumental ADL and disease duration were included in the model, whereas in the adult-child caregiver model, the MMSE severity group remained nonsignificant and no additional factors entered the model.
Table 3 notes: Where the cells are blank, the variable was not entered into the final model for that group of caregivers (p > 0.05 in univariate analysis). A negative estimate indicates a lower caregiver burden compared with the reference category (for categorical variables) or as the value increased (for continuous variables). p values in bold are significant. CL = confidence limit. a MMSE and country were forced into the final models.
Replacing the ADCS-ADL total score with the instrumental and basic ADL subdomains showed that the association of functionality with burden was predominately associated with the instrumental subdomain for both of the caregiver-patient relationship cohorts.
Replacing the ADCS-ADL total score with the four ADL subdomains (basic activities, household activities, outdoor activities, and communication) improved the model fit (based on the AIC statistic) for both caregiver cohorts. In the spousal model, household activities and communication were statistically significant in the model, and higher scores were associated with a lower burden. In the adult-child model, only the communication subdomain was statistically significant; an increase in the scores for ADL communication was associated with a lower burden.
Inclusion of caregiver depression as a specific condition improved the model fit (based on the AIC statistic) for both the spousal and adult-child models. Higher burden scores were associated with the caregiver reporting depression.
Discussion
Our results show that two thirds of the patients with AD in the GERAS study cohort were being cared for by their spouse at the baseline assessment, and that adult-child caregivers appeared to experience a higher burden than spousal caregivers despite spending less time caregiving. Regression analysis showed that patient functioning and caregiver distress due to patient behavior were both associated with caregiver burden irrespective of the caregiverpatient relationship. Disease severity (MMSE) was not an independent determinant of caregiver burden due to the strong association between patient functional ability and disease severity. Additional patient and caregiver characteristics were differentially associated with the burden experienced by adult-child and spousal caregivers. These findings imply that the caregiver's relationship to the patient should be considered when implementing interventions for reducing caregiver burden and improving the care of community-dwelling patients with AD.
Although the GERAS study baseline results are generally consistent with those of some previous studies (e.g. Conde-Sala et al. [13] ), our study has a particular strength in that it was conducted in a large cohort of community-dwelling patients, with a wide distribution of AD severity (including more severe AD), and with their family caregivers in multiple countries. Other studies tended to be conducted in individual countries, in much smaller samples, and to focus on patients with milder AD.
Consistent with data from Spain [13] and other studies [15] , we found that the caregiver burden was greater in adult-child caregivers than in spousal caregivers. Subjective reasons for this have been attributed to spouses living with incremental increases in disease severity and gradually adjusting to this way of living, whereas the adult-child caregivers may experience considerable disruption to their usual lifestyle and may have to perform other family duties in addition to caring for their parent with AD [13] . In contrast, some researchers have found that spouses of patients with dementia experience a greater burden than adult-child caregivers [20,21,44] . For example, in Korea, spousal caregivers reported a higher burden than either adult-child caregivers or daughters-in-law, with the latter group forming the largest group of caregivers for older family members in that country [44] . Additionally, data from a population-based survey in Latin America, India, and China showed that children/children-in-law had similar ZBI scores to spousal caregivers, whereas other types of caregivers had a lower level of burden [24] . It is likely that cultural traditions in caring for frail older parents are an important factor influencing the differential findings from these studies.
In the GERAS study, the two factors independently associated with caregiver burden in both adult-child and spousal caregivers were patient functional abilities (ADCS-ADL total score) and caregiver distress due to the patient's behavioral symptoms (NPI distress score). Better patient functioning (ADCS-ADL total score) was associated with a lower burden in both types of caregiver, although the estimates in table 3 indicate that the impact was greater on spousal caregivers, almost all of whom lived with the patient. Our results are consistent with those of Conde-Sala et al. [13] , who found that greater functional ability (measured using the Disability Assessment for Dementia scale) was negatively correlated with caregiver burden for both adult-child and spousal caregivers. Both that study and the current analysis found no difference in functional ability between patients cared for by adult-child versus spousal caregivers in the univariate analysis. Our sensitivity analysis showed that instrumental ADL but not basic ADL was significantly associated with burden for both types of caregiver. Considering the four subdomains of the ADL, adult-child and spousal caregivers had a significantly higher burden with poorer scores in communication, whereas only spousal caregivers had a higher burden associated with poorer scores on household activities. Thus, spousal caregivers were impacted more by a patient's loss of ability to perform household chores, which would be consistent with most spouses being the sole caregiver and living with the patient and, therefore, having to undertake the household tasks when the patient was unable to perform them.
In previous studies, impaired instrumental ADL [20] and ADL abilities (measured using the Disability Assessment for Dementia scale) [45] were associated with caregiver burden. However, functional ability (ADCS-ADL score) was only weakly correlated with caregiver burden in an analysis of the CATIE-AD study [22] . One reason for the different findings between studies may be that the concept of ADL is closely related to MMSE severity and caregiver time, such that these variables may have a similar influence on caregiver burden. The results of the model used by Bergvall et al. [45] implied that part of the association between ADL abilities and caregiver burden was mediated by informal care hours. Our sensitivity analyses confirm the relationship between ADL, MMSE and caregiver time; however, the results indicate that ADL is a better explanatory variable for caregiver burden than either of the other two variables.
Research has consistently shown that AD patient behavioral problems (neuropsychiatric symptoms) have a substantial impact on the caregiver and are associated with increased caregiver burden [13,14,20,29,[45][46][47] . Some of these studies have reported that caregiver burden/distress is associated with specific behavioral disturbances, but the findings vary between studies. In the GERAS study, adult-child caregivers reported significantly greater patient impairment in the NPI subdomains of psychosis, affective and apathy compared with spousal caregivers. In the regression model, greater caregiver distress due to the patient's behavioral symptoms (NPI distress score) was associated with greater caregiver burden in both caregiver-patient relationships. These findings suggest that treatments for reducing patient behavioral symptoms are an important therapeutic option and may not only alleviate patient suffering, but may also lead to a reduction in the perceived burden of caregivers.
Although the number of medical conditions in caregivers was not found to be significant in the burden analysis for either spousal or adult-child caregivers, when we explored caregiver depression as a specific condition it was found to be associated with burden for both types of caregiver. In a previous study, depressive symptoms were related to the burden of adult-child caregivers but not of spousal caregivers [13] .
Our analyses identified differential factors associated with adult-child and spousal caregiver burden. Caregiver gender and age were associated with spousal caregiver burden: wife caregivers and younger spouse caregivers reported a greater burden. The gender difference in spousal caregiver burden is in agreement with previous studies [13,48] , and may be mediated by different coping strategies [48] . Male spousal caregivers may also receive more support from other family members or professional services [49] . Other researchers have found that younger caregivers can experience a higher level of burden [20] . Spousal caregiver burden was also greater when caring for patients with more years of education, which has not been reported previously, although the estimate for this variable is quite small for each change in burden score. Considered together with the findings on caregiver gender and age, we could hypothesize that caregivers may have expectations of patients with more years of education to fulfill specific aspects of daily living (e.g. financial or emotional aspects), and these caregivers are now struggling with their change in role and their ability to provide support for their spouse.
Our results showed that adult-child caregivers who live with the patient (26.2% of the adult-child caregiver group) had a higher burden than those who do not live with the patient, supporting the findings of Conde-Sala et al. [13] . One possible reason for this difference may be that caregivers who live with a parent with AD have greater difficulties in managing the situation [13] , especially if they have family and/or work responsibilities, and may have less personal time and experience more social isolation from friends. Research into the impact of marital status in adult-child caregivers on burden may be useful to further assess this finding. Another interesting finding is the association between greater adult-child caregiver burden and whether patients had experienced a fall in the previous 3 months. Falls are common in elderly people with dementia and increase patient dependence on their caregiver for basic ADL [50] . More falls can also be considered a measure of worse patient physical health and can result in reduced mobility, which would be expected to lead to increased caregiver time spent caring for the patient, thereby potentially increasing the caregiver burden. In a prospective cohort study in Italy, caregiver burden was associated with the AD patient's risk of falling, and there was a higher risk of falls among patients whose caregivers were nonspouse and non-first-degree relatives [51] . These researchers speculated that caregiver emotional and psychological distress might contribute to patient falls. Although further research is needed to elucidate the causal relationship, both findings could imply that interventions aimed at reducing caregiver burden, especially among adult-child caregivers, should include training and strategies for preventing patient falls.
The analyses presented in the current article focus on the subjective emotional aspects of caregiver burden assessed using the ZBI. Other aspects of caregiver burden, such as caregiver health (physical and mental) [12] , time spent caring or the costs of informal care [27] , may be affected by other factors than those reported here.
Several limitations of the study must be considered when interpreting the results. First, this was a selected sample of community-dwelling AD patients and may not represent the full spectrum of caregiver burden, although patients with mild, moderate and moderately severe/severe AD were included. Second, although we have identified several patient and caregiver factors associated with subjective caregiver burden, we cannot assume causality and it remains unclear whether these factors are a cause or consequence of caregiver burden. Third, there may be some rating bias in the caregiver assessment of patient behavioral symptoms (NPI) and HRQoL (EQ-5D), as it has been shown that the health and well-being of caregivers can influence their evaluation of the patient; caregiver burden and depression have been associated with more negative assessments of these patient factors [52]. Moreover, the worse patient behavioral symptoms (higher NPI total score) in the adult-child caregiver cohort indicate potential selection bias, such that patients living with spouses with very severe behavioral symptoms may have already been institutionalized. Fourth, although HRQoL is a potentially important variable affecting caregiver burden, it was not well measured by the EQ-5D. We did not include EQ-5D scores in the GLMs examining factors associated with the ZBI total score because patient HRQoL is a proxy measure completed by the caregiver and so may be too complex to associate with the caregiver feeling of burden. Also, the baseline caregiver EQ-5D score has a skewed distribution, which will make any association with caregiver burden difficult to interpret. Employing a disease-specific HRQoL scale may have been more useful. In their analysis of the differences between adult-child and spousal caregivers in the perceived quality of life of patients using the Quality of Life in AD scale, Conde-Sala et al. [53] found that adult-child caregivers had a more negative perception of patient quality of life compared with spousal caregivers, and this was associated with greater caregiver burden and higher levels of depression in the patient.
Our study considered a broad range of potential factors that may be associated with caregiver burden. Our baseline analyses identified differential factors associated with burden that could potentially be prevented (e.g. falls) or modified (e.g. living location) to help reduce caregiver burden. As GERAS is a longitudinal study, future analysis of changes in burden and factors associated with these changes will be useful in assessing the impact of disease progression on caregiver burden.
In conclusion, our results showed differences between the burden perceived by adult-child and spousal caregivers when caring for their relative with AD. Our data, from a large study involving patients and caregivers from three European countries, are consistent with those of Conde-Sala et al. [13], suggesting that the previous findings in caregivers of patients with milder AD [13] can be extended to caregivers of patients with more severe AD. Several patient and caregiver characteristics and clinical factors associated with caregiver burden differed between adult-child and spousal caregivers, although better patient functioning (particularly instrumental ADL) and less caregiver distress due to the patient's behavioral problems were associated with a lower burden for both types of caregiver. Our findings suggest that the needs of adult-child and spousal caregivers are different and imply that interventions aimed at reducing the individual caregiver burden and improving patient care should be tailored to support each caregiver depending on the caregiver-patient relationship.
"year": 2014,
"sha1": "70b0443f252d55bfa553526d7fcb179b5d4f0f04",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/358234",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60c8eebbf0d7115e0bfb80fa7c5f1df792f6a6e6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Proteomic discovery and verification of serum amyloid A as a predictor marker of patients at risk of post-stroke infection: a pilot study
Background Post-stroke infections occur in 20–36% of stroke patients and are associated with high morbidity and mortality rates. Early identification of patients at risk of developing an infection could improve care via earlier treatment, leading to a better outcome. We used proteomic tools in order to discover biomarkers able to stratify patients at risk of post-stroke infection. Methods This post hoc analysis of a prospective cohort study of 40 ischemic stroke patients included 21 infected and 19 non-infected participants. A quantitative, isobaric-labeling proteomic strategy was applied to the plasma samples of 5 infected and 5 non-infected patients in order to highlight any significantly modulated proteins. A parallel reaction monitoring (PRM) assay was applied to 20 additional patients (10 infected and 10 non-infected) to verify the discovery results. The most promising protein was pre-validated using an ELISA immunoassay on 40 patients and at different time points after stroke onset. Results Tandem mass spectrometry analysis identified 266 proteins, of which only serum amyloid A (SAA1/2) was significantly (p = 0.007) regulated between the two groups of patients. This acute-phase protein appeared to be 2.2 times more abundant in infected patients than in non-infected ones. These results were verified and validated using PRM and ELISA immunoassays, which showed that infected patients had significantly higher concentrations of SAA1/2 than non-infected patients at hospital admission, but also at 1, 3, and 5 days after admission. Conclusions The present study demonstrated that SAA1/2 is a promising predictor, at hospital admission, of stroke patients at risk of developing an infection. Further large, multicenter validation studies are needed to confirm these results. If confirmed, SAA1/2 concentrations could be used to identify the patients most at risk of post-stroke infections and therefore implement treatments more rapidly, thus reducing mortality.
Electronic supplementary material The online version of this article (doi:10.1186/s12014-017-9162-0) contains supplementary material, which is available to authorized users.
Background
Stroke is a serious medical condition resulting in brain cell death. It occurs when there is a lack of blood flow to the brain (~80% of cases) or a hemorrhage affecting the brain or its surroundings (20%). Every year, around 15 million people suffer a stroke, leading to 6 million deaths and 5 million disabled patients [1][2][3]. Around 40% of patients die within the first weeks following the stroke [4,5]. Non-modifiable factors, such as the severity of the stroke or the age of the patient, are highly correlated with mortality [6,7]. However, one-third of deaths result from potentially preventable stroke-associated complications. Nosocomial infection, particularly bacterial pneumonia, is the most common complication after stroke, with an incidence of 5–22% [8][9][10]. Despite the intensive care given to these patients, infection rates remain elevated and are associated with poor functional outcome and mortality [11]. The high incidence of infection is likely to be a result of impaired immune function: the patient's reduced ability to combat bacteria is a consequence of the initial brain damage [12,13]. Therefore, the early identification of patients who might be prone to developing an infection after stroke is a necessary step towards better hospital management, more rapid implementation of treatments, and improved long-term patient outcomes [13,14].
In clinical practice, the diagnosis of post-stroke infection is challenging, and there has been no satisfactory concordance between different studies. The most widely studied markers of post-stroke infection, namely procalcitonin (PCT), C-reactive protein (CRP), and white blood cells (WBC), have shown only moderate predictive value, and their levels do not increase early enough to be of help before the infection is clinically apparent [15,16]. Clinical signs such as older age, fever, severe stroke, or dysphagia, among others, have been linked to post-stroke pneumonia [17]. Nevertheless, they are not specific enough to act as individual markers. Combinations of these markers with clinical scales such as the A2DS2, AIS-APS, and ISAN [17][18][19] have not been applied routinely in clinical practice. The gold standard for diagnosing an infection is the result of a bacterial culture, yet this may take 2 days. All of these factors can lead to antibiotic treatment being started too late, with the unfortunate associated consequences. There is thus a clear need for a reliable early biomarker [20].
The present study aimed to use proteomic approaches to find a biomarker that could be tested for at hospital admission in order to identify patients at risk of developing a post-stroke infection. To do this, we investigated the plasma proteomes of infected and non-infected patients, using isobaric labeling methods. After selecting SAA1/2 as the most promising protein, parallel reaction monitoring (PRM) and the enzyme-linked immunosorbent assays (ELISA) confirmed its ability to predict which patients were at risk of infection after a stroke.
Study design and setting
We performed a post hoc analysis of a prospective cohort study which included 40 ischemic stroke patients (ClinicalTrials.gov NCT00390962) who had been hospitalized consecutively at the University Hospital of Basel (Switzerland) between November 2006 and November 2007. The study protocol was conducted according to the principles expressed in the Declaration of Helsinki and with the approval of the local ethics committee. Before enrolment, informed consent was obtained from patients, their relatives, or their legal guardians.
Clinical protocol
Comprehensive information on the assessment of the study participants' demographic and vascular risk factors has been published previously [15]. Briefly, ischemic stroke was defined according to the World Health Organization criteria [21]. A detailed history was obtained covering vascular risk factors, vital signs, relevant comorbidities (assessed using the Charlson Comorbidity Index, CCI), and medication taken prior to the stroke. Neurological deficits were estimated using the National Institutes of Health Stroke Scale (NIHSS). Patients underwent the following standardized diagnostic workup: brain computed tomography (CT) and/or magnetic resonance imaging, long-term electrocardiography, echocardiography, and neurosonographic imaging of the extracranial and intracranial arteries. Stroke etiology was determined according to the TOAST (Trial of Org 10172 in Acute Stroke Treatment) classification criteria, which distinguish large-artery arteriosclerosis, cardioembolism, small-artery occlusion, other etiologies, and undetermined etiologies [22].
Definition of stroke-associated Infections
Stroke-associated infection (SAI) was defined as any infection occurring within the first 5 days after hospital admission [13]. Infections were diagnosed according to the U.S. Centers for Disease Control and Prevention (CDC) criteria [15]. We distinguished between pneumonia, urinary tract infection (UTI), and other infections (OI). Pneumonia was diagnosed when at least one symptom from each of the two following symptom groups was present: (1) abnormal respiratory examination, pulmonary infiltrates in chest X-rays; (2) productive cough with purulent sputum, positive microbiological cultures from the lower respiratory tract or blood cultures.
Diagnosis of a UTI required two of the following criteria to be met: fever (≥38.0 °C), urine sample positive for nitrite, leukocyturia (≥40/µL), or significant bacteriuria (≥10⁴/mL of a uropathogen). OI were diagnosed if the white blood cell count was ≥11,000/mL and CRP was ≥10 mg/L, or the temperature was ≥38.0 °C and an infectious manifestation was present. The treating physician made the diagnosis of pneumonia during hospitalization. This was then validated post hoc using patient charts.
The time point of diagnosis was taken to be the beginning of clinical symptoms which led to the diagnostic workup and resulted in the diagnosis of infection. In order to exclude any acute infections that had preceded the stroke, patients with an admission temperature >38 °C, reporting an infection lasting up to 3 days before the onset of stroke, or who required mechanical intubation were not included in the study.
Blood sample collection
Blood samples were collected by venous puncture within the first 72 h following symptom onset and then 1, 3, and 5 days after admission. Blood was collected in EDTA tubes, centrifuged for 30 min at 3000×g, and stored at −80 °C.
Proteomic study

Quantitative proteomic analysis: TMT
Quantitative proteomic analyses were performed on five infected and five non-infected patients at hospital admission. The aim was to identify significantly regulated proteins between the two groups in order to find a promising infection marker.
Reduction, alkylation, digestion, and TMT labeling

The quantitative proteomic experiment used 1 µL of each plasma sample. The samples were dried and reconstituted in 16.6 µL of 6 M urea in 0.1 M triethylammonium bicarbonate (TEAB). The proteins were reduced by adding 1 µL of 50 mM tris-(2-carboxyethyl)phosphine hydrochloride (TCEP) to each sample and reacting for 1 h at 37 °C. After the samples had cooled to room temperature, they were alkylated by mixing with 1 µL of 400 mM iodoacetamide and standing at room temperature for 30 min. Sixty-seven µL of 0.1 M TEAB were then added to reduce the concentration of urea to <2 M. Digestion was carried out overnight at 37 °C using 1 µg of trypsin for each 20 µg of protein. The protocol is detailed by Dayon et al. [23,24].
Subsequently, each digested plasma sample was labeled with one of the 10 TMT reagents (Thermo Fisher Scientific, Waltham, USA). Infected patients' samples were labeled with TMTs 127n, 128n, 129n, 130n, and 131n. Non-infected patients' samples were labeled with TMTs 126, 127c, 128c, 129c, and 130c. To calculate experimental error, 1 µg of β-lactoglobulin was spiked in each sample. All the samples were pooled, desalted using a C18 Macro SpinColumn, and dried in a speed-vacuum.
Off-gel electrophoresis (OGE)

OGE separation was carried out using an Agilent 3100 Off-Gel fractionator, as per the manufacturer's instructions. Previously dried samples were reconstituted using the OGE solution and then focused using an immobilized pH gradient (IPG) dry strip (13 cm, pH 3–10) [23]. After OGE, samples were desalted using a C18 Micro SpinColumn, dried in the speed-vacuum, and stored at −20 °C until analysis.
Briefly, peptides reconstituted in 5% ACN, 0.1% FA were trapped on a 5 µm, 200 Å Magic C18 AQ (Michrom) 0.1 × 20 mm pre-column and separated on a 5 µm, 100 Å Magic C18 AQ (Michrom) 0.75 × 150 mm column with a gravity-pulled emitter. Both columns were made in-house. The analytical separation ran for 65 min using a gradient of H₂O/FA 99.9/0.1% (solvent A) and CH₃CN/FA 99.9/0.1% (solvent B). The gradient ran at a flow rate of 220 nL/min as follows: 0–1 min 95% A and 5% B, then to 65% A and 35% B at 55 min, and 20% A and 80% B at 65 min. For the MS survey scans, the OT resolution was set to 60,000 and the ion population to 5 × 10⁵, with an m/z window from 400 to 2000. A maximum of three precursors were selected for both collision-induced dissociation (CID) in the LTQ and higher-energy collisional dissociation (HCD) with analysis in the OT. For MS/MS in the LTQ, the ion population was set to 7 × 10³ (isolation width of 2 m/z), whereas for MS/MS detection in the OT, it was set to 2 × 10⁵ (isolation width of 2.5 m/z), with a resolution of 7500, a first mass at m/z = 100, and a maximum injection time of 750 ms. The normalized collision energies were set to 35% for CID and 60% for HCD.
Protein identification

MS data were processed using EasyProtConv. Peak lists were generated from the 12 OGE fractions by combining the HCD and CID raw data. These data were then submitted to the EasyProt software platform (version 2.3, build 718), which uses Phenyx software (GeneBio, Geneva, Switzerland) for protein identification. The protein search was made against the UniProt/Swiss-Prot database (2014-10, 669903) [26], applying the following search criteria: Homo sapiens taxonomy, oxidized methionine (as the variable modification), and cysteine carbamidomethylation, TMT 10 lysine, and TMT 10 amino-terminus (as the fixed modifications). Trypsin was selected as the proteolytic enzyme, allowing one missed cleavage. Parent-ion tolerance was set to 10 ppm and the accuracy of fragment ions to 0.6 Da. Only proteins with a false discovery rate (FDR) below 1% and at least two different unique peptides were selected for further analysis [27]. A minimum peptide length of 6 amino acids was used.
Protein quantification used the Isobar R package [28]. The manufacturer's isotopic distribution data was used to correct the isotopic impurities of TMT 10 reporter-ion intensities. The equal median intensity method was used to normalize the reporter intensities. Peptides which did not present reporter intensities were not quantified. The infection/no infection ratio was calculated for each peptide, combining the reporter-ion intensities between infected patient channels (127n, 128n, 129n, 130n, and 131n) and non-infected patient channels (126, 127c, 128c, 129c, and 130c). To test the ratio's accuracy and biological significance, technical and biological variability were calculated for each protein ratio. A ratio p value and sample p value were calculated for each variable. Furthermore, only proteins with a cut-off threshold value higher than 1.5 or lower than 0.67 were considered [29][30][31].
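The quantification workflow above (equal-median channel normalization, infected/non-infected ratio, 1.5/0.67 fold-change cut-off) can be sketched as follows. The reporter-ion intensities below are made up for illustration; only the 5 + 5 channel design and the cut-off values come from the study.

```python
import numpy as np

# Hypothetical reporter-ion intensities: rows = peptides, columns = 10 TMT
# channels (first 5 = infected patients, last 5 = non-infected). The first
# eight rows stand in for the bulk of unregulated plasma peptides that
# anchor the normalization; the last two belong to the protein of interest.
rng = np.random.default_rng(0)
background = rng.uniform(4e5, 6e5, size=(8, 10))
target = np.array([
    [8.0e5, 9.2e5, 7.6e5, 8.8e5, 9.5e5, 3.9e5, 4.1e5, 3.6e5, 4.4e5, 4.0e5],
    [1.1e6, 1.0e6, 9.8e5, 1.2e6, 1.1e6, 5.0e5, 4.7e5, 5.3e5, 4.9e5, 5.1e5],
])
intensities = np.vstack([background, target])

# Equal-median normalization: rescale each channel so all channels share
# the same median intensity (loading correction).
medians = np.median(intensities, axis=0)
norm = intensities * (medians.mean() / medians)

# Infected / non-infected ratio for each target peptide, then the protein
# ratio as the median over its peptides.
pep = norm[8:]
ratios = pep[:, :5].sum(axis=1) / pep[:, 5:].sum(axis=1)
protein_ratio = float(np.median(ratios))

# The study's fold-change cut-off: keep ratios > 1.5 or < 0.67.
regulated = protein_ratio > 1.5 or protein_ratio < 0.67
```

This is a minimal sketch; the Isobar package additionally models technical and biological variability to produce the ratio and sample p values mentioned above.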
SAA1/2 PRM analysis
Parallel reaction monitoring (PRM) analysis was performed on ten infected and ten non-infected plasma samples using a Q-Exactive Plus mass spectrometer (ThermoFisher), as previously described [32]. The aim was to verify the discovery results.
Each sample was loaded into a PepMap precolumn (2 cm × 75 µm i.d., C18, 3 µm, and 100 Å pore size). Subsequent separation was performed in a PepMap column (50 cm × 75 µm i.d., C18, 2 µm, 100 Å pore size). A mixture of mobile A and B phases was used for peptide elution. The phase A solvent was composed of 0.1% (v/v) formic acid (Biosolve) and HPLC-grade water (Romil); the phase B solvent was composed of 0.1% (v/v) formic acid in HPLC-grade acetonitrile (Romil). To perform the separation, a linear gradient of 5-35% solvent B at 250 nL/min for 60 min was set and it was followed by a washing step (35-90% of solvent B for 10 min).
Three masses were targeted (doubly and triply charged ions), corresponding to total SAA, but also specifically to SAA1 and SAA2. The selection of the different peptides was based on two criteria: a previous SAA PRM study and the results of our quantitative proteomic analysis [32]. The three peptides selected in this way were tryptic peptides associated with each isoform.
This inclusion list triggered targeted scans at a resolving power of 70,000, with an isolation width of 1 Th around the m/z of interest, an AGC target of 1 × 10⁶, a maximum injection time of 100 ms, and a normalized collision energy of 27% in a higher-energy collisional dissociation (HCD) cell.
Data analysis

Data were analyzed using the targeted MS/MS feature available in Skyline v3.5 software [33]. In order to confirm the identity of the peptides, a data-dependent acquisition spectral library of annotated reference MS/MS spectra was created from two pools of plasma samples composed of infected and non-infected patients, respectively. Peptides were quantified by extracting the peak areas of accurate fragment ions (<6 ppm), which were then integrated across the peptides' elution profiles. For each peptide, transition peak areas were normalized by the average of the sum of the transition peak areas for all the peptides across the runs.
SAA1/2 ELISA measurement
The Vascular Injury Panel-I electrochemiluminescence (ECL) assay was used to determine the levels of SAA1/2 in the 40 stroke patients, as per the manufacturer's instructions (Meso Scale Discovery, Gaithersburg, MD). Each plasma sample was diluted 1:1000 using the sample diluent provided with the kit. An ECL detection system using multi-array technology (SECTOR Imager 2400, Meso Scale Discovery) was used to determine analyte concentrations. Samples were measured in a single replicate.
Statistical analyses
Statistical analyses were carried out using SPSS software (v21, SPSS Inc., Chicago, IL). Analytes were not normally distributed, so the Mann-Whitney U-test was used to compare the two unpaired groups. Fisher's exact test and the Chi squared test were used to assess whether patients with and without infection were significantly different according to their gender, medical history, clinical data, laboratory values, lesion size, or TOAST. All statistical tests were two-tailed, and a p value <0.05 was considered statistically significant.
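The univariate comparisons described above (Mann-Whitney U-test for non-normally distributed analytes, Fisher's exact test for categorical factors) can be sketched in a few lines; the concentrations and 2×2 counts below are invented for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical admission SAA concentrations (µg/mL) for two unpaired groups.
saa_infected = [18.2, 25.4, 14.9, 33.1, 21.7, 40.2, 16.8, 28.5, 19.9, 35.0]
saa_non_infected = [6.1, 9.4, 4.8, 12.2, 7.7, 10.5, 5.9, 8.3, 11.0, 7.2]

# Two-sided Mann-Whitney U-test, appropriate for skewed biomarker data.
u_stat, p_mw = stats.mannwhitneyu(saa_infected, saa_non_infected,
                                  alternative="two-sided")

# Fisher's exact test on a hypothetical 2x2 table:
# rows = infected / non-infected, columns = factor present / absent.
table = [[17, 4], [5, 14]]
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"Mann-Whitney p = {p_mw:.4f}, Fisher OR = {odds_ratio:.2f}")
```

With n = 10 per group and no overlap between the groups, the Mann-Whitney p value falls well below the 0.05 significance threshold used in the study.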
Multivariate analyses were performed to assess the associations between variables. The presence/absence of infection was set as the dependent variable, and SAA, CRP, WBC, and NIHSS were set as confounders. The model was validated using the bootstrap method. Categorical data were dichotomized according to the criteria in the table of demographic characteristics. Longitudinal data were also dichotomized according to the best cut-off obtained from area under the receiver operating characteristic (ROC) curve (AUC) analysis.
Baseline population characteristics
Of the 40 consecutively enrolled ischemic stroke patients, 21 developed an infection within 5 days of stroke onset (day 4 was the median day of infection development after the cerebrovascular event). Median patient age was 79 years (IQR: 70–82 years) and 55% of patients were men. Patients with severe strokes, resulting in higher NIHSS values at hospital admission, were more prone to developing an infection than patients with minor strokes. Other factors, such as hypertension, diabetes mellitus, or smoking, did not significantly affect the development of an infection. Nevertheless, according to the modified Rankin Scale, patient outcome appeared to be significantly affected by the development of an infection, as most of the patients with a poor outcome had developed an infection during their hospital stay.
At hospital admission, levels of WBC and CRP were within the normal range in both groups, with no significant differences found between infected and noninfected patients. Patients' demographic characteristics are summarized in Table 1.
Proteomic results
In order to find a biomarker able to distinguish, at hospital admission, which patients will and will not develop a post-stroke infection, the proteomes of five infected and five non-infected stroke patients were compared using quantitative proteomic analysis. Applying the criteria of a maximum of 1% FDR and at least two unique peptides, 266 proteins were quantified (Additional file 1: Table 1). Of all the proteins, serum amyloid A1 appeared to be the only significantly (p = 0.007) regulated protein between the two groups of patients, with a ratio of 2.2 after Bonferroni correction.
To verify the results obtained with the TMT 10-plex during the discovery phase, a further PRM analysis was performed on a new batch of patients. We targeted three transitions of the tryptic peptide SFFSFLGEAFDGAR in 10 infected and 10 non-infected patients. This peptide is common to all the different isoforms of acute-phase SAA. By measuring its concentration, therefore, we were sure to measure the total amount of SAA present in blood and not only that of one of the described isoforms. As shown in Fig. 1, the concentration of SFFSFLGEAFDGAR was significantly higher (p < 0.001) in infected patients than in non-infected ones, confirming that there was a clear overproduction of SAA in patients who went on to develop an infection.
Different SAA isoforms for infection development
Further PRM analyses were performed on the same 20 patients in order to evaluate whether either of the acute phase isoforms (SAA1 and SAA2) had a more significant effect on infection and inflammatory processes. The high sequence-similarity between the SAA1 and SAA2 isoforms prevented an evaluation of their effects using classic ELISAs. The present study measured three transitions in the FFGHGAEDSLADQAANEWGR peptide (unique to SAA1) and GPGGAWAAEVISNAR peptide (unique to SAA2) across 10 infected and 10 non-infected patients. As Additional file 2: Fig. 1 shows, both peptides were significantly (p < 0.001) more abundant in infected patients than in non-infected ones.
Kinetics of serum amyloid A1/2
Serum amyloid A1/2 plasma concentrations were subsequently measured in a new group of 21 infected and 19 non-infected patients in order to validate the previous proteomic results. Concentrations of this acute-phase reactant molecule were measured at hospital admission and at 1, 3, and 5 days after hospitalization, using an SAA1/2 ELISA assay. Initially, analyses were performed separately in those patients used for the discovery step and in those used for the verification/validation step. As Additional file 3: Fig. 2 shows, in both cases, SAA concentrations were significantly higher in patients who went on to develop a post-stroke infection than in those who did not. Subsequently, analyses were performed again when all the patients were evaluated together. As Fig. 2 shows, peptide concentrations were again significantly higher in infected patients than in non-infected patients, at all time points, particularly at 3 days (p = 0.01) and 5 days (p = 0.01) after stroke onset. SAA measurements were evaluated to distinguish between the two groups of patients at D0, D1, D3, and D5. As Table 2 shows, the accuracy of SAA measurements in distinguishing which patients went on to develop an infection and which did not reached values of 73.2% (cut-off: 14.2 µg/mL) and 77.1% (cut-off: 8.8 µg/mL) at hospital admission and 1 day after, respectively. Three days after hospitalization, the AUC of SAA was slightly better, reaching a value of 80.7% (cut-off: 21.4 µg/mL), and 5 days after hospitalization, the AUC was 76.7% (cut-off: 87.7 µg/mL).
To evaluate the capacity of SAA1/2 measurement to rule-in patients at risk of infection, we set specificity (SP) at between 90 and 100%. At hospital admission, with a 94.7% SP, SAA measurement reached 42.9% sensitivity (SE) and a partial AUC of 2.5% (Table 2). Three days after hospitalization, SP reached 100%, SE was 33.3%, and the partial AUC was 3.6% (Table 2). All the AUC and pAUC curves obtained at the different time points are represented in Fig. 3. These AUC and pAUC values were obtained using different cut-off concentrations corresponding to the best combination of SP and SE.
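The partial AUC restricts the ROC integral to the high-specificity region used above (specificity 90-100%, i.e., false-positive rate 0-10%). A minimal sketch on hypothetical, distinct-valued concentrations (tie grouping is deliberately omitted):

```python
import numpy as np

def partial_auc(values, labels, max_fpr=0.10):
    """ROC area restricted to FPR in [0, max_fpr] (specificity 90-100%),
    by trapezoidal integration of the empirical ROC curve.
    Assumes distinct marker values (no tie handling)."""
    values = np.asarray(values, float)
    labels = np.asarray(labels, int)
    y = labels[np.argsort(-values)]              # outcomes, highest concentration first
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))
    stop = np.searchsorted(fpr, max_fpr, side="right")
    x = np.concatenate((fpr[:stop], [max_fpr]))  # clip the curve at max_fpr
    h = np.concatenate((tpr[:stop], [np.interp(max_fpr, fpr, tpr)]))
    return float(np.sum(np.diff(x) * (h[1:] + h[:-1]) / 2))

conc = [5, 7, 9, 12, 22, 20, 30, 10, 25, 40]     # hypothetical values, µg/mL
status = [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]
pauc = partial_auc(conc, status)                 # maximum possible value is 0.10
```

The maximum attainable pAUC here is 0.10 (10%), so reported values such as 2.5% or 3.6% should be read against that ceiling rather than against 100%.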
However, due to the high variability of the SAA concentrations obtained, we decided to evaluate the possibility of using a ratio based on those concentrations to predict the development of an infection. As Table 2 shows, patients who went on to develop an infection during their hospital stay presented with SAA concentrations that were, on average, 2.4 times higher on D3 than on D1. For patients who did not become infected, average SAA concentrations remained very similar (ratio of 0.97), with no significant increase, thus suggesting that this ratio could be used as an indicator of patients at risk.
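The D3/D1 ratio logic can be sketched as follows; the patient values and the ratio cut-off of 2.0 are illustrative assumptions (the study reports mean ratios of 2.4 vs. 0.97 but does not fix a decision threshold):

```python
# Hypothetical per-patient SAA concentrations (µg/mL) at day 1 and day 3.
# flag_at_risk uses an assumed ratio cut-off of 2.0, chosen for illustration only.
patients = {
    "A": {"D1": 10.0, "D3": 26.0},   # rising SAA -> flagged
    "B": {"D1": 12.0, "D3": 11.5},   # flat SAA   -> not flagged
    "C": {"D1": 8.0,  "D3": 20.0},   # rising SAA -> flagged
}

def flag_at_risk(p, ratio_cutoff=2.0):
    """Flag a patient whose D3/D1 SAA ratio meets the cut-off."""
    return p["D3"] / p["D1"] >= ratio_cutoff

flags = {name: flag_at_risk(p) for name, p in patients.items()}
```

A ratio-based rule of this kind is attractive clinically because it normalizes each patient against their own baseline, side-stepping the large between-patient variability in absolute SAA levels.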
Multivariate analyses
Finally, we performed multivariate analyses in order to confirm that SAA was a promising biomarker of post-stroke infection and to assess whether it was an independent predictive factor. The presence of infection was set as the dependent variable, and the parameters that differed significantly between groups in the patients' demographic characteristics (NIHSS and SAA) were set as confounders. WBC and CRP were also included in the confounder group because they are widely used in clinical practice. As Table 3 shows, SAA was the only marker that displayed a relationship with the development of post-stroke infections, thus confirming and validating the possibility of measuring SAA concentrations as a biomarker of infection in stroke patients.
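A multivariate analysis of this kind is typically a logistic regression with infection as the outcome and the candidate markers as covariates. A minimal Newton-Raphson sketch on synthetic data; the variable set, coefficients and data below are illustrative assumptions, not the study's model output:

```python
import numpy as np

def logistic_fit(X, y, n_iter=25):
    """Maximum-likelihood logistic regression fitted by Newton-Raphson.
    X: (n, p) covariate matrix (no intercept column); y: 0/1 infection outcome."""
    X = np.column_stack([np.ones(len(X)), X])        # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # predicted probabilities
        grad = X.T @ (y - p)                         # score vector
        H = (X * (p * (1 - p))[:, None]).T @ X       # Fisher information matrix
        beta += np.linalg.solve(H, grad)             # Newton step
    return beta

rng = np.random.default_rng(0)
n = 200
saa = rng.gamma(2.0, 10.0, n)                        # synthetic SAA, µg/mL
nihss = rng.integers(0, 25, n).astype(float)         # synthetic NIHSS scores
true_logit = -3.0 + 0.08 * saa                       # toy model: infection driven by SAA only
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
beta = logistic_fit(np.column_stack([saa, nihss]), y)
# beta[1] (SAA) should come out positive; beta[2] (NIHSS) should be near zero
```

In practice one would report odds ratios (exp of the coefficients) with confidence intervals, e.g. via statsmodels, rather than a hand-rolled fit; the sketch only shows the mechanics.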
Discussion
The present study highlighted the capacity of proteomics to identify protein biomarkers that could assist in the detection of stroke patients at a high risk of developing post-stroke infection [34]. Using isobaric labeling methods, we first compared the plasma samples of five infected stroke patients and five non-infected stroke patients. We found that serum amyloid A1 was overexpressed in patients who went on to develop an infection. This first approach was then verified using parallel reaction monitoring in 20 stroke patients (10 infected and 10 non-infected). Finally, the SAA1/2 concentrations of 40 ischemic stroke patients were confirmed using ELISA kits. The results demonstrated that SAA1/2 concentrations were consistently higher in patients who went on to develop an infection. SAA has already been described as a potential marker of inflammation and infection in several pathological conditions, including stroke and subarachnoid hemorrhage [35]. In a case-control study involving 54 patients, levels of acute-phase proteins were significantly higher in stroke patients who had developed an infection during the month preceding a cerebrovascular event than in those who had not developed one. During the month following the stroke, SAA concentrations became significantly higher, from 3 days after the onset of symptoms onwards, in patients who went on to develop an infection than in those who had only had an infection preceding the cerebrovascular event [36]. This highlighted that the acute-phase response was clearly related to the development of infections. In another study of 60 patients, levels of SAA were significantly higher in the 45 patients with stroke than in the 15 control patients without stroke. SAA concentrations increased between days one and three in patients with a cerebral infarction complicated by an infectious inflammatory process [37]. These results suggested a correlation between the acute-phase response and the development of an infection.
Nevertheless, to the best of our knowledge, until now no one had evaluated the ability of these acute-phase molecules to act as predictors of infection in ischemic stroke patients.
In a population of 81 subarachnoid hemorrhage patients, SAA concentrations measured at hospital admission predicted which patients would develop an infection during their hospital stay with an accuracy of 76% [38]. We therefore decided to perform the same analysis using ischemic stroke patients in the present study. As already shown, very similar results were obtained.
To the best of our knowledge, our study is the first to assess the predictive value of SAA concentrations while taking into account both the time points of measurement and the diagnosis. We found that, at a very high specificity, the SAA concentration was able to detect 42.9% of the stroke patients who went on to develop an infectious complication.
Human SAA is an acute-phase protein primarily expressed by the liver [39]. There are four different but closely related genes responsible for the protein's different isoforms. In humans, the production of SAA1 and SAA2 takes place under inflammatory conditions. SAA3 is a pseudo-gene, and SAA4 encodes a protein that is produced constitutively [40]. Inflammatory SAA1 and SAA2 share around 90% of their gene sequence. Due to the similarities between the two isoforms, immunoassays have been unable to differentiate between them [39], and studies to date have been unable to determine which of the isoforms is most associated with infectious and inflammatory processes. In the present study, we used the PRM method to track each isoform and evaluate its contribution. As shown in Additional file 2: Fig. 1, both SAA1 and SAA2 are related to infection development. The FFGHGAEDSLADQAANEWGR peptide, unique to SAA1, and the tryptic SAA2 GPGGAWAAEVISNAR peptide appeared to be significantly more abundant in patients suffering from an infection than in patients without one.
SAA concentrations are most likely higher in stroke patients suffering from infections due to the protein's role in attracting leukocytes and immune cells to sites of tissue damage, infection, or inflammation [41,42]. As previously described, inflammation is an important part of the reactions taking place after an ischemic event. Indeed, blood-derived leukocytes and microglia will be activated from minutes to hours after a cerebrovascular event [43]. Recruitment, activation, and adhesion of leukocytes to the endothelium will happen at the same time as neutrophils and monocytes/macrophages transmigrate into the location of the cerebral infarction [44]. During this process of brain damage, the acute-phase response will also be activated, and acute-phase proteins such as SAA, CRP, haptoglobin, α1-acid glycoprotein, and α1-antichymotrypsin appear increasingly in the blood [44].
The present study has certain limitations. (1) The cohort was small and its results should be validated in a larger cohort of patients in order to have sufficient samples for the subgroup analyses (infection, no-infection).
(2) The study proposed SAA as a promising prognostic infection marker in stroke patients. Nevertheless, combining SAA concentrations with other clinical scales (NIHSS) or scores could improve the accuracy of the association. Different combinations should be tested to evaluate the potential added value of a panel of markers. (3) Another point which remains to be investigated is why SAA concentrations become elevated in patients developing an infection much earlier than CRP does, for example. The present study postulated that this was due to its role in inflammation, but are we thus measuring inflammation or are we facing a post-infection inflammation phenomenon? As previously reported, the acute-phase response is more prominent in patients who develop an infection during hospitalization, but
Conclusions
In a small cohort of stroke patients, we were able to demonstrate that the concentrations of SAA1/2 measured at hospital admission could be used to predict post-stroke infection. Applying SAA measurement in clinical settings could drastically improve patient management and, consequently, their associated outcomes. Further large, multicenter validation studies are needed to confirm these results.
A binocular perception deficit characterizes prey pursuit in developing mice
Summary Integration of binocular information at the cellular level has long been studied in the mouse model to uncover the fundamental developmental mechanisms underlying mammalian vision. However, we lack an understanding of the corresponding ontogeny of visual behavior in mice that relies on binocular integration. To address this major outstanding question, we quantified the natural visually guided behavior of postnatal day 21 (P21) and adult mice using a live prey capture assay and a computerized-spontaneous perception of objects task (C-SPOT). We found a robust and specific binocular visual field processing deficit in P21 mice as compared to adults that corresponded to a selective increase in c-Fos expression in the anterior superior colliculus (SC) of the juveniles after C-SPOT. These data link a specific binocular perception deficit in developing mice to activity changes in the SC.
INTRODUCTION
Our understanding of the development of visual circuit function is significantly advanced by employing the mouse model (see [1][2][3][4] for representative reviews). For example, we know when neurons in the primary visual cortex acquire their ability to integrate binocular input and we know that several important mechanisms required for this development are conserved across species. 1,4-6 However, we do not yet understand how to relate these developmental differences in cellular function of the cortex to natural visual behavior in developing mice. Nor do we understand whether and when significant developmental changes in other visual areas, such as the superior colliculus (SC), lead to important changes in visual behavior across development. The mouse SC in particular encodes a rich repertoire of visual information, including binocularity and stimulus motion, size and valence. [7][8][9][10] The SC also controls many important and highly conserved visual behaviors in the adult mouse such as predator/stimulus avoidance and prey capture/stimulus pursuit. [10][11][12][13][14] Thus, to best understand how known mechanisms of visual circuit plasticity relate to visual behavior, it is necessary to better understand these two aspects of mouse visual system development: (1) age-characteristic binocular behavior and (2) the relationship between binocular behavior and SC circuit maturation.
Studying the development of innate visual behavior will allow us to better establish the causal relationships between the refinement of visual neural circuits and behavior in the mouse. 11,12 In particular, the complex visual processing required for mice, other rodents and primates to capture prey is an ideal natural context to investigate visual system circuit development as related to object identification, visual search and determining stimulus salience and stimulus pursuit. [13][14][15][16] Specifically, prey pursuit entails complex visual control over motor movements which requires accurate spatial localization of targets and predictive coding to be successful. 15,17,18 There is also evidence that juvenile mammals and barn owls must have adequate time to practice the behaviors needed for prey capture to survive in the wild. 19,20 In addition, larval zebrafish demonstrate remarkable plasticity in prey capture behavior that requires increasing integration between the tectum (homologous to the SC) and forebrain circuitry. 21 Similarly, adult mice significantly change their responsiveness to virtual "prey-like" stimuli presented in our computerized-spontaneous perception of objects task (C-SPOT) after having previous experience with live crickets. 22 This emerging body of work suggests that studying the development of visually guided prey capture in the mouse as related to SC circuit development will uncover key molecular and cellular mechanisms underlying plasticity of conserved and fundamental aspects of vision.
Binocular integration within the superior colliculus mediates optimal visually guided prey capture in the adult mouse. 23,24 Monocular occlusion 23 or perturbation of ipsilateral retinal input to the SC 24 leads to an increase in time to capture a live insect and reduces the ability of mice to maintain pursuit of prey within their binocular zone. However, these recent studies did not quantify whether juvenile mice exhibited insect hunting, or hunting-like behavior, nor whether binocular-dependent perceptions and/or prey capture-related activity in the superior colliculus might be developmentally regulated in the mouse. In this study, we compared both live prey capture behavior and innate responses to virtual "prey-like" visual stimuli between P21 and adult mice. Our analyses revealed that P21 mice lack a prominent binocular visual field bias that enhances successful "target" approach and pursuit in the adult during both the live and virtual behavioral assays. C-SPOT in particular revealed that visually guided approach behavior was specifically different in early development relative to visually induced arrest behavior driven most frequently by small moving objects appearing in the monocular visual field. To connect developmental differences in visual pursuit behavior to possible differences in SC activity, we compared c-Fos expression levels in the SC between P21 and adult mice. Analysis of c-Fos 26 expression in the SC after C-SPOT revealed a distinct pattern of enhanced cellular activity in the anterior regions of the SC in juveniles responding to visual motion stimuli relative to adults. Our results indicate cellular activity differences in the anterior SC as relevant to developmental differences in binocular perception and the accurate pursuit of visual objects in mice. Overall, this approach may be a useful way to assay sensory processing differences in mouse models of neurodevelopmental disorders that impact visual system plasticity. 27
RESULTS
To understand how the mouse relates to other species such as the owl, where the young practice and improve prey capture behavior, 19 we first determined whether postnatal day 21 (P21) mice, before weaning and foraging for themselves, also use vision to detect, approach and pursue live insects (Figure 1). The house mouse, a popular model of mechanistic visual system development, readily approaches and ultimately preys on live insects using specific visual cues. 13 This behavior relies on specific neural circuitry in the superficial superior colliculus. 25 P21 laboratory mice of several background strains (see STAR methods) also detect, approach and engage live crickets repeatedly in the laboratory setting (Video S1, juvenile live prey capture, associated with data in Figure 1). No significant differences were observed on measures of prey capture performance between the C57BL/6J wild type and mixed background strains used in this study (p > 0.10 in all cases, Time to first Approach, Probability of Intercept and Duration of Contact, C57BL/6J versus Ntsr1-GN209-Cre mice and C57BL/6J versus Grp-KH288-Cre mice, Welch's t-test, with correction for multiple comparisons, 3 comparisons). We note several important differences between P21 and adult mice (Video S2, adults live prey capture, associated with data in Figure 1), specifically in approach orienting responses when they first encounter live crickets. Most dramatically, P21 mice fail to show a binocular visual field bias on initiating an approach relative to adults (Figure 1A). This correlates with a significant increase in the onset of the first approach toward a cricket by P21 mice versus adults (Figure 1B). Overall, these differences do not result in a decline in the total number of approaches started over the entire 5-min encounter with a live cricket (16.1 ± 1.2 versus 17.5 ± 2.1, P21 vs. P90, N = 12 vs. 10, respectively, p > 0.05, Welch's t-test, Videos S1 vs. S2).
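The group comparisons here rely on Welch's t-test, which does not assume equal variances. A minimal sketch of the statistic and the Welch-Satterthwaite degrees of freedom on made-up approach counts (the samples are hypothetical, not the study's raw data; converting t to a p-value additionally requires the t-distribution CDF, e.g. from SciPy):

```python
import numpy as np

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    t = (x.mean() - y.mean()) / np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
    return t, df

# Hypothetical approach counts per 5-min session (illustrative values only)
p21 = [14, 16, 18, 15, 17, 16, 15, 17, 16, 17]
adult = [17, 19, 21, 15, 18, 16, 20, 14, 17, 18]
t, df = welch_t(p21, adult)
```

Welch's variant is the safer default with small, unbalanced behavioral samples because the pooled-variance t-test can be anticonservative when group variances differ.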
The detection and pursuit deficits correlate with a significant decrease in the probability that an approach started will end with a successful contact (Figure 1C and Videos S1 vs. S2). Furthermore, when P21 mice successfully contact live crickets, they do so from a wider mean stimulus angle at the end of approach (48.3 ± 2.2 versus 26.1 ± 0.5, P21 versus P90, N = 12 versus 10, respectively, p < 0.01, Welch's t-test) and engage in prolonged proximate contact with rapid sniffing (Figure 1D). Related to this behavior, juvenile mice rarely attack insects even after repeated exposure over 5 days relative to adults (18.2 versus 80% of juveniles versus adults attacked crickets after 5 days of exposure for 5 min each day, Fisher Exact test, p = 0.0046, N = 11 versus 10, respectively), both groups without food deprivation. Taken together, these results indicate that the binocular visual field bias that facilitates adult mouse approach and pursuit behavior is immature in P21 mice. This developmental timepoint corresponds to a known inability to integrate binocular information as studied in visual cortex. 1,28,29 However, the behavioral deficit could also relate to possible immaturity of binocular responses located in the mouse superior colliculus. 9,30 Consistent with this idea, induced perturbation of ipsilateral eye input to the superior colliculus in adults results in similar prey capture behavior deficits as we show characterize normally developing juveniles. 24 Though live prey capture analysis captured a binocular processing deficit in developing mice, it is a complex multimodal experience relying on more sensory input than just vision, 13 especially in insect-naïve mice. 31 We therefore quantified the innate visual behavior of P21 mice evoked by virtual, high-contrast motion stimuli displayed from a computer screen using a computerized, spontaneous perception of objects task, C-SPOT.

iScience 25, 105368, November 18, 2022
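The attack-rate comparison (roughly 2 of 11 juveniles vs. 8 of 10 adults) uses Fisher's exact test. A self-contained sketch of the two-sided test via the hypergeometric distribution; with these counts it yields p < 0.01, though small definitional differences in the two-sided rule mean it need not reproduce the reported p = 0.0046 exactly:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sums the probabilities of all tables (with the same margins) that are
    no more likely than the observed one."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def p_table(x):                       # hypergeometric probability, top-left cell = x
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# rows: juveniles / adults; columns: attacked / did not attack
p = fisher_exact_two_sided(2, 9, 8, 2)
```

The exact test is appropriate here because several expected cell counts fall below 5, where the chi-square approximation is unreliable.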
22 C-SPOT affords the opportunity to vary specific parameters of visual objects and quantify how that alters stimulus detection and response relative to the egocentric location of the objects that cause behavioral responses (Figures 2A-2C). Although mice have not been presented with all possible combinations of "cricket-like" features using C-SPOT, mice that experience cricket capture specifically increase approach responses to a simple moving ellipse of a particular combination of size and speed. 22 We presented this "cricket-like" stimulus to both adults and P21 juveniles (Figure 2A) and found striking developmental differences in orienting behaviors (Figures 2D and 2E). Both ages robustly approach the presented virtual stimulus over a total 5-min presentation period. However, P21 mice take significantly longer to first approach (Figures 2D and 2F) and generate fewer approaches toward the stimulus over the entire session (Figures 2D and 2G). On the other hand, P21 and adult mice arrest their locomotion in response to stimuli with similar latencies from the start of stimulus presentation and have a similar arrest frequency overall (Figures 2D-2F and 2H). This suggests that P21 mice still detect the moving stimuli as rapidly as adults and find motion in their periphery salient. Further, there are no significant sex differences in the measured behaviors (Figures 2G and 2H, p > 0.05, Welch's t-test, correction for multiple comparisons, 2 comparisons total). Thus, P21 deficits in visual processing are specific to the relative visual information that elicits an approach and allows stimulus interception. The increase in latency to approach the virtual targets by P21 mice is consistent in both our C-SPOT assay and live prey capture. This argues that a developmental difference in visuo-motor integration specifically explains the developmental differences in live prey capture behavior.
To determine more precisely which visual responses and stimulus preferences are different between P21 mice and adult mice, we quantified the visual field location, relative size (in arc degrees), and speed (in arc degrees/sec) of the virtual stimulus at the onset of an approach or arrest response (Figure 3). Consistent with what was observed during live prey capture, P21 mice lack a strong binocular visual field bias in first detecting stimuli that they then successfully and continuously approach (Figures 3A and 3B, left). In contrast to adults, the average position of stimuli that are approached is near 60° (Figure 3C). In addition, when P21 mice nearly contact the stimulus with the nose, they reach a position just in front of the stimulus whereas adults end with their nose touching at or near the center of the stimulus (Videos S3 versus S4, juvenile versus adult responses to virtual stimuli during C-SPOT, associated with data in Figures 3 and 4). This leads to a significant difference in mean absolute stimulus angle at the end of an approach (33.1 ± 7.2 vs. 11 ± 4.3, P21 vs. P90, N = 15 vs. 16, respectively, p < 0.05, Welch's t-test). Taken together, these data demonstrate that P21 mice similarly detect and respond to stimuli located in their peripheral visual field, but specifically have a deficit in how they respond to stimuli that are located in central-anterior egocentric space, where visual information can be processed binocularly for adult animals.
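Relative stimulus size in arc degrees follows from the standard visual angle formula θ = 2·arctan(s / 2d) for a physical extent s viewed from distance d. A small sketch with illustrative numbers (the 2 cm and 60 cm values are assumptions, not the study's stimulus geometry):

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Angle (arc degrees) subtended by an object of physical extent size_cm
    viewed from distance_cm: theta = 2 * atan(size / (2 * distance))."""
    return math.degrees(2.0 * math.atan(size_cm / (2.0 * distance_cm)))

# e.g., a 2 cm on-screen stimulus seen from across the arena (~60 cm)
theta = visual_angle_deg(2.0, 60.0)             # ~1.91 arc degrees

# Angular speed can be approximated the same way for small displacements:
# the angle swept by the distance covered in one second.
speed_deg_per_s = visual_angle_deg(3.0, 60.0)   # a stimulus moving 3 cm/s
```

Because the same physical stimulus subtends a larger angle as the mouse approaches, behavioral analyses like those above must recompute size and speed frame-by-frame from the mouse-to-stimulus distance.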
Importantly, our assay revealed that the perceptions of stimuli that cause approach are specifically developmentally regulated relative to those that elicit arrest. For both ages, the stimuli that are likely to elicit arrest responses are primarily located in the periphery, but otherwise do not vary significantly in relative size or speed between juveniles and adults (Figures 3D and 3E). This suggests that the visual pathways that process peripheral, monocular visual information may have similar spatial and temporal resolution capabilities and lead to similar arrest behavioral outcomes at both ages. Indeed, we showed that the arrest responses evoked by these object stimuli in the adults depend on the egocentric direction of motion of the stimulus across the visual field. 22 That is, stimuli that elicited arrests not followed by an approach were moving further into the peripheral monocular visual field of mice. In addition, both ages of mice were able to detect stimuli that caused either an approach (Figure 3A) or an arrest from the furthest possible reaches of the testing environment, ~60 cm from the mouse (Figure 3). Thus, P21 mice are capable of responding to similar relative sizes and speeds of objects as compared to adults. Importantly, we did not probe the capabilities of the juvenile visual system in this study exhaustively. Receptive field properties do refine after eye-opening even in the monocular visual field of primary visual cortex, suggesting that other aspects of monocular visual behavior may yet mature over development. 32 Thus, our future studies will seek to parametrically test a wider range of qualities of motion to further probe for differences in visual perception that change over development in a retinotopic fashion.
To map developmental differences in mouse visual orienting behavior to possible differences in activity in the SC, we assayed the expression of the immediate-early gene c-Fos (Figure 4). We reasoned that this topographically organized, evolutionarily conserved visual area may display significant differences in c-Fos activation, as it has been significantly linked to generating representations of stimulus salience, direction of motion, and relative size and speed of visual objects (see Basso et al. 33 for a recent review). Furthermore, specific cell types within the SC are required to control orienting behavior in the adult mouse during prey capture 25 and the deeper layers that are aligned to the superficial retinotopy are known to integrate multimodal sensory input (visual, auditory and somatosensory) relevant to prey capture behavior. 31,34,35 We quantified the percent of c-Fos positive cells in different regions of the SC in mice exposed to C-SPOT versus age-matched controls exposed to the same environment with no stimuli presented (Figure 4). We obtained measures at three different locations along the anterior-posterior axis (−3.5, −4.15 and −4.6 AP) to compare regions of SC that encode visual information along the nasal (central) to temporal (peripheral) visual field axis. 34,35 We also compared across the three laminar zones from the dorsal surface (superficial, intermediate, and deep) and medial versus lateral divisions of the SC at the three positions along the AP axis (Figures 4A and 4B). We calculated a ratio of c-Fos positive cells after visual stimulation normalized to age-matched controls with no stimuli in the same environment in each subregion of the SC.
We then statistically compared those ratios between comparable subregions from the two ages of mice using Welch's t-test with correction for multiple comparisons using the Benjamini-Hochberg procedure to decrease the false discovery rate, as independence could not be assumed for within-subject measures (i.e., AP, ML or depth position) (Figure 4B). This analysis revealed that c-Fos expression is significantly different between P21 and P90 mice in specific subregions of the SC after specific visual experience (Figure 4B, bolded entries). The developmental differences are most prominent along the AP axis. There is an increase in c-Fos expression related to C-SPOT experience in the anterior region of the SC in juveniles, yet a more significant decrease in c-Fos in the posterior SC of the adults after the C-SPOT experience. The anterior region of the superficial SC corresponds to visual information located within the central/nasal visual field 34 (Figure 4A) and binocular responses are prominent in the anterior SC in the deeper superficial to intermediate layers, 34,35 where specific ipsilateral projections from the retina that are required for optimal prey capture behavior target the SC. 24

Figure 4. c-Fos expression differences after C-SPOT in P21 versus P90 mice. (A) Adaptation of visual field topography reflected in the right hemisphere of the superior colliculus of the mouse from a dorsal view (Dräger and Hubel, 1976). The cartoon is meant as a general representation of visual space in the superficial SC that is observed during topographical imaging of mouse SC. 2 We assayed for c-Fos expression differences after exposure to virtual visual stimuli that evoke approach and arrest behaviors from three key planes of coronal sections.
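The Benjamini-Hochberg step-up procedure used for the subregion comparisons can be sketched as follows (the p-values are hypothetical placeholders, not the study's Welch-test results):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject H_(1)..H_(k) where k is
    the largest i with p_(i) <= (i/m) * alpha; returns a boolean reject mask
    aligned with the input order."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()     # last rank passing the step-up test
        reject[order[: k + 1]] = True      # reject everything at or below rank k
    return reject

# Hypothetical p-values for six SC subregion comparisons
rej = benjamini_hochberg([0.001, 0.008, 0.020, 0.041, 0.30, 0.62])
```

Note the step-up logic: a p-value that fails its own threshold (0.041 > 4/6 × 0.05 here) can still be rejected only if a larger-ranked p-value passes, which is why the mask is filled down from the largest passing rank rather than tested entry by entry.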
DISCUSSION
Overall, we show significant developmental differences in how mice process stimuli in the binocular visual field during a natural pursuit behavior. It is noteworthy that these behavioral differences correlate with long-known significant differences in binocular visual processing in the mouse visual system at the same age. [1][2][3][4] Further, our study finds that the developmental differences in prey pursuit behavior correlate with important developmental differences in cellular activity within the SC, where binocular integration occurs. 9 Although it remains unclear how and whether cortical activity differences may contribute to the observed visual stimulus orienting responses, Shank3 knockout mice also fail to execute normal continuous prey pursuits. 37 Shank3 knockout mice are impaired in classical forms of visual cortex plasticity, 27 although SC function and plasticity were not analyzed. It will therefore be important in future work to study the coordinated development of cortical and SC circuits as related to binocular visual processing during this behavior. Indeed, prey capture, and the pursuit of "prey-like" visual stimuli, may be ideal behavioral contexts in which to understand the development of corticofugal pathways, spatial orienting behavior and decision-making. In addition, we found a consistent decrease in c-Fos activity during C-SPOT-specific visual experience relative to free exploration in the same environment in the adult. It is unclear why c-Fos levels are reduced in posterior SC and not increased in anterior SC with visual stimulus evoked behavior in adults in this context. It is possible that the increased demands on the juveniles to learn from their experiences to shape binocular function lead to the observed increases of c-Fos in their anterior SC, yet the adults do not have these demands given fully developed binocularity.
As both groups of animals were similarly sensitive to responding to stimuli in the peripheral visual field (information that may be reflected in posterior SC), the current study lacks the ability to clearly interpret the reduced c-Fos levels in posterior adult SC during stimulus presentation. As c-Fos expression levels may reflect local responses to growth factors or neuromodulation, or may be localized to other cell types besides neurons, 38 it will be important in future physiological studies of SC activity to compare the anterior- and posterior-most responses over development during this behavior.
Developmental differences in orienting responses to prey and prey-like stimuli were most robust along the nasal to temporal visual field axis. Eye-tracking studies performed in the freely moving adult mouse during prey capture showed that the head angle of the mouse in the azimuth provides an accurate estimate of the position of the binocular visual field. 23,36 Therefore, the evidence in our study argues for more targeted developmental studies of visual stimulus encoding in the regions of SC with binocular responses. 9 However, we did not quantify head pitch and therefore do not provide an estimate of whether developmental biases occur along the upper to lower visual field axis. A previous study of prey capture in mice where the pitch as well as the orientation of the head along the azimuth was quantified found that mice angled their heads downwards during an approach. 23 This would position target stimuli higher up in the relative visual field which is already overrepresented in the topography of the SC relative to the lower visual field. 34 Strikingly, we found few significant differences in c-Fos expression after C-SPOT in either group between the medial (upper visual field) versus lateral (lower visual field) SC. Instead, c-Fos levels increased similarly in both medial and lateral regions of SC only in the juveniles after C-SPOT relative to free exploration. It therefore remains an open question as to whether and how visual information may be processed differently between these two regions of visual space in early development.
An important aspect of this study is demonstrating that C-SPOT can quantify vision in developing mice. This provides a powerful behavioral tool to assess models of neurodevelopmental disorders, Alzheimer's disease, schizophrenia or other diseases that impact sensory stimulus detection, selection and visual orienting, where learning and memory differences could also confound interpretations. Indeed, Autism-associated Shank3 knockout in mice leads to robust deficits in ocular dominance plasticity. 27 It will be interesting to determine whether our assay can more precisely quantify visual perception deficits in these mice and determine how interventions to restore gene and neural circuit function impact vision through development. Finally, C-SPOT itself is easily modified to screen for a greater diversity of visual features that naturally evoke orienting responses in mice. For example, similar styles of virtual stimulus assays are frequently used to study visual processing and visual system development in the larval zebrafish as related to stimulus size, motion and color. 39,40 Mouse researchers could similarly use C-SPOT to identify visual field biases in processing other visual features not tested in the current study that relate innate behavioral responses to the diversity of receptive field properties so far observed in the visual system of the mouse. 33,[41][42][43][44][45]

Limitations of the study

The overall study was designed to discover significant differences in spontaneous visual orienting behavior in P21 mice during natural or naturalistic conditions. However, this work did not assess behavior before P20 and we did not track the eye movements of the mice studied. It therefore remains unknown to what extent eye movement behavior differences measured over finer-grained developmental phases explain the observed behavioral deficits in orienting responses of P21 mice.
The work reported here also did not test how specific visual experience might change behavior or vision in the developing mice. Finally, c-Fos expression levels, as mentioned previously, provide a limited interpretation of neuronal activity differences between developmental timepoints. More direct measures of neuronal activity in the SC at specific retinotopically mapped positions will be needed in the future to more fully understand how neuronal activity changes during development to support the observed developmental differences in behavior.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
Materials availability
This study did not generate new unique reagents. However, a parts list and tips to create and run the described behavioral assays can be obtained by request from the lead contact.
Data and code availability
- Original c-Fos staining images have been deposited at Mendeley and are publicly available as of the date of publication. The DOI is listed below and in the key resources table. Additional microscopy data reported in this paper will be shared by the lead contact upon request. https://doi.org/10.17632/ss77yp5nfw.1.
- All behavioral data and the original code to analyze the mouse tracks have been deposited at Mendeley and are publicly available as of the date of publication. DOIs are listed below and in the key resources table. https://doi.org/10.17632/vvxfszxc88.2.
- Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
All mice were used in accordance with protocols approved by the University of Nevada, Reno, Institutional Animal Care and Use Committee, in compliance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Both male and female mice were used in the study. We tested 29 P20-22 juvenile mice and 26 adult mice, aged P90 or greater; the specific number used in each statistical comparison is noted in figure legends. A pair of opposing sides of the arena (Figure 2A) consisted of Hewlett Packard VH240a video monitors that measured 60.5 cm diagonally, with a vertical refresh rate of 50-60 Hz and resolution set to 1920 × 1080 pixels. A solid white background was displayed on both monitors to achieve even lighting throughout the arena. Visual stimuli could then appear on either monitor in C-SPOT trials. Crickets during live prey capture trials were placed into the arena by the experimenter at a location away from the mice. A Logitech HD Pro Webcam C920 digital camera was suspended overhead to capture the behavior at 30 frames per second throughout each trial.
Visual stimuli
Visual stimuli for C-SPOT were generated with the MATLAB Psychophysics toolbox (Brainard, 1997) and displayed on an LED monitor (60 Hz refresh rate, ~50 cd/m² luminance) in a dark room. To mimic insect proportions, we displayed ellipses with a major axis that was 2 times the size of the minor axis. We displayed stimuli that were 2 cm along the horizontal axis, as these stimuli evoked the most frequent approaches in adult mice and approaches to this stimulus were specifically increased after live prey capture experience in the adult. 22 A 2 cm long stimulus corresponds to a relative stimulus size of ~4° from 30 cm away from the screen. Stimulus speed was kept constant at 2 cm/s, as this speed evoked the most approaches and the fewest arrests in adult mice. 22 Stimuli were on the screen for 30 s before disappearing for 7 s and then reappearing. This pattern repeated throughout the 300 s trial.
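The quoted relative stimulus size of roughly 4° follows from the standard visual-angle formula; a quick Python check (an illustrative sketch, not the authors' MATLAB code):

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Angular size subtended by a stimulus of a given length viewed
    from a given distance: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# A 2 cm stimulus viewed from 30 cm subtends roughly 3.8 degrees,
# consistent with the ~4 degree figure in the text.
angle = visual_angle_deg(2.0, 30.0)
```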
Behavior
Mice were acclimated to handlers and the arena for 4 days prior to either the live prey capture assay or the virtual task. During habituation, mice were handled 3 times a day for 3 min each time and placed in the arena 3 times a day for 5 min each time. Young mice were habituated to handler and environment 3-4 days prior to their tested age of P20-22. On the testing/behavioral measurement day, group-housed mice were brought into the testing room in their home cages and allowed to acclimate to the darkened testing room. Then, each experimental subject was placed in the testing arena and habituated for 3-5 min, with controlled illumination emitted from the computer monitors only. Either C-SPOT was run, or live crickets were introduced and behavior recorded, but all mice were naive to other testing conditions in order to assess innate responses. The arena was cleaned with 70% EtOH after each mouse was removed to mitigate odor distractions. Exposure of mice to stimuli did not begin until each mouse demonstrated self-grooming behaviors and reduced defecation and thigmotactic behavior, which were taken as indications that they were not anxious in the environment, again, around 3-5 min.
For testing prey capture behavior, mice were given 10 min with the cricket after habituation and were not food deprived, in order to understand how visual stimuli were interpreted at an individual's baseline state. This study only quantified the first day of behavior in response to live crickets in order to quantify naive/innate responses. All mice were then returned to their home cages with standard food. The crickets used were Acheta domestica obtained from Fluker's Farm or a local pet store and were 1-2 cm in length, group-housed, and fed Fluker's Orange Cube Cricket Diet.
DeepLabCut 47 was used to digitize and extract 2-dimensional coordinates of the mouse's nose, two ears and body center, as well as the center point of the stimulus (cricket or computer-generated stimulus), throughout the video recordings at 30 Hz. These tracks were entered into customized MATLAB scripts to extract behavioral parameters: mouse speed, stimulus speed, stimulus angle, range, and subjective stimulus size and speed. An arrest "response" was defined as any time the mouse's nose and body moved less than 0.5 cm/s for a duration of 0.5-2 s. Arrests that occurred in the absence of a visual stimulus, or when the stimulus was more than 140° from the bearing of the nose, were excluded from analysis of visually driven arrest responses. Approach starts were defined as mice moving toward the stimulus starting from a distance of at least 8 cm, at an average approach speed of at least 15 cm/s and a stimulus bearing of less than 150°. Using these definitions, we computed the percentage of stimulus trials in which each behavior was observed, as well as the number of arrests and approaches that occurred during individual trials. A successful approach was defined as any time the mouse's nose came within 3 cm of the stimulus after an approach start had been identified.
For both response types, approach or arrest, we calculated range as the distance between the center of the head between the two ears and the center of the stimulus, and stimulus angle as the angular distance between the line emanating from the center of the mouse body to the mouse nose, and that from the center of the mouse body to the center of the stimulus. Angular stimulus sizes and speeds were calculated using the horizontal length of the stimulus (2 cm) and the distance of the behavioral event from the stimulus in cm.
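The arrest and approach criteria above amount to simple threshold tests on the tracked quantities. A minimal Python restatement follows; the original analysis used customized MATLAB scripts, and the function and parameter names here are illustrative only:

```python
def is_arrest(speed_cm_s: float, duration_s: float,
              stimulus_present: bool, bearing_deg: float) -> bool:
    """Arrest: nose and body moving below 0.5 cm/s for 0.5-2 s,
    with a stimulus on screen within 140 deg of the nose bearing."""
    return (speed_cm_s < 0.5
            and 0.5 <= duration_s <= 2.0
            and stimulus_present
            and bearing_deg <= 140.0)

def is_approach_start(range_cm: float, mean_speed_cm_s: float,
                      bearing_deg: float) -> bool:
    """Approach start: movement toward the stimulus from at least 8 cm
    away, averaging at least 15 cm/s, with bearing under 150 deg."""
    return (range_cm >= 8.0
            and mean_speed_cm_s >= 15.0
            and bearing_deg < 150.0)

def is_successful_approach(min_range_cm: float) -> bool:
    """Successful approach: the nose comes within 3 cm of the stimulus
    after an approach start has been identified."""
    return min_range_cm <= 3.0
```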
Immunohistochemistry
To assay c-Fos expression in mice exposed to C-SPOT and in age-matched controls without visual stimulation and experience, mice were deeply anesthetized 90 min after behavioral testing by inhalation of 3.5% isoflurane. Subjects were then transcardially perfused with 75 mL of phosphate buffered saline (1X PBS) followed by 25 mL of 4% paraformaldehyde (PFA). Brains were removed and stored in 4% PFA for 24 h at 4 °C.
Brains were then removed from PFA and rinsed with PBS before sectioning into 45 μm thick coronal sections. Floating sections were stored in 0.02% sodium azide (NaN3) in PBS at 4 °C for up to two weeks before assaying for c-Fos protein expression. To assay c-Fos protein expression, floating sections in well dishes were incubated in a blocking solution (4% bovine serum albumin (BSA), 2% horse serum, 0.2% Triton X-100, 0.05% NaN3) for 3 h, followed by incubation with rabbit anti-c-Fos primary antibody (
QUANTIFICATION AND STATISTICAL ANALYSIS
Statistics on behavioral measures were performed using MATLAB and R software. Where means are reported and data are normally distributed, we used Welch's t-test (two-group comparisons), followed by the Benjamini-Hochberg procedure to decrease the false discovery rate and correct for multiple comparisons. The specific tests used are specified in figure legends. Where medians are reported and/or data are not normal, rank-sum tests were used. Significant differences in proportional measures were determined by Fisher's exact test. Test results with a p value of <0.05 were considered significant. Cohen's D > 0.8 was considered a large effect, and all significant effects in this study had Cohen's D > 0.8.
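For reference, the Benjamini-Hochberg step-up procedure used above compares sorted p-values against linearly increasing thresholds. A self-contained Python sketch (the study's actual analysis was done in MATLAB and R):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR procedure: sort the m p-values, find the
    largest rank k with p_(k) <= (k/m) * alpha, and reject the null for
    every hypothesis whose p-value ranks at or below k."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    largest_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            largest_k = rank
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= largest_k:
            rejected[idx] = True
    return rejected
```

Note the step-up behavior: a p-value may be rejected even if it fails its own threshold, provided a larger p-value further down the sorted list passes its threshold.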
c-Fos quantification
Images were obtained using a Leica confocal microscope. Tile scans of the entirety of each coronal slice were imaged with a 20X objective and saved as a LIF image file. Sections between −3.5 and −4.6 AP were imaged. Specific regions and structures in the adult were identified as in Paxinos and Franklin's The Mouse Brain in Stereotaxic Coordinates and the Allen Mouse Brain Coronal Atlas (https://mouse.brain-map.org/static/atlas). Developmentally similar regions were confirmed using a combined magnetic resonance imaging and micro-computed tomography atlas of developing mouse brains. 48 c-Fos positive cells were identified and quantified using Imaris Microscopy Image Analysis Software (Oxford Instruments). LIF files were loaded into IMARIS, and gamma correction (0.8%) and background subtraction were standardized and applied similarly to all images. Images were analyzed blind to testing condition. Three circular regions of interest (ROIs) were drawn in each collicular layer (superficial, intermediate, and deep) in both the medial and lateral regions. To ensure similarity of subregions quantified between subjects, ROIs for the medial region spanned the center of a region as measured from the brain's midline (medial border) to the medial/lateral border of the section (halfway to 2/3 between the medial-lateral borders of the SC, Figure 4C). Lateral ROIs were drawn halfway from the medial/lateral border of the section to the lateral border of the SC. The number of c-Fos positive cells and DAPI positive cells for each ROI were used to create a ratio of c-Fos:DAPI fluorescence. These percentages were then averaged per medial or lateral region, and again averaged between 2-3 sections for each AP region for each subject. These "per subject" percent averages were then used to determine the relative increase in c-Fos positive cells between visually stimulated mice and their controls (age-matched and exposed to environment only).
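The two-step quantification described above (per-ROI c-Fos:DAPI percentages averaged per subject, then normalized to unstimulated controls) can be sketched compactly. This is a hypothetical Python illustration with made-up numbers, not the authors' pipeline:

```python
from statistics import mean

def subject_percentage(roi_counts):
    """Average, over a subject's ROIs, the percentage of DAPI+ cells
    that are also c-Fos+ (one (c_fos, dapi) count pair per ROI)."""
    return mean(100.0 * c_fos / dapi for c_fos, dapi in roi_counts)

def cfos_ratio_w_to_wo(stimulated_pct, control_pcts):
    """'Ratio of c-Fos expression (w to w/o)': a stimulated subject's
    percentage divided by the mean percentage of its age-matched
    unstimulated controls."""
    return stimulated_pct / mean(control_pcts)

# Illustrative values only: a stimulated subject whose c-Fos+ fraction
# is double the control average yields a ratio of 2.0.
stim = subject_percentage([(30, 100), (30, 100)])
ratio = cfos_ratio_w_to_wo(stim, [10.0, 20.0])
```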
The reported normalized metric "ratio of c-Fos expression (w to w/o)" is each experimental subject's percentage of c-Fos positive cells divided by the average percentage obtained from their age-matched controls that were placed in the same environment with no visual stimulation. This therefore compares the percent increase in c-Fos positive cells when mice view and respond to visual stimuli in our testing arena, versus when mice wander in the same arena without the specific visual experience assayed. | 2022-05-24T13:23:50.842Z | 2022-05-19T00:00:00.000 | {
"year": 2022,
"sha1": "52597d8fa73ac0de235691511eb10ad2288a8aa3",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2589004222016406/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5bbe8b8c410f99cfc90966bb055f9dc99f1b16d",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
2302877 | pes2o/s2orc | v3-fos-license | Two confirmed cases of severe fever with thrombocytopenia syndrome with pneumonia: implication for a family cluster in East China
Background Severe fever with thrombocytopenia syndrome (SFTS) was first reported in China in 2011. Human-to-human transmission of the virus has occurred occasionally in family clusters. However, pneumonia as an onset syndrome was not common in most SFTS cases. Our aim is to report a family cluster of SFTS with a clinical manifestation of pneumonia in Shanghai. Methods Epidemiologic investigations were conducted when a family cluster of severe fever with thrombocytopenia syndrome virus (SFTSV) infection was identified in Shanghai in June 2016. Samples were collected from two secondary cases and two close contacts with fever. SFTSV was detected by real-time reverse transcription polymerase chain reaction (RT-PCR). Results There were two confirmed SFTS cases and one potential index case. The potential index case became ill on 21 May and died on 31 May. Case A had onset from 4 to 23 June and case B from 8 to 25 June. All three cases experienced pneumonia at the early stage of SFTSV infection. Three (3) out of thirty-two (32) close contacts had symptoms of fever or cough but were SFTSV negative by real-time RT-PCR. According to the epidemiologic investigations, the potential index case had outdoor activities on a nearby hill. A tick bite could have been the source of the SFTSV infection in the potential index case, as ticks were found both in grassland and shrubs on the hill and on mice caught in her house. Both cases A and B had provided bedside care for the potential index case without any protection and had contact with her blood and other body fluids. Conclusion This was a family cluster of SFTSV infection imported from Jiangsu province, in the east of China. We suggest vigilance for atypical SFTSV-infected cases.
reported annually in Anhui, Henan, Hubei, and Shandong Provinces [5]. Zhejiang and Jiangsu provinces are both adjacent to Shanghai. A reported case of SFTS in Shanghai was imported from Anhui province, which was historically the only reported SFTS case in Shanghai [6].
Human-to-human transmission of the virus has occurred occasionally in family clusters. Several family cluster cases were reported previously in China and Korea [7][8][9][10][11][12]. Contact with the index patient's blood was significantly associated with development of SFTS. In systematic studies, the SFTS case fatality rate (CFR) was 12-16%, and the major clinical features were fever, thrombocytopenia, leucopenia, gastrointestinal symptoms and central nervous system manifestations [13,14].
We reported a family cluster from Suzhou, Jiangsu Province of China, which was confirmed in Shanghai. Three members of a family presenting with high fever and pneumonia were admitted, and two of them were found to be SFTSV positive. Pneumonia was an atypical syndrome for SFTS cases. The epidemiological investigation, clinical syndromes and symptoms, and laboratory testing were described, and more evidence of the transmission mechanism was provided to further the understanding of SFTS.
Epidemiological investigation
On June 13, three cases were reported to the Shanghai Municipal Center for Disease Control and Prevention (SCDC) and were interviewed immediately by Shanghai local CDC staff. Basic demographic information, clinical manifestations, epidemiological history and the timeline of the cluster were collected and analyzed. Exposure through outdoor activities, referring to work, living or travel in regions of hills, forests and mountains during the main epidemic season within 2 weeks before onset, was inquired about. Epidemiological investigations were also carried out to obtain information on close contacts, defined as anyone who had contact with blood, fluid, or bloody secretions or excretions of the SFTS patients without any protection. Fourteen-day medical observations among close contacts were initiated on June 14 [15]. An environmental investigation was carried out immediately to identify ticks around the residential areas of the cases. Tick specimens were collected by flagging white cloth over grassland and by picking them from the body surfaces of animals.
Sample collection
The index patient died before the family cluster was identified and therefore there was no sample available for further testing. Two secondary patients and close contacts with fever symptoms during the first week of contact were sampled.
Laboratory analysis
SFTSV nucleic acid was detected in serum specimens of case A (daughter of the potential index case), case B (son of the potential index case), and case C (granddaughter of the potential index case), as well as close contacts with fever and other symptoms (Tables 1 and 2). Real-time reverse transcriptase polymerase chain reaction (RT-PCR) [16] was adopted for testing SFTSV in serum. RNA was extracted from serum with the Total Nucleic Acid Isolation kit (Roche Diagnostics) according to the manufacturer's instructions. SFTS viral segments were amplified with primers and probes (provided by China CDC). Real-time PCR was performed as follows: 50 °C for 30 min, 95 °C for 10 min, followed by 40 cycles of amplification at 95 °C for 15 s and 60 °C for 45 s. The cutoff cycle threshold (Ct) value for a positive sample was 35 cycles; a Ct value less than 35 was judged as positive.
Case A
Case A was a 47-year-old female. On June 4, 2016, she began to feel sick. The next day, she found herself with a high fever of 39.9 °C, coughing, sore throat and malaise. She visited local hospitals A and B in Jiangsu Province. She was then admitted by hospital B and treated with ticarcillin/clavulanate potassium and levofloxacin. No sign of improvement was observed. Laboratory analysis of blood revealed leukopenia (white blood cell count 2.29 × 10⁹/L) and thrombocytopenia (platelet count 97 × 10⁹/L). Occult blood (25+ cells/μL) and albumin (80 mg/L) were found in routine urine testing. In biochemistry testing, lower total protein (61.5 g/L), lower pre-albumin (126 mg/L) and elevated blood sugar were detected. Antibiotics, oseltamivir and insulin were administered to control infection and lower the blood sugar. On June 11, blood testing still showed leukopenia (white blood cell count 2.42 × 10⁹/L) and thrombocytopenia (platelet count 68 × 10⁹/L). Mycobacterium tuberculosis (TB), EB virus, Cox A16 virus, EV71 virus, Chlamydia pneumoniae, syncytial virus, adenovirus, influenza virus and parainfluenza virus were all detected as negative. The case was then transferred to hospital B in Shanghai on the same day. Presenting with coughing and malaise, and with chest computerized tomography (CT) showing "left lung patchy shadow, bilateral small amount of pleural effusion, increased width of the mediastinum" (Fig. 1), case A was admitted to hospital C. Thrombocytopenia (platelet count 81 × 10⁹/L) continued and white cell counts were normal. Rapid testing for influenza A was negative. Alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were elevated at 67.0 U/L and 59.0 U/L, respectively. Ofloxacin capsules, methylprednisolone sodium succinate, pantoprazole, glutamine and Xiyanping (an antiviral herbal medicine) were prescribed. Case A's temperature returned to normal, and cough and muscle soreness were relieved after treatment.
X-ray detection showed increased bronchovascular shadows.
He was also screened for other pathogens, including Mycobacterium TB, EB virus, Cox A16 virus, EV71 virus, Chlamydia pneumoniae, syncytial virus, adenovirus, influenza virus and parainfluenza virus, but all tests were negative. Together with case A, he was transferred to hospital C in Shanghai and was admitted for viral pneumonia and type II diabetes. CT showed two nodular shadows in the left lung and pleural effusion in the right lung (Fig. 1). A swab was collected, and a test for influenza A was negative. Routine blood testing still showed leukopenia (white blood cell count 3 × 10⁹/L) and thrombocytopenia (platelet count 53 × 10⁹/L). Blood gas analysis revealed lowered partial pressure of carbon dioxide (PCO₂, 4.2 kPa) and total carbon dioxide (TCO₂, 22.3 mmol/L). Coagulopathy (activated partial thromboplastin time (APTT) 46.8 s) and elevated creatine phosphate kinase (260 IU/L) were also observed. Whole blood testing was done again on June 13. Leukopenia and thrombocytopenia (platelet count 28 × 10⁹/L) had worsened.
Potential index case
The potential index case was a 72-year-old woman, the mother of cases A and B. Her illness began on May 21, 2016. She was taken to a local clinic by case A on the same day. Blood testing showed leukopenia (white blood cell count 3.83 × 10⁹/L) and elevated blood sugar. She was treated for a viral infection. On May 23, she had worsened symptoms of fever (39 °C), bleeding gums, stomachache, diarrhea and malaise. Again, case A took her to a local community health center for treatment. Blood testing showed leucopenia (white blood cell count 2.38 × 10⁹/L) and thrombocytopenia (platelet count 88 × 10⁹/L). Dermal ecchymosis appeared on the index case's chest and upper lumbar region. On May 25, she felt nauseated and vomited. She visited hospital D in Jiangsu province. Blood testing showed leukopenia (white blood cell count 1.83 × 10⁹/L), thrombocytopenia (platelet count 32 × 10⁹/L), elevated liver-associated enzyme levels (AST 805.4 U/L; ALT 220.0 U/L) and coagulopathy (prothrombin time 13.7 s and APTT 64.1 s). Urine testing showed abnormal sugar (++), protein (++) and occult blood (+++). Heteropathy was applied. However, she got worse and began convulsing. She died of multiple-organ failure on May 28. According to the guideline for prevention and treatment of SFTS [17], she was classified as a probable case of SFTS.
Epidemiological findings
The potential index patient was sick from May 21 and died on May 28. Case A took care of the index patient during all 8 days. Case B visited the index case on May 23 and 27. On May 27, both cases A and B found bleeding from the mouth, nostrils and ears of the potential index case. After the death of the potential index case, cases A and B cleaned the body and directly touched the blood of the potential index case without any protection. The potential index patient lived in a village near Tai Lake in Jiangsu Province, in southeastern China. The village lies at the foot of a hill, and she used to climb the hill every day, as she had planted some vegetables on it. It is unclear whether the patient was bitten by ticks. However, investigation of the patient's surroundings showed that ticks could be found in the village and in other places on the hill that the patient came into contact with. Both cases A and B had their own houses and denied any history of tick bite or outdoor activities within 2 weeks before illness onset (Fig. 2).
Altogether, 19 ticks were caught by flagging on the hill and grassland. Eight rodents were caught on the hill, around the village, or indoors around the residence of the potential index case. Ticks were found on the body surfaces of the rodents. Three out of ten dogs were found to be infested with ticks, and the tick index was 0.4.
Close contacts
A total of 32 close contacts, including case C, were identified in this family cluster. They were mostly relatives from the family. Three close contacts became ill. One was case C, a 30-year-old female, the daughter of case B, who became ill on June 1, 2016 with symptoms of coughing and a slight fever. She took some self-prescribed drugs but could not remember their names. On June 11, she accompanied her father (case B) and her aunt (case A) to hospital C in Shanghai. She had a fever of 37.8 °C. Blood testing was normal at the time of admission. Rapid testing for influenza A was negative. She was admitted to hospital C and treated with antiviral medicine. She was discharged on June 14 when serum detection of SFTSV was negative. Another relative who became sick was the husband of the potential index case's sister. He developed a fever on June 8, 2016 and recovered soon without any other symptoms. The third was an undertaker who had moved and cleaned the body of the potential index case; he had wiped the blood from the mouth of the potential index case. Other close contacts had no symptoms during the medical observation period after contact.
Laboratory testing
On June 13, serum samples were collected from cases A and B. On June 14, the Ct values for cases A and B were 32 and 29, respectively, in the real-time PCR assay, and both were positive for SFTSV. Sera of close contacts with fever or other symptoms, including case C, tested negative by real-time RT-PCR.
Discussion
We presented herein a family cluster of SFTSV infection imported from Jiangsu province. Three family members successively became ill. The potential index case died, and two other proven SFTS cases developed secondary infections following exposure to the potential index case. One secondary case had provided bedside care for the potential index case, and both secondary cases had contact with blood from the potential index case. As early as 2006, two clusters were suspected to be infected with a novel virus in Anhui province, and patients from one cluster were confirmed to be SFTSV positive in 2012 [10]. Another family cluster was retrospectively identified in a hilly area about 110 km south of Nanjing in eastern China in 2007 [8]. In several family cluster reports of SFTSV in China and Korea, evidence of personal contact, especially blood contact, was demonstrated [7][8][9][10][11][12]18]. Most index patients reported in family clusters were infected through tick bites. Secondary patients in the family possibly became sick by contacting blood infected with SFTSV. Genetic susceptibility was proposed to be one of the determinants of susceptibility among family members [19], which might explain why family members were more easily infected. However, person-to-person transmission was excluded in two family clusters reported in Zhejiang province [20]. In this family cluster, given that the two secondary cases had no history of tick bite or outdoor activities on the hill, and that cases A and B became sick 4 and 7 days, respectively, after the death of the potential index case, person-to-person transmission was suggested.
Lab-confirmed cases had been reported in 16 provinces in China by 2014 [4], and more areas have been verified as natural foci of SFTSV. Ticks can serve as a vector and reservoir of SFTSV, and mice play a role in SFTSV transmission [21]. Ticks fed on SFTSV-infected mice could acquire the virus and transmit it to other developmental stages of ticks, and SFTSV-infected ticks could transmit the virus to mice during feeding. SFTSV genomic RNAs were identified in Apodemus agrarius in Zhejiang province [22]. In this family cluster, the residence of the index patient was located in a hilly area. Free ticks and mice were found in the surrounding environment, and ticks were found feeding on mice. We inferred that SFTSV from ticks on the hill most likely caused the potential index case's infection through tick bite. However, more evidence will be needed to prove whether SFTSV is endemic in the region near Tai Lake.
The secondary infections represented person-to-person transmission. The two secondary cases had taken care of the potential index case and had close contact with her. First, the two secondary cases had a history of coming into contact with the potential index case's blood and of handling or cleaning her corpse, which provides evidence of person-to-person transmission through blood contact. Second, in this family cluster, all three cases developed severe pneumonia at the early stage of infection, which was entirely different from most other cases of SFTS. We propose that aerosol person-to-person transmission probably existed in this family cluster. According to previous research, typical clinical and laboratory manifestations of SFTS include fever, gastrointestinal symptoms, myalgia, chills, thrombocytopenia and leukopenia [13,14,23,24]. As an initial group of serious symptoms, pneumonia was rarely observed in SFTSV infection. Confounded with influenza or avian influenza, it was difficult to diagnose, and atypical cases were easily overlooked. However, studies have found that exposure to respiratory secretions led to nosocomial transmission of SFTSV among healthcare workers [25], and positive results were detected in tracheal aspirate and gastric aspirate in an SFTS patient [26]. A cluster report in 2015 showed that SFTSV could be transmitted from person to person by direct contact and/or aerosol transmission [27]. Our study gives more evidence for aerosol transmission in clusters. In addition, pneumonia may be an early onset symptom, which should be explored in future research. One case of secondary asymptomatic infection was found in a cluster in 2006 in Liaoning province, and another asymptomatic case was reported in 2015 in Zhejiang province [28]. More integrated and sensitive clinical characteristics should be included in SFTS surveillance to identify possible SFTS cases.
Conclusion
A family cluster of SFTS was confirmed in Shanghai. Two family members were infected with SFTSV, most likely from one clinical SFTS case, who was the potential index case for this family cluster. More surveillance systems should be established, within China and in the region, to identify SFTS cases with atypical symptoms. | 2017-08-08T19:42:06.590Z | 2017-08-03T00:00:00.000 | {
"year": 2017,
"sha1": "52fb5955549f3c955c35e8569e4d8817d5c263fe",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-017-2645-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52fb5955549f3c955c35e8569e4d8817d5c263fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119093164 | pes2o/s2orc | v3-fos-license | Searching basic units in memory traces: associative memory cells
The acquisition of associated signals is commonly seen in life. The integrative storage of these exogenous and endogenous signals is essential for cognition, emotion and behaviors. In terms of basic units of memory traces or engrams, associative memory cells are recruited in the brain during learning, cognition and emotional reactions. The recruitment and refinement of associative memory cells facilitate the retrieval of memory-relevant events and the learning of reorganized unitary signals that have been acquired. The recruitment of associative memory cells is fulfilled by generating mutual synapse innervations among them in coactivated brain regions. Their axons innervate downstream neurons convergently and divergently to recruit secondary associative memory cells. Mutual synapse innervations among associative memory cells confer the integrative storage and reciprocal retrieval of associated signals. Their convergent synapse innervations to secondary associative memory cells endorse integrative cognition. Their divergent innervations to secondary associative memory cells grant multiple applications of associated signals. Associative memory cells in memory traces are defined to be nerve cells that are able to encode multiple learned signals and receive synapse innervations carrying these signals. An impairment in the recruitment and refinement of associative memory cells will lead to the memory deficit associated with neurological diseases and psychological disorders. This review presents a comprehensive diagram for the recruitment and refinement of associative memory cells for memory-relevant events in a lifetime.
Introduction
Associative learning stands for a process in which multiple exogenous signals, such as information, experiences and knowledge, are jointly acquired by sensory systems. Associative memory is defined as the integrative storage of these associated signals in the brain, which is indicated by memory retrievals (recall and representation) on the basis of speech, writing, gesture, countenance and emotional reactions. Associative learning and memory is a very common approach to signal storage for cognition in life 1-6, since the acquisition of new unitary signals and the reorganized learning of previously acquired unitary signals are fulfilled in an integrative manner 7. In learning processes, the associated exogenous signals come from sensory organs and are stored in sensory cortices through cross-modal and intramodal manners 7,8. The coactivation of sensory cortical neurons leads them to form mutual axon projections and synapse innervations, whereby these neurons are recruited as primary associative memory cells for the integrative storage and reciprocal retrieval of multiple exogenous signals 9-11. In the meantime, these primary associative memory cells project their axons and make synapse innervations convergently onto their downstream neurons in certain brain areas for the integrative storage of endogenous associated signals that are essential for logical reasoning, associative thinking and other integrative cognitions. Their axons also divergently innervate neurons in various brain areas that are relevant to cognition, emotion and behaviors, for the storage of endogenous signals in many places and participation in multiple memory-relevant events 7,12. As cognitive processes and emotional reactions can be recalled, the neurons that memorize endogenous signals are named secondary associative memory cells 7,8.
In other words, when the neurons encoding one of the associated signals are activated, they can attract axon projections and synapse innervations from the coactivated neurons encoding another of the associated signals. After associative memory cells are recruited from the coactivated neurons by receiving innervation from multiple synapse inputs, their subsequent activities change their excitability and synapse functions. The recruitment and refinement of the population of associative memory cells follow a principle of activity together, connection together, strengthening together and coordination together 7,8. Therefore, associative memory cells are nerve cells that encode multiple associatively acquired signals and receive axon projections and synapse innervations carrying these signals. Currently, two kinds of associative memory cells have been detected experimentally, marking progress in the search for the basic units of memory traces. Comprehensive cellular architectures underlying associative memory in different periods of the lifespan are yet to be revealed to better understand memory-related physiology and psychology in the brain. This review focuses on current advances in associative memory cells as basic units of memory traces or engrams, expanding on our previous review 7.
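The principle of "activity together, connection together, strengthening together" can be illustrated with a minimal toy simulation (a hypothetical sketch for intuition only, not a model from the cited studies; the neuron groups, learning rate and thresholds are arbitrary assumptions). Two coactive groups encoding two associated signals strengthen their cross-group weights during paired training, after which a cue to one group drives the other, a crude analogue of reciprocal retrieval.

```python
import numpy as np

# Hypothetical toy model: 6 neurons, two groups encoding two associated
# signals (e.g., whisker and odor stimulations). All values are arbitrary.
n = 6
whisker_cells = [0, 1, 2]
odor_cells = [3, 4, 5]
w = np.zeros((n, n))  # synaptic weight matrix, initially unconnected

lr = 0.1
for _ in range(20):                  # paired (associative) training trials
    act = np.zeros(n)
    act[whisker_cells] = 1.0         # both signal groups are coactive
    act[odor_cells] = 1.0
    w += lr * np.outer(act, act)     # "activity together, connection together"
np.fill_diagonal(w, 0.0)             # no self-connections

# After training, stimulating one group recruits the other via the new
# cross-group connections: a crude analogue of reciprocal retrieval.
cue = np.zeros(n)
cue[odor_cells] = 1.0                # present the odor cue alone
drive = w @ cue
recalled = drive[whisker_cells] > 1.0  # whisker cells now receive strong input
print(recalled.all())                # True
```

Before training the cross-group weights are zero, so the cue alone would drive nothing; the reciprocal direction (whisker cue recalling odor cells) works symmetrically because the Hebbian outer-product update is symmetric.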
Associative learning and memory
Learning is defined as the acquisition of new information, knowledge and experiences, which may be new unitary signals or reorganizations of previously learned unitary signals.
Signal acquisition can be classified into associative and nonassociative styles 1,6. Associative learning stands for the joint acquisition of multiple signals, which can be sensory signals or sensory signals plus rewards after behavioral operations are achieved. Their joint acquisition is characterized by the association of new signals with an innate signal or a formerly learned signal, as well as the reorganized association of previously learned unitary signals, such as the reorganization of letters into different words and phrases 7. Associative memory for these signals has emerged if the associated signals can be retrieved reciprocally by cues or recalled by automatic conversion among sensory modalities. Two hallmarks of associative memory are the integrative storage and distinguishable retrieval of associated signals 11. On the other hand, nonassociative learning is the acquisition of a given sensory signal, in which repeated stimulations lead to habituation or sensitization to this sensory signal 1. Non-associative learning is thought not to involve acquiring new knowledge and experiences, but rather repetitive activation of a sensory system for its upregulation or downregulation through the sensitization or desensitization of sensory receptors and cortical neurons. In this regard, it may fall under the term "review" rather than learning. Therefore, the acquisition and storage of almost all new signals require them to be associated with a signal that has already been stored in the brain to facilitate their memory 7.
In associative learning, multiple featured signals in an object or environment are detected by different modalities, from sensory receptors to cerebral cortices. These cross-modal signals are integrated for associative storage. For instance, a fruit is detected by the olfactory system for its aromatic odor, the visual system for its shape and color, the taste system for its sweetness, the auditory system for its name and the somatosensory system for its surface features. After they are jointly memorized, one signal induces the recall of its associated signals, or the other way around. The recall can even occur by automatic signal conversion among different modalities, e.g., image signals are recalled by verbal signals with no additional cues. In addition, multiple signals with identical features can be associatively detected through a single modality, i.e., one type of sensory receptor and its projected cortex. These signals are associatively acquired in an intramodal manner and primarily stored in one sensory cortex. Therefore, memory traces imprint the joint storage and distinguishable retrieval of multiple associated signals. Primary associative memory cells that encode multiple signals, based on synapses from innate inputs and new innervations from coactivated brain areas, have been detected for the integrative storage of associated signals 9-11,13-17.
In addition, logical reasoning, associative thinking, computation and integrative imagination based on the exogenous associated signals stored in primary associative memory cells of sensory cortices may lead to secondary integrations of those signals for their storage and representation, i.e., secondary associative memory that is essential for higher-order cognition, emotions and behaviors under conscious conditions 7,8. Although this memory presumably occurs in the prefrontal cortex, hippocampus and amygdala [18][19][20][21][22][23][24][25][26][27][28][29][30], these studies still do not reveal whether the memory in their data is secondary to information storage in primary associative memory cells of sensory cortices. Currently, secondary associative memory cells have been detected in the motor cortex, the prefrontal cortex and the hippocampus, which lie downstream of sensory cortices and receive synapse innervations from primary associative memory cells 12,31,32.
Various memory patterns are classified in psychology, such as explicit versus implicit memory, declarative versus nondeclarative memory and episodic versus semantic memory in terms of memory content, as well as sensory versus short-term or long-term memory in terms of temporal features 6,33. Declarative memory, i.e., explicit memory, refers to stored information that can be stated consciously, including episodic memory (specific processes and their contexts) and semantic memory (generalized knowledge and concepts). Non-declarative memory, or implicit memory, denotes the operation of various skills and procedures without the need for consciousness. In fact, there is no clear borderline between explicit and implicit memory: the procedures and skills operated in implicit memory can be consciously stated, and specific processes and their contexts can be executed effortlessly after repetitive practice. In the field of neuroscience, the classification of memory formation in the brain is based on combining the location of information storage with the memory of featured signals, for instance, spatial memory in the hippocampus, emotional memory in the amygdala, perceptual memory in sensory cortices and prospective, attentive and working memory in the frontal cortex 6. Moreover, it has been suggested that memory formation be classified based on cellular mechanisms, such as the different types of associative memory cells in the neural circuits for memory, i.e., memory traces or engrams, and the sources of memorized signals from cross-modal versus intramodal sensory systems or exogenous versus endogenous resources 7,8.
Associative memory for exogenous signals from external environments refers to the integration and storage of associated signals input from cross-modal or intramodal sensory modalities. Associative memory for endogenous signals refers to the integration and storage of associated signals that originate from sensory cortices and are regenerated during logical reasoning and associative thinking in cognition- and emotion-relevant brain areas. Intramodal associative memory relates to the integration and storage of associated signals input from a single modality, such as one sensory modality or one brain area involved in cognition, emotion or behavior. Cross-modal associative memory is the integration and storage of associated signals that come from different sensory modalities or from brain regions related to cognition, emotion or behavior 7,8.
To better understand these memory patterns and their corresponding mechanisms, we should identify the basic units in memory traces or engrams that conduct the integration and storage of associated signals, constitute the foundation of cognition (logical reasoning, associative thinking, computation, imagination and so on), achieve the integration and storage of endogenous signals generated from associative thinking and logical reasoning, and control the future presentation of stored associative signals. How memory is formed in different modalities and encoded under different states of consciousness, attention and psychological motion remains to be elucidated. Therefore, a comprehensive view of the cellular mechanisms underlying associative memory should be established, an effort to see the individual trees as well as the forest.
Highlights of memory-relevant events in the search for memory cells
The mechanisms underlying learning and memory have been systematically studied for more than one century 6,34-36. Many observations and concepts have proved solid; however, inconsistent data and controversial interpretations still obscure a clear view of the underlying cellular architectures and molecular profiles. A major reason for this vagueness may be the lack of reliable standards to uncover the memory cells in neural circuits (the basic units of memory traces or engrams) that encode specific stored signals, to identify the molecules residing in memory cells specifically for their recruitment, and to validate the behaviors specifically initiated by memory cells. In order to set up reliable criteria for judging whether recruited memory cell ensembles or memory traces are correlated with memory formation and retrieval, the changes at the levels of molecules, neurons and behaviors during learning and memory should be precisely estimated.
In the study of memory retrieval, the stimulus-induced or cue-induced expression of specific behaviors that were presented during learning events and memory retrievals is often used to denote the persistent presence of memory traces. This strategy may have the following shortfalls. Behaviors, perceptions and cognitions develop quickly postnatally 37,38. Postnatal developments in perception and cognition versus behavior are not parallel in their patterns and contents 39-41. The number of arm/body language patterns is much lower than the number of memory contents and the number of verbal language patterns. Although the number of memory contents matches the number of verbal language patterns, one arm/body language pattern may represent several memory contents. For instance, the thumbs-up gesture usually represents memory contents relevant to all positive events. Moreover, the patterns and varieties of sensory input signals, memory contents, cognitive processes and emotional reactions are much richer than the behavioral patterns presented by common output pathways, i.e., all of these signals, contents and processes are expressed through a limited number of behavioral output patterns and pathways. For instance, the "OK" gesture is used to express appropriate sensory stimulation, good perception, successful memory retrieval and other good cognitions. In other words, behaviors may not well represent the retrieval of specific memory contents, except for verbal language. This limitation of behavioral presentation of memory and cognition may be an issue in behavior-based studies of memory retrieval in animals. For instance, body freezing and involuntary/voluntary shaking, used to signify fear memory, can be induced by extreme fear, anxiety, emotional reactions (e.g., anger and fighting) and physiological processes (e.g., hypothermia and hypoglycemia).
Furthermore, the brain in mature human beings and animals is highly wired, and its different regions are interconnected 42. Stimulating potential memory traces by electrical, optogenetic or chemogenetic approaches at one location of the brain may indirectly activate other areas connected with this location, inducing memory-related behaviors indirectly or behaviors merely similar to memory retrieval, i.e., the replay of "memory-related behaviors" may not be directly or realistically controlled by memory traces.
The learning process generally includes the associative acquisition of simple or unitary signals and the reorganized acquisition of these unitary signals. At a young age, language learning covers letters and words, and knowledge learning is mainly definitions and concepts. After the primary learning of unitary signals, advanced learning moves forward to more complicated concepts through the reorganization of unitary signals, i.e., the acquisition of sentences and articles by associating letters and words in language, as well as the acquisition of principles and theories by associating definitions and concepts in knowledge. Unlike verbal presentation, arm/body behaviors are not obviously upgraded to more complicated versions for expressing advanced language and knowledge, such that similar behaviors in different postnatal periods may represent different contents and knowledge. In other words, memory retrievals represented by similar arm/body behaviors likely include different contents. Taken together, we assume that the retrieval of stored signals through the replay of similar behaviors may be changeable in content spatially and temporally, i.e., behavioral replays are unreliable, except for the reoccurrence of cue-induced behaviors.
At the cellular level, the basic units in the brain are neurons and glial cells. It is important to figure out the new features of those neurons that have been recruited as memory cells storing specific signals, in order to map their working principles during memory formation and retrieval. In addition to their conventional natures, such as innate synapse inputs, synapse transmission, neuronal excitability and excitatory outputs, memory cells theoretically encode newly learned signals and receive new synapse inputs that carry these signals. As the most common style of learning and memory is associative in nature, i.e., the integrative storage of associated signals, the recruited associative memory cells should encode both innate and new signals, as well as receive new axon projections and synapse innervations in addition to the innate input. In this regard, the detection of new synapse innervations and multiple signal encoding by recording approaches (cell electrophysiology and imaging) is critically important for reporting the finding of memory traces. Moreover, learning and memory involve the memory of unitary signals in the young and the memory of complicated signals, i.e., reorganized unitary signals, in later developmental periods. Individual associative memory cells presumably encode multiple unitary signals, and their assemblies work together to store unitary signals in different reorganizations. In other words, ensembles of associative memory cells store advanced knowledge contents in specific spatial and temporal patterns 7.
In the principles of cell physiology, neuronal excitation is driven by synapse inputs and neuronal excitability is controlled by the spiking threshold 43,44. The patterns and frequencies of neuronal spikes are influenced by synaptic transmission and the spiking threshold, i.e., there is a proportional correlation between the intensity of neuronal activities and the strength of synapse inputs, but not the nature of the input contents. Similarly, the activity patterns and spiking frequencies of memory cells in memory traces denote their activity strength but not memory contents, such that the replay of certain neuronal activity patterns, such as spontaneous sharp-wave ripples, indicates the reemergence of neuronal activity strength without necessarily implying the memory features and contents being encoded. Cue-induced neuronal activity may reflect the retrieval of memory contents. Learning cues should be used to track the distribution of associative memory cells in the different grades of memory traces.
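The proportional relation between synaptic input strength and spiking frequency noted above can be sketched with a minimal leaky integrate-and-fire neuron (an illustrative toy model; the threshold, leak and input values are arbitrary assumptions, not parameters from the cited work). Stronger input drive yields more spikes, regardless of what "content" the input carries.

```python
# Minimal leaky integrate-and-fire sketch: spike count rises with
# synaptic input strength, not with input "content".
def lif_spike_count(input_current, threshold=1.0, leak=0.1,
                    steps=1000, dt=0.01):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-leak * v + input_current)  # leaky membrane integration
        if v >= threshold:                     # spiking threshold reached
            spikes += 1
            v = 0.0                            # reset after each spike
    return spikes

weak, strong = lif_spike_count(0.5), lif_spike_count(2.0)
print(weak < strong)   # stronger synaptic drive -> higher firing rate: True
```

The same neuron driven by two different but equally strong inputs would fire identically, which is the point of the paragraph above: firing frequency encodes activity strength, not memory content.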
In order to figure out the features of memory cells and their working principles during memory retrieval, we expect to reveal the featured molecules in associative memory cells so as to label these memory cells. Based on the analyses above, the formation of memory cells recruited from neurons involves axon projection and synapse innervation, two processes that are not specific to memory cells. In this regard, the elucidation of molecular markers for memory cells is challenging. Recently, immediate early genes have been used to label memory cells 45,46, which depends on the proposal that activated memory cells express immediate early genes 47,48. Unfortunately, the expression of immediate early genes is proportional to the strength of neuronal activities, which is not specific to memory cells. In this regard, neurons with the combined features of immediate early gene labeling, innervation by new synapses and the encoding of new/innate signals are better termed engrams.
In summary, neurons that meet all of the following criteria, namely cue-induced behaviors, cue-induced replay of neuronal activities, new synapse innervations and active molecule labeling, can be defined as memory cells. Strategies to identify memory traces that meet these criteria are discussed below.
Strategies used to search basic units of memory traces
Two issues are important for clearly addressing the cellular and molecular mechanisms underlying associative memory formation and retrieval, i.e., animal models and strategies for searching cell assemblies in memory traces or engrams. Based on studies of learning and memory over centuries, we summarize the animal models and strategies that have been used.
As the associative learning of multiple signals is the most common approach of signal acquisition in life, the mechanisms underlying the integrative storage of these associated signals should be addressed using appropriate animal models featured by association. A few animal models have been used in the study of associative learning and memory, such as classical conditioning, which includes Pavlov's conditioned reflex, eyeblink conditioning and fear conditioning in rodents and the withdrawal reflex in Aplysia, as well as operant conditioning, which includes various types of reward memory (e.g., operation plus reward and place plus reward) in mammals 45,49-63. In these models, one stimulus is unconditioned, whereas another stimulus is conditioned. However, in human beings, the memory of associated signals is shown by one signal inducing the recall of its associated signals, or the other way around. This reciprocal retrieval of associated signals constitutes the basis of associative thinking, logical reasoning, computation and imagination in forward and backward manners. It seems to us that these animal conditioning models do not signify whether air-puffing to the cornea or electric shocks to the feet can induce the recall of the sound signal after the onset of eyeblink conditioning or fear conditioning. That is, these conditioning models may not be ideal for studying associative memory. Moreover, electrical shocks may activate the whole brain by spreading electrical current through the body, so that the association is not region-specific in the brain 7,8. Compared with the electrical stimulations used in the study of fear memory, physical and psychological stresses in social interactions are closer to real-life situations 64-66.
Recently, an animal model has been introduced to study associative memory, in which the pairing of whisker and olfactory stimulations in mice leads to odorant-induced whisker motion and whisker-induced olfactory responses, a typical example of the reciprocal retrieval of associated signals 9-11,15,16,67.
In terms of strategies to study associative learning and memory, theoretical analyses and experimentation in vivo are used 68-72. Theorists in the field of learning and memory focus on drawing the potential units for information storage in the brain, such as memory traces, engrams and cell assemblies. Experimenters make efforts to figure out the molecular substrates and cellular architectures for memory formation. In order to prove causal relationships between newly formed neuronal substrates and memory behaviors, three criteria should be met. The emergence of new substrates and architectures parallels memory formation. The downregulation of newly formed substrates and architectures substantially reduces memory formation, through surgical ablation of brain tissues, pharmacological blockade of neuronal activities and genetic knockout/mutagenesis of molecules in nerve cells or synapses. The upregulation of these newly emerged substrates and architectures significantly facilitates memory formation, through pharmacological, electrical or optogenetic stimulation of nerve cells and gene overexpression in neurons and synapses 7-9,73.
In addition to the term "memory traces" for information storage coined by the ancient Greeks, the theoretical terms "engram and ecphory" were suggested by Richard Semon 74, a renowned theorist in the field of learning and memory. Engram and ecphory correspond to memory traces and memory retrievals, respectively 71,75. In addition, his view on memory retrieval holds that the interaction between a stored engram and retrieval cues may generate new engrams. As long as an engram-awakening stimulus is similar to the original stimulus, this incomplete retrieval cue is sufficient to retrieve the stored engram. Awakening the originally stored engram may generate a new engram related to this event. The old retrieved engrams and new engrams become associated through contiguity to strengthen the original memory. Moreover, the simultaneous retrieval of multiple engrams with similar contents and their subsequent association, i.e., resonance among engrams, would provide the basis for complicated cognitive processes, such as abstraction, generalization and knowledge formation 76. This theory may be the first to hypothesize that the awakening of engrams is dynamic and use-dependent. Although the engram termed by Semon lacked experimental evidence during that period, his frameworks about engrams are consistent with the features of memory activities. For instance, more representations lead to deeper memory, and the repeated simultaneous recall of similar memory contents induces them to be summarized, conceptualized and generalized. In brief, his work has led to the consensus on memory traces or engrams as the basis of information storage.
Donald Hebb, another well-known theorist, described memory traces or engrams as cell assemblies underlying memory behaviors. Based on his and Penfield's observation that the destruction of large amounts of cerebral cortex in human beings produces little effect on memory 77,78, as well as Lashley's experiments showing that the ablation of widespread cortices in animals does not induce parallel changes in memory behaviors 79,80, he proposed the term "cell assemblies" as the widely distributed neural substrates for memory. Each cell assembly is a group of interconnected cells whose interconnections are formed during their simultaneous activities 81,82. Since these cells are interconnected, the activity in this circuit is maintained briefly after the event, i.e., short-term memory. Activities recurring for a sufficient duration within this cell assembly can induce growth or metabolic changes that strengthen the interconnections among assembly cells, such that short-term memory is converted into longer-term memory 82. The strengthening of connections between presynaptic and postsynaptic nerve cells during their simultaneous activities confers on these neurons the property of firing together and strengthening together, which has been hypothesized as the basis of neuronal connection. The strengthening of neuronal connections has been shown in the long-term potentiation of synaptic transmission 83,84. The high number of interconnections among cells may allow the entire assembly to be activated when a subset of cells is activated, through the process of pattern completion that induces memory retrieval. As Hebb's cell assemblies are widely distributed within and across brain areas, the destruction of a small proportion of cells may not lead to catastrophic loss of memory traces, a property of graceful network degradation, which may account for Lashley's experimental results.
In summary, Hebb's theory spans multiple spatial scales, from integrated synaptic strengthening (a microscale level) to cell assembly formation (a mesoscale level).
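The pattern completion described for Hebb's cell assemblies can be sketched with a minimal Hopfield-style network (an illustrative toy model, not taken from the reviewed experiments; the stored pattern and update count are arbitrary assumptions). Hebbian outer-product weights store one activity pattern, and activating only a subset of the assembly recovers the whole pattern through recurrent updates.

```python
import numpy as np

# Toy Hopfield-style sketch of pattern completion in a cell assembly.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # stored assembly pattern
W = np.outer(pattern, pattern).astype(float)      # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                          # no self-connections

# Partial cue: only the first half of the assembly is activated.
cue = pattern.astype(float).copy()
cue[4:] = 0.0

state = cue
for _ in range(3):                   # synchronous recurrent updates
    state = np.sign(W @ state)       # each cell follows its net recurrent input
print(np.array_equal(state, pattern))  # True: the full pattern is completed
```

Because the recurrent weights were strengthened during the "simultaneous activity" that stored the pattern, the partial cue is enough to reactivate the entire assembly, which is the mechanism Hebb's framework offers for cue-induced memory retrieval.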
The computational simulation of neuronal substrates for learning and memory has been used to deliver theoretical models of memory traces, in which the data for modeling are based on experimental results. In the study of neuronal and synaptic architectures for memory traces and memory-related behaviors, there are clear indications of the involvement of neuronal ensembles and synaptic plasticity in the processes of learning and memory, despite a lack of evidence that these synapses, neurons and their plasticity are specifically correlated with memory formation 76,85-89.
In summary, the study of memory formation through theoretical models has generated great frameworks that provide useful guidelines for addressing the cellular mechanisms underlying learning and memory. However, these hypotheses about memory traces (or engrams) and cell assemblies have not offered insight into the integrative storage of associated signals and need to be proved by experimentation. In experimental studies of learning and memory, three strategies can be used to confirm causal relationships between memory traces (cell assemblies) and memory-related behaviors. Memory traces should be detected during memory formation and cue-induced memory retrievals. The downregulation of memory cell assemblies should restrain memory-relevant behaviors. The upregulation of memory cell assemblies should facilitate memory-relevant behaviors 7-9,73. There are two usual methods to track memory traces (engrams) or cell assemblies, i.e., the detection of memory cells during learning and memory and the activation of memory cells to retrieve memory-relevant behaviors. The detection of memory cells involves observing their responses to memory cues by electrophysiological recording and two-photon calcium imaging and localizing their distribution by AAV-carried fluorescent neural tracing after memory formation. The activation of memory cells can be done by electrical, pharmacological, optogenetic or chemogenetic stimulation to induce the emergence of memory-relevant behaviors 7. It is noteworthy that memory traces are widely distributed in the brain and that brain areas are interconnected. These stimulations may lead to the antegrade and retrograde activation of neural pathways, and the indirect activation of memory traces makes it impossible to localize primary versus secondary allocations for memory formation.
Neuronal activities are indicated by the electrical signals generated on the cell membrane and the calcium signals raised in cells, such that the recording of electrical signals and the imaging of intracellular calcium dynamics can be applied to track cell assemblies relevant to memory formation and retrieval, i.e., the functional detection of memory traces 90. Electrophysiological recordings by electrodes or electrode arrays have been used to detect the replays of neurons in the hippocampus, visual/auditory cortices, the amygdala and ventral tegmental areas under different conditions, such as retrieval cues, wakefulness and sleep 30,91-104. For example, coordinated interactions from the hippocampus to the prefrontal cortex and associative cortices, including parietal and midline areas but not primary areas, are involved in spatial memory tasks 105,106. The cortical-hippocampal-cortical circuit is critical for memory consolidation 107. Hippocampal assemblies trigger neuronal activities in the ventral striatum during the replay of place-reward information 55. The acquisition of associative memory in the hippocampus initiates a gradual-to-stable encoding process in the medial prefrontal cortex without continued training 29. Emotional memory is reactivated in the hippocampus-amygdala system during the sleeping state 108. These data from functional studies are supported by anatomical evidence within the hippocampus, the prefrontal cortex and the thalamic nucleus 109.
Recently, two-photon cellular calcium imaging in vivo [110][111][112] has been used to detect memory traces or memory cell assemblies in cerebral cortices. For instance, the gradual emergence of neuronal activity relevant to spatial memory in the retrosplenial cortex, which is the major recipient of hippocampal output, depends on an intact hippocampus. Indirect connections between the retrosplenial cortex and the hippocampus indicate polysynaptic hippocampal influence within the neocortex, i.e., widely distributed memory traces in the hippocampus and cerebral cortices 113. Repetitive motor learning induces the formation of dendritic spines in vivo 114. Associative memory cells that respond to retrieval cues have been detected in primary sensory cortices and the prefrontal cortex 9,11,12,32. Thus, memory traces or cell assemblies can be tracked by electrophysiological and imaging recordings based on their activities in response to retrieval cues and during memory-relevant behaviors.
Importantly, the data above support the functional presence of memory traces or cell assemblies, which is better validated by morphological traces, i.e., their morphology and distribution should be quantified and localized. Two methods can be used for this purpose: tracing their synapse innervations from the axon inputs that carry the learned signals, and labeling these cell assemblies with molecules specifically relevant to memory. In the study of associative memory cells, fluorescent expression mediated by adeno-associated virus (AAV) vectors in neurons and their axons has been achieved by injecting AAVs tagged with fluorescent protein genes into the source side of predicted memory traces and by detecting axon terminals and their targets on associative memory cells 9,13,15. These associative memory cells receive both innate and new synapse innervations. It is noteworthy that the combination of tracing new synapse contacts and labeling memory assemblies with memory-relevant molecules would be an ideal way to denote memory cell assemblies.
Neuronal activities may lead to changes in certain molecules 115-117, so that learning and memory presumably recruit neurons as memory cells through molecular substrates. The labeling of memory cells by these molecules can be applied to indicate the allocation of memory cell assemblies, based on the facts that the stimulation of neurons couples with the expression of immediate early genes 48 and that their expression in dendrites is regulated by synapse activity 47. For instance, the immediate early gene Arc is specifically linked to the neural encoding process 118. Immediate early genes are widely expressed in the brain after fear memory, and the number of labeled cells is positively correlated with fear memory behaviors 45,46. It seems that there is an association between the expression of immediate early genes and the activity strength of memory cells. The detection of immediate early gene expression is usually used to label engrams, so that their morphologies and functions can be studied 112,119-122. However, the upregulated expression of immediate early genes is also associated with neuronal hyperactivity, such as seizure discharges in epilepsy [123][124][125][126] and neuronal toxicity in brain ischemia 127-129. In these regards, immediate early genes may be suitable for identifying all highly active neurons. Genes and proteins specifically linked to memory cell assemblies and their memory contents remain to be explored 68,130.
As the retrieval of memory-specific behavior is presumably based on memory traces formed during memory formation, the activation of memory cell assemblies to induce the emergence of memory-relevant behaviors should be included in the study of memory formation and retrieval. This strategy is based on the positive correlation between memory cell assemblies and memory formation/retrieval, i.e., if certain neurons are memory cells that store specific memory content, the activation of these cells by electrical, pharmacological or optogenetic methods should induce the representation of memory-relevant behaviors. Electrical stimulation of memory traces in the brain was first applied by Penfield, who aimed to localize the source of epilepsy 131 . Stimulation of the temporal lobe in wakeful epileptic patients induced memory recall, i.e., engrams were detected in this cortical area 132,133 . Pharmacological stimulation has been used to activate the serotonin or norepinephrine system and has successfully demonstrated the facilitation of memory formation by these transmitters 134,135 . Recently, optogenetic stimulation has been used to activate memory engrams that mediate fear memory and false memory 22,136-139 . Therefore, these results support the positive correlation between memory traces and memory-related behaviors. It is noteworthy that the direct optogenetic activation of neurons without increases in synaptic strength and dendritic spine density leads to memory retrieval 140 , implying nonspecific neuron activation. As noted above, the wide distribution of memory traces in the brain and the interconnections among brain areas may result in such stimulations activating neural pathways both anterogradely and retrogradely. The indirect activation of memory traces is unable to localize primary versus secondary allocations for memory formation.
Similarly, if behaviors related to specific memory content depend on memory traces formed during memory formation, the downregulation of molecules critical for memory cell assemblies by pharmacological blockade, gene knockout or optogenetic methods should prevent or attenuate the formation and emergence of memory-relevant behaviors; this approach is commonly used to address the causal relationships among molecular substrates, cellular architectures and memory formation. The first use of surgical ablation to search for the distribution of memory traces or engrams was done by Lashley. Although he failed to localize memory traces, his studies imply the wide distribution of memory traces in the cerebrum 79,80,141 . Subsequent studies indicated that the removal of the temporal lobe in human beings leads to the loss of recent memory due to the impairment of the hippocampus 78,142-144 . In the study of memory traces using pharmacological reagents, recent memory can be blocked by intracerebral injection of puromycin 145,146 . These studies reveal a causal relationship between memory traces in wide brain areas and memory formation and retrieval, although memory traces specific for content-related behaviors remain to be tracked and localized. With advances in molecular biology, the downregulation of gene expression by gene knockout 147 and optogenetics 148,149 has been successfully used to find negative correlations among molecules, memory cells and behaviors. These studies provide strong evidence for the causal relationships among molecular substrates, cellular architectures and memory formation.
The advantages and disadvantages of these strategies and approaches have to be evaluated and validated. In logical analyses, parallel changes, negative correlations and positive correlations between functions and changeable factors should all be met in order to establish a causal relationship. Studies in which manipulations of molecules and cells cause changes in memory-relevant behaviors under these three criteria should be used in combination to identify memory traces formed after learning. Consistent results across these approaches are expected to support the conclusion. However, inconsistent results may occur. For instance, silencing and stimulating the parietal cortex lead to inconsistent results in memory retrieval. Parietal lesions do not normally yield severe episodic-memory deficits, whereas parietal activations are seen frequently in functional neuroimaging studies of episodic memory 150 . These two categories of evidence suggest that the answer to this puzzle requires us to distinguish the contributions of dorsal versus ventral parietal regions and the influence of top-down versus bottom-up attention on memory. The features of memory traces or engrams based on these studies include the following. Memory traces encode the trained signals, receive synapse inputs and undergo synaptic plasticity 35,69-71 . The activation of memory traces evokes strong memory retrieval. Memory events are upregulated by norepinephrine and serotonin. How memory traces memorize multiple associatively learned signals needs to be addressed by observing cells in memory traces that encode the associated signals.
Associative memory cells as basic units of memory traces
Associative learning includes the acquisition of associated signals that are basic features of various objects, knowledge and experiences, as well as the acquisition of complicated signals that are reorganized from those basic featured signals in an intramodal or cross-modal manner. Associative memory refers to the integrative storage and distinguishable retrieval of these associated signals in neurons. Associative memory cells are presumably the basic units that fulfill these processes during associative learning and memory by encoding multiple associated signals as well as receiving innate and new synapse innervations in the cerebrum 7,8 . The integrative ability of associative memory cells indicates that activity-dependent synaptic plasticity in a single neural pathway, such as long-term potentiation and depression of synaptic transmission 83,151,152 and activity-dependent neuronal plasticity 43,153-155 , may not be directly involved in the integrative storage of multiple associated signals, though this plasticity may influence memory retrievals 7,8 .
In terms of the location of information storage, memory traces appear to be widely distributed in the brain, including the hippocampus, amygdala, motor cortex, sensory cortices and associative cortices 3,12,21,22,25,26,28,30,106,156-160 . Memory contents hypothetically reside in cell assemblies through the strengthening of neuronal interconnections triggered by correlated activity during information acquisition 81 . These studies do not explain why cell assemblies are widely distributed, or how plasticity at synapses and neurons coordinately integrates associated signals for their storage in primary and secondary manners, i.e., the characteristics and working principles of the neurons that coordinately encode associative memory 7,8 . Neuronal and synaptic plasticity cannot account for memory patterns, e.g., explicit versus implicit memory, declarative versus non-declarative memory, episodic versus semantic memory and memory transformation among these patterns 33 , nor for the temporal features of associative memory or the contribution of associative memory to cognitive processes, e.g., associative thinking and logical reasoning. How endogenous signals generated in associative thinking and logical reasoning are memorized for future representation remains unknown. How memory is encoded under different consciousness states also needs to be addressed. The nature of these cell assemblies, the patterns of their connection strengthening and the coordination of their memory encoding need to be examined in a comprehensive manner.
Associative memory cells that encode multiple associated signals as well as receive innate and new synapse inputs have been detected to be recruited by the coactivation of cortical neurons 7-9,11,67 . The coactivation of sensory cortices evokes their mutual synapse innervations and recruits associative memory cells to integrate and encode associated signals 10,11,16 . Based on the mutual innervations among associative memory cells 9,11,15 , the association of sensory signals for their integrative storage makes each signal able to induce the recall of its associated signals in a reciprocal manner. In the meantime, these primary associative memory cells in the sensory cortices send their axonal projections toward brain areas relevant to cognition, emotions and behaviors, and undergo synaptic convergence onto individual neurons in these areas during logical reasoning and associative thinking to recruit them as secondary associative memory cells 7,8,12 . In this regard, mutual synapse innervations among primary associative memory cells in sensory cortices and their innervations onto secondary associative memory cells in brain areas related to cognition, emotion and behavior constitute the basic cellular architecture for the reciprocal recall of associated signals, the automatic conversion of associated signals during their recall and higher-order cognition 7,8 (Figure 1). In addition to the learning of associated signals across cross-modal sensory cortices, the acquisition of associated signals can be achieved within a single intramodal sensory cortex, such as the association of letters or words in the auditory cortex, the association of unitary images in the visual cortex, and so on. The recruitment and features of these associative memory cells are described below.
Associative memory cells recruited in sensory cortices: Associative learning by pairing whisker, odor and tail stimuli in mice leads to reciprocal responses induced by each of these signals, such as odorant-induced whisker motion, odorant-induced tail withdrawal, tail-induced whisker motion, tail-induced olfaction response, whisker-induced olfaction response and whisker-induced tail withdrawal 9,11,13,15 . Their barrel cortical neurons are able to encode new odor and tail signals alongside the innate whisker signal, as well as receive new synapse innervations from the piriform and S1-tail cortices besides innate inputs from the thalamus 9,17 . Their piriform cortical neurons encode the new whisker signal and the innate odor signal, as well as receive new synapse innervations from the barrel cortex alongside innate input from the olfactory bulb 15 .

Figure 1. Three groups of primary associative memory cells (blue, green and yellow) in sensory cortices are synaptically innervated. Three groups of secondary associative memory cells (blue, red and pink) in brain areas relevant to cognition, emotion and behaviors are synaptically innervated. Mutual synapse innervations among associative memory cells in each group are intramodal, and mutual synapse innervations among the three groups of associative memory cells are cross-modal. The axons of primary associative memory cells convergently and broadly innervate secondary associative memory cells, whose axons project back to primary associative memory cells. All neurons possess innate synapse innervations (yellow axons). The synapse innervations among the functionally corresponding groups of primary and secondary associative memory cells are labeled by bigger presynaptic boutons.
In other words, a portion of sensory cortical neurons in mice after associative learning become able to encode associated signals as well as receive new synapse inputs through their mutual innervations alongside innate inputs; these are named associative memory cells 10,11,13,16 . These associative memory cells have been shown to include glutamatergic neurons, GABAergic neurons and astrocytes 9-11,13,15-17 . Thus, the coactivation or simultaneous activity of sensory cortices can trigger new synaptogenesis for mutual synapse innervations and the recruitment of associative memory cells for the storage of associated signals. The association of cross-modal sensory signals may occur among all sensory cortices, such as visual signals with auditory, olfactory, taste and somatosensory signals; auditory signals with visual, olfactory, taste and somatosensory signals; and so on, i.e., primary associative memory cells can be recruited in auditory, visual, olfactory, gustatory and somatosensory cortices through their mutual synapse innervations 7,8 (Figure 2 and Figure 3).
Associative memory cells recruited through the coactivation of sensory cortices are diversified in their encoding abilities and contents. Some cells encode all associated signals (full associative memory cells, e.g., the triple signals) and others encode only a subset (incomplete associative memory cells, e.g., two or one of the odor, whisker and tail signals) 9 . If neurons that are activated together become wired together, the coactivation strengths among these sensory cortical neurons may differ based on their variable excitability 44 . Neurons that encode one signal are called new memory cells or innate memory cells 9,11 . The recruitment of populations of associative memory cells with diversified encoding abilities dissects complicated events, objects or images into simple unitary signals for their storage, future retrieval in different patterns and the reorganization of unitary signals in future associative learning 7 . In addition, repeated coactivations of these sensory cortical neurons can facilitate the recruitment of full associative memory cells from incomplete associative memory cells, as well as the formation of more en passant synapses among their mutual innervations, so that the number and the activity strength of associative memory cells are upregulated 32 . The proportional relationship among associative memory efficiency, associative memory cells and their plasticity 9,11,13,14,161,162 indicates an activity-dependent positive cycle between the recruitment and refinement of associative memory cells 7 .
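The recruitment dynamics described above, coactivated neurons wiring together and repeated coactivations converting incomplete into full associative memory cells, can be illustrated with a minimal Hebbian-style simulation. This is a conceptual sketch, not a model from the cited studies; the neuron counts and probabilities are arbitrary illustrative assumptions.

```python
import numpy as np

def recruit(n_trials, seed=0, n_neurons=200, n_signals=3,
            p_active=0.3, p_wire=0.25):
    """Toy Hebbian recruitment: in each paired trial (e.g., odor, whisker
    and tail presented together), every coactivated neuron may wire to each
    presented signal. All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    encodes = np.zeros((n_neurons, n_signals), dtype=bool)
    for _ in range(n_trials):
        active = rng.random(n_neurons) < p_active            # coactivated neurons
        wired = rng.random((n_neurons, n_signals)) < p_wire  # per-signal wiring
        encodes |= active[:, None] & wired                   # connections persist
    n_encoded = encodes.sum(axis=1)
    full = int((n_encoded == n_signals).sum())               # full AMCs
    incomplete = int(((n_encoded > 0) & (n_encoded < n_signals)).sum())
    return full, incomplete

# More paired trials convert incomplete cells into full associative memory cells.
few_full, _ = recruit(n_trials=5)
many_full, _ = recruit(n_trials=20)
```

With a fixed seed, the first trials of the longer run replay those of the shorter run, so the count of full associative memory cells can only grow with repetition, mirroring the activity-dependent positive cycle noted above.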
A feature of associative memory cells is their mutual axon projections and synapse innervations for encoding multiple associated signals. The molecules potentially responsible for axon growth and synapse formation are thus likely substrates underlying the recruitment of associative memory cells. Current studies indicate that antagomirs of microRNA-324 and microRNA-133a, through influencing Ttbk1 and Tet3 expression, attenuate associative memory, new synapse innervation and associative memory cell formation 9,73 . The downregulation of miRNA-342 expression and the upregulation of Nlgn3 and Nrxn1 expression are coupled with the recruitment and refinement of associative memory cells 15,17 . These genes and proteins are related to axon elongation and synapse formation. Thus, the recruitment of synapse innervations and associative memory cells may be based on a chain reaction from intensive neuronal spikes to microRNA-regulated genes and proteins that specifically manage axon elongation and synapse formation 9,13,73 . In addition, the inhibition of sensory cortices blocks associative memory 11,13 , and the injection of microRNA antagomirs into sensory cortices lowers the strength of associative memory and the recruitment of new synapse innervations and associative memory cells 9,73 . Therefore, the primary location of associative memory encoding is likely the sensory cortices, where mutual synapse innervations and primary associative memory cells are recruited 7,8 .
Pair-encoding neurons that encode two signals, similar to the encoding property of associative memory cells, have been detected in the animal visual cortex in vivo 2,104 . These pair-encoding neurons in intramodal cortices may work for the integrative memory of associated signals inputted from a single sensory modality, such as associated photon beams in images for the visual system, associated odor signals for the olfactory system, associated letters and words for the auditory system and so on (Figure 2). It should be emphasized that morphological evidence of mutual synapse innervations among pair-encoding neurons in single-modality cortices remains to be demonstrated.
As nerve cells, associative memory cells recruited in sensory cortices have specific features for associative memory and general features as neurons, in which the specific features are used as criteria for identifying whether detected neurons are associative memory cells. As their coactivation via the synchronous activity of cortical neurons triggers their mutual synapse innervations and recruits them as associative memory cells, the specific features of associative memory cells include the following 7,8 . Associative memory cells receive new synapse innervations from coactivated sensory cortical neurons for their mutual connections alongside innate sensory inputs. Associative memory cells encode new and innate associated signals for their integrative storage. Their axons convergently project to and synapse onto neurons in brain areas relevant to cognitive processes, emotional reactions and behaviors. Their recruitment is controlled by microRNA-regulated genes and proteins that manage axon projection and synapse formation 9,13,73 . The mutual synapse innervations among associative memory cells allow the reciprocal recall of associated signals and the conversion of signal retrieval among different modalities, e.g., image signals are presented through verbal language, and verbal signals in stories are presented by visual diagrams. Their synapse convergence onto downstream neurons and the activation of associative memory cells permit logical reasoning, associative thinking, computing and so on. In general, the number and the functional state of associative memory cells influence memory strength and maintenance.
The number of associative memory cells is affected by their mutual synapse innervations under the induction of coactivation strength and repetitive coactivations, as well as by developmental stages 9,11 . The functional state of associative memory cells is influenced by the strength of innate and new synapse inputs, their ability to convert synaptic analogue signals into digital spikes, as well as their ability to output spikes 44,163-165 . In addition, glutamatergic associative memory cells suppress the activity of other neurons through GABAergic associative memory cells and lateral inhibition, allowing themselves to be dominantly active for memory retrieval 16,17 .

Figure 2. Associative learning and memory include the acquisition of associated signals, the integration and storage of exogenous signals, the integration and storage of endogenous signals, as well as memory retrieval through behavioral presentation. Associative memory cells (AMCs) are classified into primary AMCs (pAMCs) in sensory cortices, including visual, auditory, olfactory, gustatory and somatosensory cortices, for the integrative storage of exogenous associated signals, as well as secondary AMCs (sAMCs) in brain areas related to cognitive processes (logical reasoning, associative thinking, computation, imagination, concept, judgement, conclusion, decision and so on in the prefrontal cortex), emotional reactions (fear, aversion, happiness, anger and so on in the amygdala, ventral tegmental area (VTA) and nucleus accumbens (NAc)), sensory integration (understanding and perception in association cortices) as well as spatial localization in the hippocampus. pAMCs are mutually connected through cross-modal and intramodal synapse innervations for the integrative storage and the reciprocal retrieval of associated signals. The axons of pAMCs convergently innervate sAMCs for cognition, emotion and spatial localization. sAMCs are mutually connected through their synapse innervations for the integration of cognition, emotion, perception, localization and so on. All of these primary and secondary associative memory cells send their axons toward brain areas relevant to behaviors (language, gesture and countenance in motor cortices) and their coordination (the systems for maintaining the internal environment, e.g., the hypothalamus to control autonomic nerves and hormones). Cross-modal associative memory cells are recruited by mutual innervations among sensory cortices or between cognition- and emotion-relevant brain areas. Intramodal associative memory cells are recruited by mutual innervations among the neurons in a single-modality sensory cortex, cognition brain area or emotion brain area. In addition to activation by innate inputs and new synapse innervations from coactivated brain regions to integrate and encode associated signals, associative memory cells are activated by the arousal system, including the ascending reticular activating pathway in the brain stem and thalamus as well as the ascending activating pathways from the cholinergic nuclei, midbrain raphe nuclei, locus coeruleus and substantia nigra that release acetylcholine (ACh), serotonin (5-HT), norepinephrine (NE) and dopamine (DA), respectively, which can maintain wakefulness, permit normal consciousness as well as grant specific alertness and attention. In addition, associative memory cells are regulated by hormones released from the hypothalamus-pituitary-gland axes. Upregulation of AMC number and activity strength can make memory more impressive, or vice versa. Functional downregulation of motion-relevant brain regions leads to the inability of memory retrieval and presentation.
In summary, synapse innervations to associative memory cells determine the specificity of memory content. The number and functional state of associative memory cells, as well as the connection and activity strengths of their synapse inputs and axon output partners, influence the power and persistence of memory and its retrieval 9,14,73,162 . For instance, barrel cortical neurons receive new synapse innervations from the piriform cortex after associative learning alongside innate inputs from the thalamus. Synapse activities in the pathway of the odor signal drive barrel cortical neurons toward spiking threshold under the basal activity of thalamic inputs. Once the spiking threshold is reached, their spikes activate downstream motor cortical neurons for odorant-induced whisker motion. With these associative memory cells in sensory cortices 9,11,73 , their axon-innervated downstream neurons are able to encode these associated signals 12,18,23,27,29,31,166 . Stimulations to any of these areas in the neural circuits from sensory cortices to behavior- and emotion-related brain nuclei induce memory representation 21,22,25,26,28,30 . It is noteworthy that there are around ten thousand types of proteins in living cells 167 , far fewer than the unit signals remembered over a lifetime, such as words, unitary images, odorants, and so on. As more than ten billion neurons reside in the central nervous system, neurons with synapse interconnections, i.e., associative memory cells, should be the basic units of memory traces, rather than a specific protein for a given memory content.

Figure 3. A) Each of these primary associative memory cells receives synapse innervations from the innate inputs (their colors corresponding to those of the cell bodies), the input from the arousal system (dark red), as well as the mutual synapse innervations among them (i.e., from other primary associative memory cells). These primary associative memory cells send their axons convergently to secondary associative memory cells (green) and make synapse innervations. All of these associative memory cells send their axons to memory output neurons (MON). B) The relationship between the excitation state of associative memory cells and the strength of memory formation/retrieval. The excitation state of associative memory cells is influenced by the number and the functional state of their synapse inputs and by their own excitability. If the excitability of associative memory cells rises, their relationship curve (dark red) shifts towards the left (yellow) and the efficiency of learning and memory increases. C) The relationship between different associative memory cells and their excitation levels. If the threshold to fire spikes (excitation) decreases, the relative excitation levels of associative memory cells increase, and more neurons will be coactivated for the recruitment and refinement of associative memory cells in memory formation and retrieval.
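The threshold relationships described in this section, in which a lowered spiking threshold shifts the input-output curve leftward and allows more neurons to be coactivated, can be expressed with a simple sigmoidal firing model. This is an illustrative sketch with arbitrary parameter values, not a fit to experimental data.

```python
import numpy as np

def p_spike(syn_input, threshold, slope=4.0):
    # Sigmoidal input-output curve: firing probability vs. summed synaptic
    # input. "threshold" is the input at which firing probability is 0.5.
    # slope and threshold values here are illustrative assumptions.
    return 1.0 / (1.0 + np.exp(-slope * (np.asarray(syn_input) - threshold)))

# A fixed population of neurons with graded synaptic drive.
population_drive = np.linspace(0.0, 2.0, 201)

# Lowering the threshold shifts the whole curve leftward, so more neurons
# in the same population cross the firing criterion (p > 0.5).
recruited_high_thr = int((p_spike(population_drive, threshold=1.2) > 0.5).sum())
recruited_low_thr = int((p_spike(population_drive, threshold=0.8) > 0.5).sum())
```

Here `recruited_low_thr` exceeds `recruited_high_thr`, matching the statement that reduced spiking thresholds coactivate more neurons for the recruitment and refinement of associative memory cells.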
Associative memory cells in cognition-, emotion- and behavior-relevant brain areas:
In addition to primary associative memory cells in sensory cortices that integrate exogenous signals, secondary associative memory cells that integrate and store endogenous signals may be recruited in brain areas relevant to cognition, emotion and behaviors 8 . The contents, processes and outcomes generated from logical reasoning and associative thinking can be remembered. Emotional reactions to various stimulations and operations can be recalled. All of these specific events in mind may be generated based on the associative storage of learnt exogenous signals in sensory cortices, such as images, stories, tastes and odors, and can be memorized in brain areas relevant to cognition, emotion or behaviors in an integrative manner for subsequent recall. In terms of cellular substrates, the reorganized association of signals stored in the sensory cortices may lead primary associative memory cells to strengthen their mutual synapse innervations and convergent innervations onto downstream neurons, as well as to receive feedback synapse innervations during cognitive processes and emotional reactions. These downstream neurons become able to encode the associated signals and are recruited as secondary associative memory cells that memorize specific contents generated in associative thinking and logical reasoning 12,32 . The feedforward and feedback interactions among primary and secondary associative memory cells endow associative thinking and logical reasoning with the inclusion of sensory origins 7,8 (Figure 1 and Figure 2).
In terms of the brain areas that produce secondary associative memory 7 , prefrontal cortical neurons demonstrate sustained activity after paired stimulations 27,29 . Cue-response neurons in the inferotemporal cortex are detected after associative learning 23 . Neurons responding to conditioned and unconditioned stimulations, and their response transformation, are seen in the amygdala 168 . Neurons in the hippocampus and amygdala are involved in contextual fear memory 169 . Memory cell assemblies for temporal signals overlap and are recorded in the hippocampus 18 . The activation of engram cells in the amygdala or the hippocampus is sufficient to induce fear responses 21,22 . These data imply that memory cells are generated in the prefrontal cortex, hippocampus, amygdala and associative cortices for memory retrieval 26,170 . Whether these memory cells are synaptically innervated by primary associative memory cells in sensory cortices remains to be examined.
After associative learning by pairing whisker, odor and tail signals, neurons that encode the three signals are detected in the motor cortex, prefrontal cortex and hippocampus 12,31,32 , in addition to the barrel and piriform cortices 9,11,15 . The responses of neurons in the prefrontal cortex, hippocampus and motor cortex to these signals are attenuated by inhibiting barrel or piriform cortical functions. Their responses and plasticity are sustained in the barrel cortex over the long term but decay in the motor cortex after the pair training ends. Individual neurons in the prefrontal cortex, motor cortex and hippocampus receive synapse innervations from the coactivated sensory cortices after paired stimulations 12,31,32 . These results provide functional and morphological evidence for the recruitment of secondary associative memory cells in the prefrontal cortex, hippocampus and motor cortex through their coactivity with primary associative memory cells in sensory cortices 8 .
Whether memory cells downstream of sensory cortices undergo cross-modal connections, similar to primary associative memory cells 8 , appears to be supported by recent studies. The pathway from the ventral hippocampus to the nucleus accumbens is involved in social memory 24 . Engrams in the prefrontal cortex emerge after receiving inputs from the hippocampus and amygdala in contextual fear memory 20 . Axon projections from the prefrontal cortex and hippocampus to the amygdala are formed during fear memory 171 . The pathway from the prefrontal cortex to the striatum plays a crucial role in reward memory 25 .
The characteristics of secondary associative memory cells in cognition- and emotion-related brain areas and association cortices are listed below. They receive new synapse innervations convergently from primary associative memory cells in coactivated sensory cortices during cognitive processes and emotional reactions. They encode endogenous associated signals from sensory cortices for integrative storage. The association of cognitive events and emotional reactions induces mutual synapse innervations among these secondary associative memory cells. Their axons project to memory-output cells in behavior-related brain areas for memory representation through language, countenance, gesture and writing. The number of secondary associative memory cells is influenced by mutual synapse innervations evoked by coactivation strength and repetitive coactivations during cognition, as well as by developmental stage. The functional state of secondary associative memory cells is influenced by their synapse inputs, their ability to convert synaptic analogue signals into digital spikes and their ability to output spikes that drive memory-output cells. Synapse innervations to secondary associative memory cells determine the specificity of memory contents in cognition and emotion. The number and excitability of secondary associative memory cells, as well as their connection and activity strengths, determine the persistence and power of memory formation and retrieval. Activation of secondary associative memory cells permits the rehearsal of associative thinking, logical reasoning and emotional reactions. It should be pointed out that the outputs of secondary associative memory cells innervate brain areas, such as the hypothalamus and extrapyramidal system, to influence sympathetic/parasympathetic balance, temperature set-point, food ingestion and hormones involved in emotional reactions and behaviors.
Associative memory cells detected in cerebral cortices include glutamatergic neurons, GABAergic neurons and astrocytes 9-11,13,15-17 . The connections between glutamatergic and GABAergic neurons are mutually upregulated after memory formation 15,17 . These data indicate that all of these memory cells constitute the basic units that store specific associated signals. The activation of glutamatergic associative memory cells causes them to be excited while their neighboring neurons are inhibited through GABAergic associative memory cells and lateral inhibition, so that the memory of associated signals is maintained in a contrasting manner. In the meantime, these glutamatergic associative memory cells can limit themselves from becoming over-excited through GABAergic associative memory cells and recurrent inhibition 7 . In terms of interactions among associative memory neurons and astrocytes, the working load of associative memory neurons can be supported by associative memory astrocytes, which transfer nutrients and waste products between neurons and blood vessels 7,9,11 .
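The contrast-enhancing effect of lateral inhibition described above can be sketched numerically: each unit is suppressed in proportion to the summed activity of the other units, so a strongly activated glutamatergic associative memory cell stays dominant while its weakly active neighbors are silenced. The inhibition weight and activity values are arbitrary illustrative assumptions.

```python
import numpy as np

def lateral_inhibition(activity, w_inh=0.15):
    """Subtract from each unit an inhibition proportional to the total
    activity of all other units (standing in for GABAergic lateral
    inhibition), clipping firing rates at zero. w_inh is an illustrative
    coupling strength, not a measured value."""
    activity = np.asarray(activity, dtype=float)
    inhibition = w_inh * (activity.sum() - activity)  # input from the others
    return np.clip(activity - inhibition, 0.0, None)

# One dominant memory cell among weakly active neighbors:
before = np.array([1.0, 0.2, 0.2, 0.2])
after = lateral_inhibition(before)
```

The dominant cell retains most of its activity while the neighbors are driven to zero, maintaining the stored signal "in a contrasting manner" as stated above.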
In addition to associative memory cells across cross-modal sensory cortices or among cognition- and emotion-related brain areas, associative memory cells can be located in intramodal cortices, for example for associated photon beams in images in the visual system 2,104 , associated odor signals in the olfactory system, associated letters and words in the auditory system and so on (Figure 2 and Figure 3). Neuronal afferent pathways for associated signals in a single sensory modality may innervate multiple groups of neurons, in which the neurons in each group encode one of these associated signals. For instance, different groups of auditory cortical neurons receive neural afferents carrying different sound frequencies in a point-by-point manner, and each group of neurons encodes one specific frequency. Different visual cortical neurons receive synapse innervations from different retinal cone cells in a point-by-point manner. The coactivation of neurons that encode different intramodal signals can induce their mutual synapse innervations, such that associative memory cells in a single modality of the sensory cortices are recruited. The associative memory cells in a given sensory cortex are recruited to memorize intramodal signals with different features, strengths and locations of input signals. With associative memory cells in intramodal sensory cortices, intramodal memory of associated signals is formed, e.g., image one induces the recall of image two, odor one induces the recall of odor two, and word one induces the recall of word two, or the other way around 7 . It is noteworthy that there are time delays among intramodal signals, in which activity persistence in different sets of neurons in a given sensory cortex may grant the partial temporal overlap of their coactivity to recruit intramodal associative memory cells.
The different proportions, activity strengths and connections of the intramodal associative memory cells are responsible for the storage and retrieval of intramodal signals with different features 90 . Intramodal associative memory cells may also be recruited within one of the brain areas relevant to cognitions and emotions 8 .
In terms of the relationship between primary and secondary associative memory cells in memory traces and their role in memory-related processes, our proposed model is as follows. The basic architectures for their joint operation include mutual synapse innervations among primary associative memory cells in the sensory cortices and their axon terminations onto secondary associative memory cells in brain areas relevant to cognitions, emotions and behaviors. Each set of primary associative memory cells connects reciprocally with one set of secondary associative memory cells, whose functions are closely related (Figure 1). The axons from all of these associative memory cells terminate on motor neurons for memory output (memory output cells) and innate reflexes (Figure 2). Mutual synapse innervations among primary associative memory cells constitute the interaction circuits for the reciprocal retrieval of associated signals by each of the sensory cues, as well as the automatic conversional retrieval of associated signals among different modalities 9,11,15,17 . The convergent synapse innervations from primary associative memory cells onto secondary associative memory cells (Figure 1) confer logical reasoning, associative thinking and other integrative cognition induced by a single cue 12 . For instance, a secondary associative memory cell may be convergently innervated by three sets of primary associative memory cells that carry three kinds of signals, which maintain basal activity in this secondary associative memory cell. When an input cue activates the three sets of primary associative memory cells through their mutual synapse innervations, these primary associative memory cells can convergently activate this secondary associative memory cell, in addition to its activation through the dominantly innate chain from one set of primary associative memory cells onto one set of secondary associative memory cells.
In other words, three kinds of signals triggered by one of these cues drive this secondary associative memory cell to achieve the integration of three associated signals for associative thinking and logical reasoning. This integration is also facilitated by mutual synapse innervations among secondary associative memory cells that contribute to interactions of higher-order cognition and emotions. The divergent synapse innervations from primary associative memory cells to secondary associative memory cells (Figure 1) mean that associated signals are stored in several brain areas for long-term maintenance with less chance of being lost, as well as being used for different cognitive processes and emotional reactions 12 . In addition to this feedforward innervation from primary to secondary associative memory cells, there may be a feedback connection from secondary associative memory cells to primary associative memory cells, by which learnt exogenous signals can automatically initiate cognition and emotions, and by which endogenous signals from cognitive events and emotional reactions, which usually contain sensory signal sources, can be generated 7 (Figure 1 and Figure 2).
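The convergent three-set example above can be sketched as a simple threshold unit. This is an illustrative toy, not the authors' model; the firing threshold and the cross-activation gain are assumptions:

```python
# Hypothetical sketch: a secondary associative memory cell fires when the
# convergent drive from three mutually innervated sets of primary
# associative memory cells exceeds an assumed threshold.

THRESHOLD = 2.5  # assumed firing threshold of the secondary cell

def primary_activity(cue_present):
    """One cue activates its own primary set directly; mutual synapse
    innervations then spread activity to the two associated sets."""
    direct = 1.0 if cue_present else 0.0
    spread = 0.9 * direct            # assumed, slightly weaker cross-activation
    return [direct, spread, spread]  # three sets of primary cells

def secondary_fires(primary_sets):
    # Convergent synaptic drive is summed over all primary sets.
    return sum(primary_sets) >= THRESHOLD

fires_with_cue = secondary_fires(primary_activity(True))   # 1.0 + 0.9 + 0.9 = 2.8
fires_without = secondary_fires(primary_activity(False))
```

A single cue thus suffices to activate the secondary cell only because mutual innervations recruit all three primary sets, illustrating the integration step described in the text.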
The refinement of associative memory cells
Cell assemblies formed by connection strengthening through correlated activities, especially the coincident activity of presynaptic and postsynaptic cells, presumably work for learning and memory 81 . This hypothesis is well matched by synaptic and neuronal plasticity 69,172-174 , e.g., long-term potentiation and depression in synaptic transmission 83,152 or neuronal activity 27,155 . Many studies of synapse and neuron plasticity, however, were carried out not in identified memory cells but in brain areas presumably relevant to memory. Synaptic plasticity in a given neuronal pathway does not reveal how multiple signals are integrated and encoded in associative memory cells. These uncertainties raise the issue of how these data on plasticity fit into the profile of cellular mechanisms underlying associative memory. Based on current studies, there are two forms of plasticity in associative memory cells, i.e., the refinement during their recruitment that allows them to coordinate with each other and the refinement induced by cues to recall specific signals, both of which are activity-dependent, based on coactivation among neurons 9,11,13,15,17,32,73 , i.e., recruitment-related refinement and activity-dependent plasticity.
In the recruitment of associative memory cells from cortical neurons through their coactivation and mutual synapse innervations, the number of excitatory synapses and the transmission strength at each of these synapses on glutamatergic and GABAergic neurons are enhanced; the output of glutamatergic neurons is enhanced while the output of GABAergic neurons is weakened 15,17,161,162,166 . In addition, the active intrinsic properties of glutamatergic associative memory cells are upregulated and the excitability of GABAergic associative memory cells is downregulated 15,17,161,162 . Mutual synapse innervations among the associative memory cells are increased 15,17 . Increases in the driving force from excitatory synapses and in the excitability of memory cells, as well as decreases in the driving force from inhibitory synapses, shift the balance of these cortical neurons between excitation and inhibition towards excitation. Their high activity can attract more synapse innervations, recruit more glutamatergic/GABAergic associative memory cells, promote their functional state to an optimal level for information storage and facilitate the activation of these associative memory cells for the retrieval of the associated signals 11,13,14,17 . The increased number and function of excitatory synapse inputs in associative memory cells strengthen encoding ability and precision 44,164,165 for efficient memory formation and precise retrieval. If excitatory associative memory cells are overactive, they can activate neighboring inhibitory neurons to prevent hyperactivity through recurrent negative feedback 43,44,175 .
There are two forms of neuronal excitation plasticity that interpret how neuronal refinements are involved in the formation and retrieval of associative memory, i.e., the downregulation of the threshold potential to fire spikes and the upregulation of the spiking ability to fire more sequential spikes. The intensive activity of cortical neurons under high-frequency stimulation, similar to neuronal coactivation during associative learning, shifts the spike threshold potential toward the resting membrane potential, so that the firing of neuronal spikes is facilitated 155 . Intensive neuronal activity also upregulates the capacity to fire sequential spikes 27,69 . Both mechanisms elevate the neuronal capability to encode digital spikes, which strengthens a chain reaction from spikes to microRNA-regulated expression of genes and proteins that facilitate the recruitment of new synapse innervations and associative memory cells 9,11,15,17,73 as well as the retrieval of the associated signals 176 . These changes have been detected in associative memory cells 15,17,162 . Thus, plasticity in neuronal excitability may play one of the central roles in learning and memory, a point reiterated by a recent review 177 .
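The first of these two mechanisms can be illustrated with a toy spike count; this sketch, with its membrane values and threshold shift, is purely an assumption for illustration and not a model from the cited studies:

```python
# Illustrative sketch (assumed numbers): lowering the spike threshold
# toward the resting potential lets the same synaptic drive evoke more spikes.

V_REST = -70.0  # mV, assumed resting membrane potential

def count_spikes(inputs_mv, threshold_mv):
    """Count how many depolarizing inputs push the membrane past threshold."""
    spikes = 0
    for amp in inputs_mv:
        if V_REST + amp >= threshold_mv:
            spikes += 1
    return spikes

inputs = [12.0, 15.0, 18.0, 22.0, 25.0]      # assumed depolarizations (mV)
spikes_before = count_spikes(inputs, -50.0)  # naive threshold: 20 mV above rest
spikes_after = count_spikes(inputs, -55.0)   # threshold shifted toward rest
```

With the threshold shifted 5 mV toward rest, previously subthreshold inputs now evoke spikes, which is the facilitation of firing the text attributes to intensive activity.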
In studies of memory traces or cell assemblies, synaptic potentiation has been detected at engram cells in slices of the prefrontal cortex, the hippocampus and the amygdala 140 , and the excitation enhancement of B51 neurons has been isolated in Aplysia 178 . In studies using cues applied to sensory inputs in vivo, activity-dependent potentiation in response to associated signals was evoked at input pathways in the active group of primary and secondary associative memory cells, and activity-dependent conversion from silent into active neural pathways in response to associated signals was initiated in the inactive group of associative memory cells 7,12,32 . This activity-dependent upregulation in response to associative signals in a given group of associative memory cells may allow them to become more excited than their neighboring neurons and to be highly sensitive to the excitatory driving force from sensory cues, such that more associative memory cells are recruited by their increased mutual synapse innervations in response to all associated signals 15,17 . Furthermore, activity-dependent potentiation in response to associated signals can be induced through homosynaptic and heterosynaptic pathways 32 , which facilitates the reciprocal recall and logical reasoning of those associated signals. Activity-dependent potentiation at associative memory cells in response to associated signals inputted through new synapses may be mechanistically caused by the enhancement of individual synapses and/or the conversion of inactive or silent synapses into functional synapses 179,180 , since new mutual synaptic innervations have been formed among these associative memory cells 7,9,11,12,15,17,32,73 . In terms of functional impacts, activity-dependent potentiation at primary associative memory cells may facilitate the memory retrieval of exogenous associated signals.
Activity-dependent potentiation at secondary associative memory cells facilitates the memory retrieval of endogenous signals generated during cognitive processes and emotional reactions. Thus, the spontaneous or cue-induced recall of these signals emerges for the rehearsal of cognitions and emotional impulses. Recruitment-related neural potentiation and activity-dependent neural potentiation are supported by the fact that the enhancement of neuronal excitability is multigrade in nature 155 .
Recruitments of primary associative memory cells in sensory cortices and of secondary associative memory cells in cognition/emotion-relevant brain areas endorse the specificity of the storage of associated signals 8,9,11,13,15,17,73 . The number and functional state of associative memory cells influence the strength and maintenance of specific memory as well as the efficiency of memory retrieval 9,13,14,73 . Structural and functional plasticity at subcellular compartments of associative memory cells influences whether they sensitively integrate associated signals, precisely memorize these signals and efficiently trigger their target neurons for memory retrieval 15,17 . The maintenance of activity-dependent refinement at associative memory cells supports the period during which they remain sensitive to cues for memory retrieval. It is emphasized that both the recruitment and the refinement of associative memory cells depend on their simultaneous activity 9,11,13,15,17 . The activity of associative memory cells is the central point of a coactivity-dependent positive cycle in their recruitment and refinement, i.e., activity together, mutual innervation together and strengthening together. Neurons that are highly active while receiving associated signals are recruited as associative memory cells and are functionally upregulated. The upregulated population and functional state of associative memory cells during repeated learning recruit more associative memory cells and upregulate their active state further 7 . This activity-dependent positive cycle in the recruitment and refinement of associative memory cells, which is based on the functional compatibility between neuronal partners 163 , can interpret realistic practice under conditions of normal consciousness and good attention, i.e., the more times learning occurs, the more associative memory cells are recruited and refined, and the more impressive the memory is.
It should be pointed out that associative memory cells fall into the active group of neurons in the brain, but active neurons labeled by non-specific immediate early genes may not be memory cells.
In terms of the functional states of primary and secondary associative memory cells as influenced by synapse inputs, the number and strength of synaptic inputs are proportional to the excitation levels of these associative memory cells 9,14,73,161,162 . The increase of synapse inputs that carry specific memory content and their upregulation by repeated cues drive associative memory cells to become more excitable for the retrieval of this specific memory and the full recruitment of memory cells. The increased activity of synapse inputs from the arousal system boosts associative memory cells to become more excitable for the retrieval of memory contents in a nonspecific manner. Moreover, the increase of excitability or the decrease of the spiking threshold in associative memory cells will make them easily activated for the nonspecific retrieval of memory contents and the recruitment of more associative memory cells 15,17 . A theoretical illustration of associative memory cells driven by synapses and neuronal excitability is given in Figure 3.
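The stated proportionality between synaptic input (number and strength), intrinsic excitability and excitation level can be written as a one-line toy formula. The multiplicative form and every number below are assumptions for illustration, not quantities from the source:

```python
# Toy illustration (hypothetical values): the excitation level of an
# associative memory cell scales with the number and strength of its
# synaptic inputs and with its intrinsic excitability.

def excitation_level(n_synapses, mean_strength, excitability=1.0):
    # Assumed multiplicative combination of the three factors.
    return n_synapses * mean_strength * excitability

naive_cell = excitation_level(n_synapses=10, mean_strength=0.5)
after_learning = excitation_level(n_synapses=15, mean_strength=0.8, excitability=1.3)
```

Raising any of the three factors raises the excitation level, mirroring the claim that more and stronger synapse inputs, or higher excitability, make the cell easier to activate for retrieval.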
Which neurons in the central nervous system are dominantly recruited as associative memory cells needs to be determined. Based on the principle that the simultaneous coactivation of cortical neurons and the activity-dependent positive cycle between the recruitment and refinement of associative memory cells are the primary driving forces for neurons being recruited as associative memory cells 7 , we assume that neurons with high levels of excitation and synapse inputs are preferentially recruited as associative memory cells. In other words, cortical neurons that possess a lower spiking threshold caused by their activities, as well as stronger synapse inputs driven by attention calls from previously learned relevant associated signals carried by the synapses formed in those events or by the consciousness levels maintained by the arousal system plus memory, are favorably recruited as associative memory cells. These dominantly active neurons are always recruited as associative memory cells of the first grade; their activation and recruitment trigger neighboring neurons, through their synapse connections, to become more active and to become associative memory cells of the second grade, and so on. This preferential grading of the recruitment of associative memory cells leads to a time sequence for groups of cortical neurons to be recruited as associative memory cells when multiple associated signals are exposed to learners sequentially, such as word by word in sentences or articles and image by image in visual or video views 7 .
There are a few interesting observations about the recruitment and refinement of associative memory cells. The establishment of associative memory follows a developmental change, i.e., memory formation shows an initial increase and then a decrease with aging 11 . Synapse and neuron plasticity mature during postnatal development 155,180 . These studies indicate the dominant roles of recruitment versus refinement of associative memory cells in memory formation and retrieval at different developmental stages. The activity-dependent recruitment of associative memory cells may play the dominant role in associative memory during early and young age, while the activity-dependent refinement of associative memory cells works dominantly after these stages. The knowledge learned at a young age is in the form of relatively simple unitary signals, whereas the knowledge learned in mature age is complicated, in the form of reorganized unitary signals. In this regard, associative memory cells recruited at a young age store unitary signals, and associative memory cells refined in mature age work by learning the reorganized unitary signals 7 .
Associative memory cells are modulated by transmitters and hormones
In addition to new and innate synapse innervations on primary associative memory cells, their convergent and divergent innervations on secondary associative memory cells and reciprocal synapse innervations among them (Figure 1), such associative memory cells may receive synaptic innervations from the arousal system, including the ascending reticular activating pathway 181,182 and the ascending activating pathways from the neuronal axons of the locus coeruleus, the midbrain raphe nuclei, the cholinergic nuclei and the substantia nigra 183-186 . The arousal system widely innervates neurons in the cerebrum to maintain wakefulness and to permit consciousness through the release of acetylcholine, serotonin, norepinephrine and dopamine. It has been proposed that this arousal system, under conditions of alertness and reward, supports the coactivation of cortical neurons for their recruitment as associative memory cells, as well as maintains the basal activity of primary and secondary associative memory cells 7 (Figure 2). This proposal is supported by recent studies showing that memory formation and retrieval are upregulated by acetylcholine, norepinephrine, serotonin and dopamine 134,135,187-193 , although these studies did not focus on associative memory cells. In addition, there is a coordinated strengthening effect of serotonin and norepinephrine on associative memory cells that raises the efficiency of associative learning and memory 32 . Serotonin increases neuronal responses to synaptic inputs 194,195 , and dopaminergic neurons enhance synaptic bouton formation 196 , indicating that these neurotransmitters act on synapses and neurons to facilitate memory formation.
In addition to neurotransmitters, hormones may influence the recruitment and refinement of associative memory cells. It has been found that estrogen upregulates dendritic spines on hippocampal neurons 197-199 , luteinizing hormone downregulates cognitive processes and spine density 200 , and gonadotropin-releasing hormone regulates spine density 201 . Moreover, estrogen and luteinizing hormone can upregulate associative learning, but this upregulation is attenuated by their combined application 202 . These data indicate that monoamine transmitters and hormones modulate learning and memory. The targets of these molecules on memory cells remain to be addressed.
Associative memory cells in physiology and psychology
Associative memory cells are essential for memory formation, memory retrieval, cognitions and emotional reactions 9,11,12,15,31,73,161,162 . The features and activity principles of associative memory cells can be applied to construct a working map (Figure 1 and Figure 3) relevant to associative memory in cross-modal or intramodal manners, which includes the efficiency of associative learning, the integrative storage of multiple signals, the strength and preservation of associative memory, the efficiency of memory retrieval, the transformation of simple into complex information storage, the temporal sequence of learning and memory for multiple signals, the correlation of associative memory with cognitive processes and emotional reactions, and so on. The features and working principles of associative memory cells also assist in interpreting memory patterns, e.g., declarative (explicit) versus nondeclarative (implicit) memory, episodic versus semantic memory and the transformation between such patterns under the conditions of consciousness and attention.
The simultaneous activity of neurons among different brain areas is essential for recruiting new synapse innervations and associative memory cells. The coactivity of sensory cortical neurons in cross-modal or intramodal manners induces their mutual synapse innervations, so that these neurons become able to encode multiple associated signals, i.e., these neurons are recruited as associative memory cells 9,11,15,17 . The coactivation of these primary associative memory cells also drives their axon prolongation and convergent synapse innervations onto neurons in cognition- and/or emotion-relevant brain areas, recruiting them as secondary associative memory cells for logical reasoning and associative thinking 7,12,31,32,166 . These associative memory cells, based on their synapse inputs and mutual synapse innervations, constitute memories specific to associated signals. The activity-dependent positive cycle in the recruitment and refinement of associative memory cells recruits more associative memory cells to enhance memory strength and maintenance 7 . These data provide new insights into memory formation, suggesting that mutual synapse innervations among primary associative memory cells endorse a reciprocal retrieval of associated signals and that secondary associative memory cells, based on synapse convergences from primary associative memory cells, function in associative thinking and logical reasoning. These results of activity together, connection together and strengthening together also upgrade Hebb's hypothesis that the repeated coactivation of interconnected cells evokes the strengthening of neural wiring to form cell assemblies for memory 81 .
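The principle of activity together, connection together and strengthening together can be sketched as a Hebbian-style update rule. This is a schematic reading of the principle, not the authors' formalism; the learning rate and nascent-synapse weight are chosen arbitrarily:

```python
# Hebbian-style sketch (assumed parameters): coactive neurons first form
# a connection ("connection together"), then strengthen it with further
# coactivity ("strengthening together"); unpaired activity changes nothing.

def update_weight(w, pre_active, post_active, lr=0.1, w_new=0.05):
    if pre_active and post_active:
        if w == 0.0:
            return w_new  # coactivation forms a new synapse innervation
        return w + lr     # coactivation strengthens an existing synapse
    return w              # no coactivity, no change

w = 0.0
for _ in range(5):        # five paired (coactive) learning trials
    w = update_weight(w, True, True)

w_unpaired = update_weight(0.0, True, False)  # activity in one cell only
```

Repeated coactivation first creates and then progressively strengthens the connection, while unpaired activity leaves no trace, mirroring the coactivity-dependent positive cycle described above.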
Associative memory formed by the association of multiple signals from cross-modal sensory modalities is commonly seen in life, such as the association of visual and auditory signals. Memory retrieval can be achieved by the automatic conversion of visual signals into verbal signals, or the other way around, in addition to the retrieval reciprocally induced by either of the associated signals 7 . For instance, images in movies or videos can be recalled and represented in verbal styles. The contents of verbal stories can be recalled as diagrams. Primary associative memory cells, through mutual synapse innervations among cross-modal sensory cortices, may contribute to the reciprocally induced retrieval and the automatic conversional retrieval of associated signals among cross-modal sensory modalities. Similarly, the intramodal association of multiple signals is commonly seen, such as different objects in a single view and different words in a single sentence. Primary associative memory cells, by mutual synapse innervations 7 and pair-encoding 2,104 within a single sensory cortex, endorse memory retrieval in a picture-by-picture or word-by-word manner. It is noteworthy that signals in the visual system and the auditory system are usually complicated. An image consists of numerous photon beams with various light strengths and colors. Each sentence consists of many words and letters. Physiologically, the images that consist of numerous photon beams with different spatial distributions and light strengths are detected by different cone cells in the retina, which transmit these photon signals through visual nerves to visual cortical neurons in a point-by-point manner. Sound wave frequencies from words and letters are detected by hair cells in different segments of the cochlear basilar membrane, where hair cells are stimulated and their electrical signals are transmitted via auditory nerves to auditory cortical neurons in a point-by-point manner 203 .
How these unitary signals included in an image or a sentence are reintegrated and memorized in cerebral cortices is largely unknown 7 .
In line with the principle of activity together, connection together and strengthening together 7 , the coactivation of auditory cortical neurons, which receive synapse inputs from hair cells on the cochlear basilar membrane and encode words or letters with different sound frequencies in early life, induces mutual synapse innervations among these neurons to recruit intramodal primary associative memory cells that store these unitary sound signals. As cortical neurons possess a few-fold difference in their excitability 44,204 , it may be postulated that the neurons with the highest excitatory state are dominantly activated. The afterdischarge of the neurons initially activated by the first letter or word coincides with the discharge of the neurons activated by the second one, the afterdischarge of the neurons for the second letter or word coincides with the discharge of the neurons for the third one, and so on. The coactivation of these neurons evokes their mutual synapse innervations, which may constitute the integrative storage of the letters in a given word or the words in a given sentence. With repeated learning of this sentence or word, this group of auditory cortical associative memory cells is strengthened in its mutual synapse innervations and activities. The recruitment and refinement of this group of auditory cortical associative memory cells confer the consolidated memory of this word or sentence for subsequent retrieval. In the subsequent lifespan, sound signals to the auditory system become complicated; they are often reorganizations of unitary sound signals, including letters and words. The learning of these reorganized unitary signals will strengthen, via their synapse innervations and excitability, the corresponding associative memory cells that have stored the unitary sound signals, in order to encode these newly heard words and sentences, which will be preferentially activated in memory retrieval.
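The afterdischarge-overlap chaining described above can be sketched as a check for overlapping activity windows. The afterdischarge duration and word-onset times below are arbitrary illustrative values, not measurements:

```python
# Hypothetical sketch: the afterdischarge of the neuron group for word N
# overlaps the discharge of the group for word N+1, so adjacent
# word-encoding groups are coactive and can become mutually innervated.

AFTERDISCHARGE = 1.5  # assumed afterdischarge duration (arbitrary time units)

def coactive_pairs(onsets):
    """Return index pairs of word groups whose activity windows overlap."""
    pairs = []
    for i in range(len(onsets) - 1):
        if onsets[i + 1] < onsets[i] + AFTERDISCHARGE:  # temporal overlap
            pairs.append((i, i + 1))
    return pairs

onsets = [0.0, 1.0, 2.0, 3.0]   # words presented one per time unit
pairs = coactive_pairs(onsets)  # adjacent groups overlap -> chained innervation
```

Each adjacent pair of word groups is coactive, so the chain of overlaps links the whole sentence, which is the proposed basis for its integrative storage.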
In addition, glutamatergic associative memory cells suppress the activity of other neurons through GABAergic associative memory cells and lateral inhibition, so that they themselves are preferentially activated for memory retrieval 16,17 .
Similarly, the coactivation of visual cortical neurons that receive point-by-point synapse innervations from retinal cone cells in early life evokes mutual synapse innervations among these neurons in order to recruit intramodal associative memory cells that store unitary signals (photon beams with different intensities and colors) in visual images. There is a proportional relationship between neural activity strength and stimulus intensity 17 , so the neurons receiving stronger light are more active. In line with the principle of neurons becoming active together, connecting together and strengthening together 7 , mutual synapse innervations among these strongly active neurons will be dominant. These active neurons are recruited to become a group of intramodal primary associative memory cells that fulfill the integrative storage of strong light beams in given visual images. In the meantime, the axons of these associative memory cells may project to visual association cortices 205,206 and make convergent synapse innervations onto their neurons to recruit secondary associative memory cells 32 . This transfer from primary to secondary associative memory cells fulfills a transfer of image signals, especially strong photon beams, into integrative storage at the secondary level, as well as allowing primary associative memory cells in the visual cortex to receive new signals. As the neurons in the visual cortex correspond to the retinal cone cells in a point-by-point manner, intramodal primary associative memory cells in the visual cortex receive major and minor synapse innervations based on their activity strength as stimulated by signals from cone cells.
Secondary associative memory cells in visual association cortices mainly receive convergent synapse innervations from active primary associative memory cells with major synapse innervations and with active synapses converted from silent ones, such that the major features of images undergo integrative storage and subsequent retrieval. Our suggestions are supported by a recent report that visual association areas are recruited during memory formation 207 . In subsequent associative learning, based on the reorganization of unitary signals in various new images, the portion of associative memory cells reactivated by those reorganized unitary signals will be integrated together through the conversion of inactive/silent synapses into active synapses among them to fulfill the integrative storage of new associated signals 180 .
In practice, intramodal and cross-modal associative learning and memory occur simultaneously, especially in the association of visual and auditory signals. For instance, unitary signals in visual images are associated with verbal signals during social activities, such as family activities, personal communications and classroom studies, in which each feature of a visual image is given a clear definition by words or sentences. During social interactions, numerous associations are formed between unitary signals from the visual modality and words/phrases from the auditory modality. These associations at the unitary level will confer the learning of complicated information based on the reorganization of these unitary signals and the reorganized integration of associative memory cells. After cross-modal associative learning, individuals are able to fulfill the reciprocal recall of the associated signals, i.e., one signal evokes the recall of its associated signals, or the other way around, as well as the automatic recall through one modality of signals that have been learned through another modality, i.e., the view of images is converted into verbal recall, or the other way around 7 . There are two mechanisms underlying these processes. In early life, the learning of associated signals from two or more modalities coactivates sensory cortical neurons in these modalities and induces mutual synapse innervations among them. Visual cortical neurons that encode unitary signals in an image mutually innervate auditory cortical neurons that encode words or phrases, e.g., the neurons for unitary signals in an image of a lemon connect to the neurons that encode the words "lemon", "yellow" and "oval", based on their coactivation in initial learning 7 .
Numerous associations between unitary visual signals and auditory signals in social activities induce mutual synapse innervations between visual and auditory cortical neurons to form thousands upon thousands of cell pairs, i.e., primary cross-modal associative memory cells. Their active states grant memory retrieval. The accumulation of these associative memory cells that encode pairs of unitary signals will confer the learning of complicated signals based on the reorganization of these unitary signals. In postnatal development, the capabilities of axon growth and synapse formation are gradually attenuated 11 . The learning of complicated signals during aging may utilize another mechanism for memory, i.e., the coactivity-dependent upregulation of associative memory cells in their excitability and mutually innervated synapses 7,17,32 or the activity-dependent conversion of inactive synapses into active synapses 180 . As long as these functional upregulations are maintained at a sufficiently high level, these complicated signals can be retrieved automatically and/or by cues.
Through coactivation-induced mutual synapse innervation for recruiting associative memory cells and coactivation-induced functional upregulation among associative memory cells, individuals can gradually memorize associated signals from unitary to reorganized unitary forms, i.e., the transformation of simple into complicated information storage, in a topic-related manner 8 . Initially, the associations of simple images with different intramodal features with words based on letters activate visual and auditory cortical neurons, respectively. With their mutual synapse innervation, intramodal and cross-modal associative memory cells are recruited, including associative memory cells for pictures and letters as well as for picture-word pairs. The repeated activation of these associative memory cells through practice will induce their activity-dependent plasticity and recruit more associative memory cells, i.e., a coactivity-dependent positive cycle in the recruitment and refinement of associative memory cells. The first grade of associative memory cells is thereby formed 7 . With the accumulation of associative memory cells that store unitary signals, these cells become recruitment-ready neurons for associative memory cells that encode complicated associative signals. Complicated visual and auditory signals can be associatively learned through activating the first grade of associative memory cells in visual and auditory cortices. Their mutual synapse innervation and activity upregulation lead to the formation of the second grade of associative memory cells, which encode complicated images and sentences organized from unitary signals. Thus, numerous groups of the first and second grades of associative memory cells are accumulatively recruited in lifespan learning. In advanced learning, multiple grades of associative memory cells are recruited to encode more complicated signals.
Once different groups and grades of associative memory cells have accumulated, subsequent learning may rely on their activity-dependent functional upregulation, which makes them easy to activate for rapid memory. Reading books or viewing images induces intensive activity in the groups of associative memory cells that encode those sentences and images, leading to activity-dependent functional upregulation of these cells. Their low threshold potentials for firing spikes, and the active synaptic inputs that drive them, allow cues to reactivate them preferentially for the recall of images and sentences, and even allow the spontaneous activation of these cells to drive secondary associative memory cells for free associative thinking. The activity of these associative memory cells will lead to the behavioral presentation of memory if they successfully drive the activation of memory-presentation neurons in the motor cortex 7 .
It is noteworthy that complicated signals can also be dissected and memorized through the formation of associative memory cells that encode multiple signals 9 . Complicated signals are composed of numerous unitary signals, which can be detected and dissected by different sensory systems and intramodal sensory neurons. While learning such complicated signals, associative memory cells are recruited to integrate multiple simple signals, based on the random association of these unitary signals that induces mutual synapse innervation among their corresponding recruitment-ready neurons. Associative memory cells with different abilities to integrate associated signals are thus recruited, and the activation of subsets of these cells leads to the selective recall of complicated signals 7 .
There are three sources of synaptic input that drive and maintain the activity of associative memory cells: new synaptic innervations from coactivated brain areas, innate synaptic inputs formed during development, and synaptic inputs from the arousal system. The latter two driving forces activate neurons that are ready to be recruited as new associative memory cells. The ascending reticular activating pathway from the brain stem and the thalamus receives various sensory inputs and widely innervates the cerebrum to permit wakefulness and consciousness 181,182,208 . Ascending axonal pathways from the cholinergic nuclei, midbrain raphe nuclei and locus coeruleus innervate the forebrain to maintain alertness and consciousness by releasing acetylcholine, serotonin and norepinephrine 183,184,186 . This arousal system maintains the basal activity of associative memory cells, enabling them to integrate innate and new synaptic inputs specifically and to memorize associated signals. The arousal system may also activate recruitment-ready neurons and thereby influence the efficiency of associative learning, the ability of associative memory cells to facilitate memory retrieval, and the capacity of primary and secondary associative memory cells to link memory with cognitive processes and emotional reactions.
Learning efficiency is influenced by neuronal excitability, synaptic responsiveness and the availability of recruitment-ready neurons 7 . Neurons ready to be recruited for storing newly associated signals may be those that already encode previously learnt signals from specific synapse innervations. These stored signals may be closely relevant to the associated signals to be learned, and such recruitment-ready neurons can be activated by topic cues during attention. The number of recruitment-ready neurons therefore influences how easily information is acquired and memorized, as well as how efficiently complicated signals can be learnt. This is one reason why the efficiency of associative learning depends on whether individuals are knowledgeable in the topic to be learnt. In addition, cortical neurons are diverse in their synaptic inputs and intrinsic properties 44 ; neurons with more synaptic inputs and lower threshold potentials are more easily activated to fire spikes, yielding high learning efficiency 11,162 , which triggers a chain reaction of intensive spiking and microRNA expression changes for axon prolongation and synapse innervation 9,73 . Thus, activity-dependent upregulation of neuronal excitability and synapse innervation facilitates the recruitment of associative memory cells and thereby influences learning efficiency.
The efficiency of memory retrieval is influenced by the number and functional state of associative memory cells, as well as by the coactivity-dependent positive cycle between their recruitment and refinement 7 . Under conditions of normal consciousness and alertness, the number of recruited associative memory cells is proportional to the number activated during retrieval, so that retrieval efficiency is consistent with learning efficiency 11,162 ; the functional state of associative memory cells affects how easily they are activated during retrieval 17 ; and the coactivity-dependent positive cycle between the recruitment and refinement of associative memory cells adds more of them to memory traces. Therefore, the efficiency of memory retrieval is high under conditions of normal consciousness and alertness. Whether stored information can be successfully retrieved also depends on the functional state of memory-output cells, since the functional downregulation of memory-execution cells in the motor cortex leads to an inability to retrieve memory (i.e., memory extinction) even though primary associative memory cells remain functionally well maintained 14,31 . Thus, a large number of associative memory cells with active intrinsic properties in memory traces, together with the coactivity-dependent positive cycle of their recruitment and refinement, leads to automatic memory retrieval after repeated learning and thinking, without the need for cues.
In the transformation from exogenous to endogenous signals and their integrative memorization 7,12,31,32,166 , the efficiency of correlating associative memory with cognitive processes and emotional reactions is a critical issue. In this process, the interactions between primary and secondary associative memory cells through their mutual synapse innervations (Figure 1), as well as the number and functional state of these associative memory cells, should be taken into account during logical reasoning and associative thinking 7 . Thus, the cellular processes underlying the efficiency of learning, storage and retrieval of exogenous associated signals may work similarly in the exogenous-to-endogenous transformation.
In terms of the relationships between associative memory cells and memory patterns, such as declarative (explicit) versus nondeclarative (implicit) memory and episodic versus semantic memory, as well as the transformations between these patterns, our interpretations are as follows. Despite these psychological classifications, there is no clear borderline separating them. Declarative memory is intentional remembering in a clear conscious state, while nondeclarative memory is effortless remembering without conscious awareness 6,33 . In fact, implicit memory is initially formed when individuals pay attention while learning the relevant processes and operations. After long-term practice to become skilled, expressing these processes and operations no longer requires conscious effort. Based on the coactivity-dependent positive cycle in the recruitment and refinement of associative memory cells, repeated coactivation of primary and secondary associative memory cells can recruit more such cells and upregulate their functional state 7,11,15,17,32,166 , as well as strengthen the synaptic connections from associative memory cells to memory-output cells in the motor cortex 14,31 , so that explicit memory can be converted into implicit memory. In other words, there may be an inverse relationship between the number and upregulation of associative memory cells and the requirement for consciousness, a homeostasis of memory retrieval. The view that implicit memory rests on larger numbers of easily activated associative memory cells is supported by the observation that it can usually be expressed spontaneously.
Within explicit memory, episodic memory of individual events can be converted into semantic memory after repeated associative thinking and logical reasoning. This occurs either by strengthening, through central synapse innervation, associative memory cells that store a signal common to the events, or by grouping associative memory cells that store events with similar topics, reorganizing them into a group of memory cells for general concepts that convergently innervate another grade of associative memory cells in an abstractive manner 7 .
Consciousness is the combined state of wakefulness and memory that allows individuals to be aware of and identify themselves and objects in the environment 209 . Normal consciousness may be based on the basal activation of associative memory cells by the arousal system together with their specific activation by associated inputs triggered by sensory cues 7 . Thus, the number and functional state of associative memory cells are proportional to the state of consciousness. The combination of consciousness and a specific alert constitutes attention, in which a specific group of associative memory cells is activated for memory retrieval, while alert-relevant recruitment-ready neurons are coactivated for learning alert-relevant signals. Once individuals are conscious, they have two forms of logical reasoning and associative thinking, i.e., critical versus creative. Critical thinking activates more of the already recruited secondary associative memory cells for evaluation, while creative thinking may generate new secondary associative memory cells for inspiration 7 .
The awareness state can be classified into consciousness and unconsciousness. Sleep can be divided into unconsciousness (slow-wave sleep) and incomplete consciousness (fast-wave sleep) 209 . How do different groups of associative memory cells work together during fast-wave sleep, or dreaming? Dreams are often accompanied by high activity in the electroencephalogram and by behaviors such as rapid eye movement, muscle twitches and active respiration and heartbeat, indicating high activity in the forebrain. Meanwhile, associative memory cells for specific events that have been thought about frequently during the day are activated. Associative memory cells that are intensively activated in the daytime undergo the coactivity-dependent positive cycle of recruitment and upregulation, so these events are played back. Given the inverse relationship between the upregulation of associative memory cells and the requirement for consciousness, associative memory cells with large populations and upregulated function due to repeated learning and thinking can be activated under conditions of incomplete consciousness, such that the playback events are not completely identical to the real ones 7 . Since these playbacks can be recalled and stated, associative thinking and logical reasoning (the integration of endogenous signals) based on primary and secondary associative memory cells can be fulfilled under incomplete consciousness 8 . This viewpoint is supported by the observation that temporal sequences of place-cell activity for a novel spatial experience are detected during the resting or sleeping period preceding the experience. This preplay occurs in disjunction with the replay sequences of a familiar experience. These results suggest that internal neuronal dynamics during rest or sleep organize cellular assemblies into temporal sequences that contribute to encoding a relevant novel experience in the future 210 .
Furthermore, images, odors, tastes and events are represented by word-based language in associative thinking and logical reasoning. During initial learning, sensations, perceptions and events are associated with their corresponding word descriptions, such that associative memory cells encoding both these processes and their word descriptions are recruited. Once these processes are recalled in sequential playbacks, the word descriptions stored in these associative memory cells are initiated to substitute for the complicated images and events, which meets the requirement of speeding up memory retrieval and cognition. The substitution of words for images and events is achieved through the recruitment of more associative memory cells and their upregulation, in a coactivity-dependent positive-cycle manner, by repeated practice. However, if words and these processes are associated improperly, correcting the associations is difficult because of the persistence of the recruited synapse innervations, associative memory cells and their circuits 7 .
Associative memory cells in pathology
The integrative storage and reciprocal retrieval of associated signals are critical for bidirectional alertness and prediction in life. Based on primary and secondary associative memory cells and their multi-grade integration 7 , one signal will induce the recall of its associated signals, and vice versa; likewise, signals learned in one modality can be recalled through conversion into another modality. Individuals are thus able to perform logical reasoning and associative thinking, and to predict future events, in both forward and backward manners. Furthermore, associative memory cells in each of the coactivated brain regions encode both the associated innate signal and the newly learnt signal, and each associated signal is stored in multiple brain areas, which largely reduces the chance of memory loss 8,11 . The storage of multiple signals in a single associative memory cell strengthens the efficiency of memory retrieval 9 . The storage of multiple signals in one cortical area, and the recall of one signal triggered by multiple signals, enable individuals to strengthen their abilities in memory retrieval and well-organized cognition. In this regard, deficits of associative memory cells in their morphology, function or local environment impair memory retrieval and cognition, deficits that are usually associated with neurological diseases and psychiatric disorders.
It is widely accepted that normal consciousness and good attention are important for memory formation 33,211,212 , which can be explained by associative memory cells and their features 7 . With the arousal system maintaining wakefulness and recruitment-ready neurons being activated by topic cues during attention, the activation and activity of these neurons enable them to encode the associated signals. The associative memory cells recruited under wakefulness allow individuals to identify themselves and environmental objects, which constitutes consciousness. Conversely, consciousness based on wakefulness and memory supports the activation and activity of associative memory cells, executing the activity-dependent positive cycle of their refinement and recruitment, so that more associative memory cells are recruited and impressive memories are formed. Therefore, deficits of associative memory cells will make consciousness obscure.
Psychological disorders, such as anxiety, depression and even schizophrenia, are accompanied by unusual memory 143,213 . For instance, fear memory induced by acute stress is often associated with anxiety 202,214 . Optogenetic stimulation of engram cells in the hippocampus activates fear-memory recall and anxiety 22 . Memories of the outcomes of chronic mild stress are associated with depression-like behaviors 215,216 . On the other hand, optogenetic activation of positive memory traces in the amygdala suppresses depression-like behaviors 139 . These data indicate that the formation of associative memory cells induced by different patterns of abnormal stimulation can lead to psychological disorders: acute severe stress recruits associative memory cells relevant to fear memory and anxiety, while chronic mild stress recruits associative memory cells related to negative memory and depression 214,215 .
The proper coactivation of active neurons leads to their recruitment as associative memory cells 7,11 , and the activity-dependent upregulation of associative memory cells facilitates the integrative storage of associated signals 7,14,17,161,162 . These processes constitute the coactivity-dependent positive cycle in the recruitment and refinement of associative memory cells, such that more associative memory cells are recruited. However, excessive upregulation of associative memory cells, for example through the dysfunction of GABAergic neurons in schizophrenia and epilepsy 217,218 , allows associative memory cells to be overly and widely activated. Over-upregulation of associative memory cells in sensory cortices will lead to hallucination, while over-upregulation in cognition- and emotion-related brain areas leads to illusion 7 .
The efficiency of learning and memory declines in an age-dependent manner 219,220 ; indeed, the efficiency of associative learning and memory follows a bell-shaped pattern over the lifespan 11 . In terms of cellular mechanisms, synaptic potentiation matures during postnatal development 180 , and neuronal excitability in cortical neurons is upregulated until it plateaus at postnatal weeks 3-4 155 , which matches the dynamic changes in associative memory well 11 . Neural plasticity and the recruitment of associative memory cells during postnatal development constitute the coactivity-dependent positive cycle of recruitment and refinement, such that more associative memory cells are recruited to increase the efficiency of learning and memory 161,162 . In older mammals, the accumulation of insoluble β-amyloid and phosphorylated tau proteins in the brain impairs axon prolongation and synapse formation 9,73 , suppressing the recruitment and upregulation of associative memory cells, silencing active associative memory cells and/or deteriorating recruited associative memory cells, resulting in memory deficits 7,8,221 . On the other hand, the activity of associative memory cells can strengthen the coactivity-dependent positive cycle of their recruitment and refinement, which prevents the conversion of soluble β-amyloid into its insoluble form and promotes the clearance of β-amyloid by associative memory astrocytes 7,11 . A recent report supports this point, in that coordinated light and sound stimulation reduces the accumulation of β-amyloid 222 .
In age-related neurodegeneration, such as Alzheimer's disease, insoluble β-amyloid may accumulate differently in various brain areas. For instance, optogenetic activation of engram cells that lack increased synaptic strength and dendritic spines under protein-synthesis-inhibition-induced amnesia still leads to memory retrieval 140 . Optogenetic activation of hippocampal engram cells leads to memory retrieval in a transgenic mouse model of early Alzheimer's disease, even though these mice show amnesia when natural recall cues are used 137 . Besides indicating the wide distribution of memory traces for signal storage and retrieval, these results suggest that the areas involved in natural memory retrieval, rather than the memory-trace cells themselves, are dominantly impaired by β-amyloid deposition, and that the areas responsible for memory retrieval are not specific to a given memory. In this regard, synaptic connections from associative memory cells to memory-output cells should be strengthened in the early stage of Alzheimer's disease 7,14,31 .
In memory maintenance versus extinction, the recruitment and refinement of associative memory cells do not decline significantly, but the activity of memory-output neurons in the motor cortex is lowered 14,31 . The sustained presence of associative memory cells, together with the recruitment of more such cells during repeated brain activity, allows memorized signals to be retrieved throughout life, as long as their innervations onto memory-output neurons successfully drive the latter to be functionally active. It is noteworthy that memory retrieval shows different patterns with age, occurring in spontaneous, cue-induced and real-object-triggered manners. For instance, spontaneous retrieval often occurs in childhood or during brain excitation, cue-induced retrieval usually occurs in young people and adults, and real-object-induced retrieval occurs in senior individuals. In addition, when many brain areas are highly excited, such as during euphoria, extreme fear or strong stimulation, more associative memory cells are recruited through their mutual innervations, so that impressive memories and spontaneous recall of these experiences persist for a lifetime 14,31 . It is difficult to remove the newly formed synapse innervations and recruited associative memory cells in order to relieve fear memory or addiction. Alternative approaches are the avoidance of fear stimuli and the induction of happiness, in order to rebalance these two states and weaken the fear memory, since disuse of the neural circuits related to fear memory, especially those from associative memory cells to memory-output neurons, may drive them toward functional silence. In the brains of individuals with a history of substance abuse or addiction, primary and secondary associative memory cells relevant to these events are recruited in large numbers and in extensive areas under euphoric conditions, leading to potential relapses throughout life 8 .
Strategies to weaken addiction in these individuals include avoiding the environmental cues associated with substance abuse, to reduce the output of the relevant associative memory cells, and establishing alternative sources of happiness, to recruit associative memory cells that innervate memory-output cells in competition with the innervations from addiction memory cells, so that rebalancing these two states strengthens the memory-output pathway for happiness 7 .
Data availability
No data is associated with this article.
Grant information
This study is funded by the National Key R&D Program of China (2016YFC1307100) and the Natural Science Foundation of China (81671071) to JHW.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
“The Elegance of Quantum Mechanics”: a didactic path for high school
This paper describes the design, testing and evaluation of the effectiveness of a pilot teaching-learning sequence on Quantum Mechanics (QM) presented to high school students and teachers. The aim is to construct a path that starts from the discussion of real experiments and arrives at the basic mathematical structure of QM, and to test whether this path can be effectively proposed at high school level. The experimentation consisted of 10 Zoom meetings, held between October 2021 and January 2022. At the end of each of the first nine meetings, each student was given a form aimed at bringing out the reasoning used and the level of understanding achieved. At the end of the course a satisfaction survey was also given. The effectiveness of the activity was assessed by means of all the homework and interviews with 13 students and 6 teachers.
Introduction
The knowledge and understanding of Quantum Mechanics (QM) are essential for every citizen, since this theory has generated a revolution in the way we see ourselves and the world around us. In fact, although it seems very far from the way the world appears (or seems to appear) to us, QM influences practically every aspect of our life, from the most modern technologies to chemistry, from biology to medicine [1][2][3], playing the role of a guiding theory in the construction of new knowledge and constituting the theoretical paradigm of reference for the description of the microscopic world.
Since one of the objectives of physics education is to build a conceptual framework of the discipline (as well as to convey its contents), and to address and disseminate culturally and socially relevant issues - facilitating a profound and meaningful understanding of the world in which we live and providing basic knowledge of the reality that surrounds us - there is no doubt that teaching this discipline is fundamental. To fulfil this purpose, profound reforms have been implemented within school systems on the entire international scene, starting from about 2000 [4][5][6][7].
Despite the numerous proposals put forward by various research groups in almost 20 years of work [8][9][10][11][12][13], at school level, in common textbooks [14][15][16][17][18][19][20][21] and programs, the so-called "Old Quantum Theory" is generally still presented, that is, a set of ad hoc models proposed between 1900 and 1925 in order to justify phenomenologies that could not be explained (or, at least, that were hard to explain) by means of classical physics (i.e., the black body spectrum, the photoelectric effect, the Compton effect, the Bohr atomic model, etc.). These models are presented in a vague and predominantly chronological order, and constitute a set of unstructured information, explained in an incoherent and didactically ineffective logical framework. In this way, we obtain what is usually defined as the "traditional" approach to QM, that is, a fragmentary and inconsistent reconstruction of the discipline that makes quantum physics confusing, obscure and incomprehensible. The choice of limiting the discussion to these semiclassical models is often justified by the idea that the mathematics required for the presentation of the theory of QM is inaccessible to students. In our opinion, instead, this trend arises essentially from the lack of an adequate didactic reconstruction of the contents for secondary education, which is also reflected in the presentation proposed by school textbooks; added to this, there is often also an inadequate knowledge of the fundamental mathematical concepts and tools of QM among teachers (many of whom have a degree in mathematics and have never faced these issues in their studies and training).
The reasons behind our approach
Classical Mechanics can be presented at school even without reading Newton's Principia and without knowing how to solve Cauchy problems, but by handling second-degree equations. Electromagnetism can be treated without using complicated differential equations, as in Maxwell's treatise, but by passing through integrals and derivatives. In the same way, an appropriate educational reconstruction is also necessary for QM, i.e., an adequate didactic reconstruction of the contents, aimed at simplifying the procedures without in any way distorting its spirit and meaning, and which, above all, takes into account not only the conceptual nodes of the discipline but also those of students' learning. Different approaches have been adopted in various proposals (such as, for example, Dirac's "spin-first" approach [22] or that of Feynman's paths [23]); but there is probably no ideal one that addresses all learning problems in the simplest way.
The methodological and epistemological premise of the whole work of our research unit is based on the fact that what we call "reality" is subject to continuous changes, because the status of the fundamental entities that form reality itself is very flexible and changes over time, in the same way that, for example, the concepts of time, space, ether, atom, etc. have changed [24]. In fact, it is thanks to theories - more or less structured - that we can be aware of what reality is. More specifically, physical theories are mental constructions that help us find and define reality, and also use its resources [25]. This fact is valid for all physical theories, such as Mechanics, Thermodynamics and Electromagnetism, and, a fortiori, it is and must be true also for QM. Therefore, taking into account that it is the physical theory with its interpretations that provides our physical image of the world, it follows that, in the approach of didactic research in physics, we must primarily:
• choose a reference theory in one of its formulations;
• identify the concepts of this theory (what it talks about, what it can talk about, and therefore also what it cannot talk about);
• understand the relationships between concepts and their meaning within the theory;
• carry out an appropriate educational reconstruction.
The educational path that we present in this paper is the result of research that has made it possible to carry out a first educational reconstruction of quantum physics, intended as the set of formal principles that every quantum theory must satisfy. The primary purpose is essentially to understand and firmly anchor the meaning and reality of quantum physics within the framework of the theory itself, just as is the case for any other classical theory. The mathematical formalism of the theory, therefore, will not simply be seen as a trivial language to describe the elements of reality, but will become the context within which it is possible to interpret reality itself. This is why the objective of this path is the construction of an axiomatic formal framework accessible to high-school students, starting from some crucial experiments that highlight the fundamental and specific characteristics of the theory of QM, and discussing its conceptual and didactic aspects with the teachers.
The aim of this paper is to present a comprehensible summary of our path, together with the research results that indicate its feasibility at high school level. The path presented here, starting from some clear experimental situations, therefore focuses on the motivations that lead us to introduce, one after the other, the axioms of QM. In summary, we therefore ask:
A1) Why is a Hilbert space associated with each physical system, and why is the state represented by a unit vector?
A2) Why and how are quantum aspects linked to a precise probability theory?
A3) What does the measurement process involve?
A4) Why do we use self-adjoint operators to represent the observables?
The conceptual path
The path is divided into a series of key steps (from now on, the word "complex" will refer only to complex numbers).
1. From experiments to a linear, complex, and probabilistic theory. As a first step, analogies of behaviour between suitably prepared beams of matter (electrons, neutrons, fullerenes, etc.), mechanical waves and electromagnetic beams are analysed. From interference experiments it can reasonably be induced that the theory describing the behaviour of these beams should be linear and able to account for both electromagnetic and matter beams.
Unlike what happens for mechanical waves or electromagnetic radiation, with matter beams the physical quantities do not show an oscillating trend (charge density, mass density, energy, etc. are in fact constant over time for the beams we are considering). In interference experiments, on the other hand, the wave aspects emerge clearly. The wave-like aspects of matter beams are therefore somewhat less evident than those of the electromagnetic ones; we might thus think that such aspects are hidden in a complex description. In fact, the main physical quantities are generally obtained by means of the squares of the fields (for example, the density of electromagnetic energy is given by the sum of the squares of the electric field and the magnetic field). In our description we presumably need to construct quadratic quantities that are independent of time, and which nevertheless allow us to describe the observed interference aspects. With evident symbology, we are induced to pass from expressions of the form sin(kx − ωt), usual for electromagnetic fields, to complex expressions of the type e^{i(kx − ωt)}, which have a constant square modulus but nevertheless allow a wave description.
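A quick numerical illustration of this point: a single complex plane wave has constant square modulus, yet two such waves with a relative phase still interfere.

```python
import numpy as np

# A single complex "matter wave" e^{i(kx - wt)} has constant square modulus
# (no oscillating observable quantity), yet two such waves arriving with a
# relative phase delta still interfere through the cross term 2*cos(delta).
def intensity(delta):
    psi1 = np.exp(1j * 0.0)       # first path, reference phase
    psi2 = np.exp(1j * delta)     # second path, extra phase delta
    return abs(psi1 + psi2) ** 2  # = 2 + 2*cos(delta)

single = abs(np.exp(1j * 1.23)) ** 2  # constant modulus: always 1
bright = intensity(0.0)               # constructive interference: 4
dark = intensity(np.pi)               # destructive interference: 0
```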
Very low intensity experiments show the "granularity" of the detected radiation (electromagnetic or material) and therefore introduce the need for a quantum description. The analysis of the distributions of the observed quanta also highlights the probabilistic nature of the theory.
The analysis of single-quantum interference experiments (typically "which-path" experiments, such as the double slit, the Mach-Zehnder interferometer, and experiments with birefringent crystals and the Fresnel biprism) with electromagnetic and matter beams confirms in more depth the opportunity of a linear and probabilistic theory.
2. Space, state and probability. At this point, we can choose whether to proceed with a theory of waves interacting through quanta or to construct a theory of quanta showing wave-like behaviour. With the first choice we would proceed towards quantum field theory; with the second, as we will do, we proceed towards the construction of QM. The state of our system, i.e., of our quantum, will have to comply with the linearity and complex-numbers requirements that we have highlighted. We will choose a linear space in which to set the theory; the state of a quantum will thus be represented by a vector.
We have yet to understand what the other characteristics of this space are. For example: how many dimensions does it have? What kind of mathematical structure, if any, does it have, besides linearity? What role do a vector's components play? And how do we choose the bases in this space?
To answer these questions, we need to specify how to calculate probabilities. In fact, "which-path" experiments lead us to think that the "classical" way of calculating probability is not adequate. For example, in the double-slit experiment, if an electron has a probability P1 of being detected in area A of the screen knowing that it passed through slit 1, and a probability P2 of being detected in the same area A knowing that it passed through slit 2 (events that we spontaneously understand as independent), it does not follow that the probability of the electron being detected in A is equal to P1 + P2.
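This amplitude bookkeeping is easy to demonstrate numerically; the probabilities (0.2) and the phases below are illustrative, not taken from any real experiment.

```python
import numpy as np

# Toy double-slit bookkeeping: with complex amplitudes a1, a2 for "detected
# in A via slit 1/2", the quantum probability is |a1 + a2|^2, which need not
# equal the classical sum P1 + P2 = |a1|^2 + |a2|^2.
a1 = np.sqrt(0.2)                        # amplitude via slit 1, P1 = 0.2
a2 = np.sqrt(0.2) * np.exp(1j * np.pi)   # via slit 2, P2 = 0.2, opposite phase
P1, P2 = abs(a1) ** 2, abs(a2) ** 2
P_classical = P1 + P2            # 0.4, the "classical" sum
P_quantum = abs(a1 + a2) ** 2    # ~0, destructive interference in area A
```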
Since the probabilities must be positive numbers which add up to 1, a fairly natural way, but different from the classical one, is to consider the projections of a unit segment on orthogonal axes; the sum of the squares of these projections will then give 1. More formally, let us take an orthogonal basis. The number of possible events will match the number of space dimensions. In this way, the state of the considered system will be given by a unit vector, whose projections on the axes (taken in square modulus) will correspond to the probabilities that each of the possible outcomes will occur. Independent events will then correspond to orthogonal segments (A2).
3. Scalar product. The need to consider orthogonality between segments leads to the introduction of a scalar product. Since the space is complex, the only way is to introduce a sesquilinear form. Leaving aside questions of completeness that we deem unnecessary at this stage, a linear, complex space with a scalar product is what is called a Hilbert space (A1).
4. Collapse of the state. Single-quantum polarization experiments [26] are taken as emblematic of the description of measurement in QM. Through these experiments, the postulate of state precipitation (A3) is introduced. Let us consider, for example, situations in which the state is changed, typically single-quantum polarization experiments. If we let photons with fixed linear polarization go through a horizontal polarizer followed by a detector, in general only the fraction cos²θ (where θ is the angle between the direction of polarization and the horizontal) will be revealed, i.e., each photon will pass with a certain probability. The passing photons are polarized in the direction of the polarizer, horizontally in our case: their state has changed or, in jargon, has precipitated, i.e., it has become the one dictated by the polarizer (A3). Measurements, beam splitters, etc.
change the state of a system, hence we need to provide a mathematical representation of this fact. The concept of a (linear) operator is therefore introduced: an object that associates to every vector of the space another vector, respecting the linearity of the structure.
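A minimal numerical sketch of the single-photon polarization bookkeeping described in step 4; the angle θ = π/3 is illustrative.

```python
import numpy as np

# Single-photon polarization as a two-component unit vector on the basis
# {|H>, |V>}: the polarizer transmits with probability cos^2(theta) and the
# transmitted photon "precipitates" into |H>.
theta = np.pi / 3
psi = np.array([np.cos(theta), np.sin(theta)])  # state before the polarizer
norm = psi @ psi                                # unit vector: 1
p_pass = abs(psi[0]) ** 2                       # cos^2(pi/3) = 0.25
post = np.array([1.0, 0.0])                     # precipitated state: |H>
```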
Operators associated with the action of some parts of the Mach-Zehnder interferometer are analysed (action of the beam splitter, interference of the beams, etc.).
5. Measurement and projection operator. We thus move on to the formalization of the measurement process. Let us consider an "observable" quantity G (relative to a certain state), which can provide results g1, ..., gn, and the state of the system written on the basis identified by g1, ..., gn. Measuring G consists in obtaining one of the possible values gi among g1, ..., gn. The probability with which this result will be found is given by the square modulus of the projection of the state on the i-th axis. Immediately after the measurement, the state of the system will be given by a unit vector along the i-th axis.
The idea of a projection operator is thus born. The fact that a measurement provides a real-number result pushes us to consider linear combinations of projectors with real coefficients. In fact, a linear combination of orthogonal projectors is exactly the operator we are looking for: the one that projects onto the subspaces identified by the possibilities given by the measurement. If we identify the coefficients of the linear combination with the possible outcomes of the measurement, we obtain an operator that, starting from the state of the system, gives us both the probabilities of the single events, i.e., the Born rule, for the possible results of the measurement (the spectrum of the operator), and the state of the system after the measurement (the projection postulate).
6. Properties of projection operators. The concept of projector (projection operator) is explored by observing that projectors are idempotent. It is then observed that projectors are self-adjoint operators, that is, they can be moved from one side of a scalar product to the other without altering the result. From this it follows that a linear combination of them with real coefficients has the same property. We will call self-adjoint all operators that enjoy this property. The eigenvalues and eigenvectors of self-adjoint operators are thus related to fundamental physical concepts: the eigenvalues provide the possible results of a measurement, while the eigenvectors provide the axes of the basis associated with the observable we are considering. 7.
Self-adjoint operators. It is therefore natural to associate a self-adjoint operator to each observable of the system (A4). Of course, it is necessary to observe explicitly that not everything that is measurable becomes an observable (in this simple scheme, for example, mass is not, and neither is time). At this point it may be appropriate to show how, from the knowledge of the operator associated with an observable, it is possible to obtain the results of the measurement and the states in which the system may be after the measurement (the inverse of the procedure of step 5).
8. Eigenvectors and eigenvalues. The concepts of eigenvector and eigenvalue are studied. The eigenvectors will provide the axes of the basis identified by the self-adjoint operator.
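Steps 5 to 7 condense into a few lines of linear algebra; the outcome values g1, g2 below are illustrative.

```python
import numpy as np

# An observable built as a real linear combination of orthogonal projectors,
# G = g1*P1 + g2*P2: its eigenvalues are the possible measurement results
# and its eigenvectors span the measurement basis.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
P1, P2 = np.outer(e1, e1), np.outer(e2, e2)
g1, g2 = 0.5, -0.5
G = g1 * P1 + g2 * P2

idempotent = np.allclose(P1 @ P1, P1)      # projectors satisfy P^2 = P
self_adjoint = np.allclose(G, G.conj().T)  # real combination is self-adjoint
spectrum = np.linalg.eigvalsh(G)           # possible results: [-0.5, 0.5]
```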
Two- and three-level systems are studied. In two-level systems, the energy operator and the interaction-energy operator are introduced.
9. Position, momentum, energy and angular momentum operators. At this point, if we are measuring only one observable, we are able to define the corresponding operator starting from the experimental results and, vice versa, to calculate the results starting from the knowledge of the operator. In other cases, the situation is more complicated. In Classical Mechanics, once the initial state of the system (position and momentum) is defined, all the quantities associated with the system (energy, angular momentum, etc.) are automatically defined as well. In QM, on the contrary, the state contains only information on the probability of obtaining a certain result, and state and observable are two separate concepts. We start by studying the observables position and momentum. We then define the position and momentum operators, bearing in mind that, in QM, the physical link will no longer be p = mẋ (as in the classical case), not even between the results of the measurements. Since the definition of the position operator involves dealing with states with infinitely many components, a graphical method is proposed to visualize vectors in an infinite-dimensional space. The construction above, along with the strategy employed to construct the operator corresponding to a certain measurement, leads immediately to consider the position operator as the operator of multiplication by "x". Subsequently, it is highlighted that the square modulus of the function resulting from this representation, called the wave function, corresponds to the expression we used to describe the intensity of the interference figure created in the double-slit experiment. Finally, the momentum operator, the energy operator and the angular momentum operator are defined. The time-evolution operator is introduced, and the Schrödinger equation is written.
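As a rough illustration (our own sketch, not part of the teaching path), the position and momentum operators can be caricatured on a finite grid with ħ = 1: X multiplies by x, and P is −i d/dx via central differences. Applied to a smooth state, the commutator [X, P] acts approximately as multiplication by i.

```python
import numpy as np

# Finite-grid caricature of position and momentum operators (hbar = 1).
N, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
X = np.diag(x)                                             # (X psi) = x * psi
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
P = -1j * D                                                # momentum sketch
C = X @ P - P @ X                                          # commutator [X, P]
psi = np.exp(-x ** 2)                                      # smooth test state
approx_i_psi = C @ psi    # close to 1j * psi away from the grid edges
```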
The didactic methodology
The conceptual path presented in Section 3, born as the development and implementation of an educational path proposed in a training course for teachers organized in 2019-2020, was translated into the activity "The Elegance of Quantum Mechanics", organized in 2021-2022 within the Scientific Degree Plan (Piano Lauree Scientifiche - PLS) of Physics of the University of Milan. This activity was proposed jointly to a group of teachers and students from the last three years of high school (120 participants in total, 30 of whom were teachers) and took place over a period of four months (from October 2021 to January 2022), through 10 weekly meetings of one hour and a half each. Given the situation linked to the Covid-19 pandemic and the large number of participants, it was decided to deliver the course remotely, through the Zoom platform, integrating lessons with slides, questions with Kahoot! (useful to take stock of the situation and verify the correct understanding of the topics by the students), and interactive graphic examples created with GeoGebra (visually very effective and intuitive; an example is available at: https://www.geogebra.org/m/aqf2dgn3).
At the end of each of the first 9 lessons, a Google Form was sent to the students (an example, translated into English, can be found at: https://forms.gle/e6o78B4Pq932Sani7) with questions and exercises related to the topics covered, to be carried out and delivered before the start of the next meeting. To stimulate active participation by students during the meetings, as well as a constant commitment to completing the forms proposed at the end of each lesson, the course was also delivered as a PCTO (Percorsi per le Competenze Trasversali e l'Orientamento - Paths for Transversal Skills and Orientation) activity, compulsory for Italian high-school students, for a total of 25 hours.
Course effectiveness for students learning: analysis and results
The evaluation of the path's effectiveness was carried out by collecting and analysing different types of data deriving from:
• an anonymous satisfaction survey (given at the end of the course);
• 9 Google Forms, containing a total of 38 open questions and 24 exercises (about 2000 answers overall);
• 19 individual interviews: 13 with students and 6 with teachers.
In order to conduct an analysis based on meaningful data, for each module we excluded the answers provided by students who did not continuously follow or answer the previous lessons and modules, or who did not follow the lesson on the topic of the form (our analysis was thus limited to 1972 answers out of the 3018 given). The only exceptions concerned: the first module, for which we took into account all the answers; the second module, in which the first question was misinterpreted by all students, so we decided to exclude the whole form from our analysis; and the fifth module (corresponding to the lesson on complex numbers), for which we decided to consider the answers of all the participants, it being an independent topic.
The analysis of the open answers was done independently by each of the three authors of this work, cataloguing each answer in one of the following categories: 1) the student used the formalism and concepts presented in an appropriate way; 2) the student did not use the formalism and concepts presented, or used them only to a very small extent; 3) the student used the formalism presented, but improperly mixed together the concepts discussed. In addition, we recorded whether the numerical answers provided in the problems were right or wrong.
From the analysis of the answers given by students, several interesting aspects emerged. We recall some of them.
Module 1: When dealing with a new topic in physics, and above all in modern physics, it is important to know how students imagine the entities they are going to talk about. We must not underestimate these pre-existing ideas, which can derive both from a formal context, such as school, and from informal contexts. A significant path must therefore take previous knowledge into account and start from it, and then, naturally and gradually, lead the student to a more mature scientific point of view. The purpose of the first questions proposed was therefore to observe how students imagine the entities (photons, electrons, protons, etc.) involved in the path. These questions do not have a "right" answer from a scientific point of view, as the visualization of the entities in question is substantially impossible (or at least problematic). Nevertheless, it is natural for students to make a pictorial representation of what they study and, therefore, it is useful to know the most common types of representations. From the answers provided, it emerged that almost all the students imagine the electron (95%) and the proton (98%) as spheres. Conversely, only 12% imagine the photon simply as a sphere; in fact, for 47% of students the photon is a small sphere moving with a sinusoidal motion. Many students drew the electron (53%) and the proton (47%) not alone, but only inside the atom; for others (49%), the proton is a ball of larger dimensions than the electron. As for the atom, about 95% of the students picture it as a planetary system. Finally, as regards the answers relating to the number of energy levels of the hydrogen atom, only 9% correctly recognize that they are infinite; for 65% there is only one level, presumably the one occupied by the only electron in the ground state; and for 10% there are 7 levels, an answer that probably derives from previous knowledge of chemistry, as 7 shells are enough to build the periodic table. These aspects are in accordance with research findings already present in the literature [27][28][29][30][31][32].
Module 4: Most of the students expressed themselves in reasonable words regarding the complex (64%) and probabilistic (80%) nature of quantum theory. The most problematic point concerned linearity: some students were uncertain about what a linear space is and what this entails. It also emerged that the modelling process is deficient: about 30% of the students explicitly believe that it was the quanta themselves that added up (like waves), not the states. A confusion thus emerged between the state of the quantum and the quantum itself. This aspect has already appeared in the literature: some articles on students' understanding of models have shown how they have difficulty distinguishing between a model and the reality described by that model [33], as some of them often consider models as exact representations, magnified or resized, of the "real thing" [34].
Module 6: Concerning the concept of state evolution, the idea that only in Classical Mechanics is it possible to follow the evolution of a state, while in QM it is not, seemed widespread (45% of students). There may also be a confusion between the concept of object trajectory and the concept of state evolution: in fact, the concept of state evolution in the context of Hilbert space, and of how the evolution in the absence of measurements is deterministic (analogously to the classical case), was dealt with in little detail and in too short a time in our path. Much greater attention must be paid to this point in future experimentations. As far as the Hilbert space is concerned, 78% of the students were able to list its characteristics; as for the concept of scalar product, it was noted that, from an operational point of view, most of them (82%) make correct use of it, but, from a conceptual point of view, only a few students recognize its importance and utility in QM (18%). Moreover, 37% of students faced some difficulties in using Dirac's notation; in reality, this result was not a surprise, since there are several studies [35][36] in the literature, especially at the university level, which highlight these difficulties and the inconsistency students show in its use.
Module 7: Most of the students expressed themselves in fairly correct terms regarding the characteristic aspects of the measurement process in QM: what the measurement process involves (88%) and what an operator is (81%). These results were also found by other research groups which developed paths starting from the discussion of key experiments [37]. With regard to matrix calculus, on the other hand, although for almost all the students (92%) it was a complete novelty, they were generally able (69%) to manage it without any difficulties.
Module 9: The use of GeoGebra during the explanation allowed students to have a graphical vision of the concepts of eigenvector and eigenvalue, and this certainly improved the general understanding: in fact, everyone managed, at least graphically, to recognize the eigenvectors and eigenvalues of simple operators. However, it was interesting to note that more than half of the students (55%) recognized vectors having the same direction but opposite orientation as different eigenvectors: this is an aspect to be taken into consideration for a future proposal. As regards the study of physical systems, given a state written as a superposition and the observable written as a matrix, all the students were able to identify the probability of obtaining a specific value; 82% also found those values; and 91% also calculated the mean value correctly. These results show how the use of software for visualizing abstract mathematical aspects plays a significant role in learning. Indeed, there are numerous projects, such as the Physics Education Technology (PhET) project [38], the Quantum Interactive Learning Tutorials [39] and the Physics Applets [40], which aim to create useful tools to help students become more familiar with abstract concepts.
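For concreteness, the kind of exercise described above can be reproduced in a few lines; the state and matrix below are illustrative, not the ones used in the course.

```python
import numpy as np

# A two-level state written as a superposition and an observable written as
# a matrix: Born-rule probabilities and mean value.
psi = np.array([3 / 5, 4 / 5])            # unit state on basis |e1>, |e2>
G = np.array([[1.0, 0.0], [0.0, -1.0]])   # observable, outcomes +1 and -1
vals, vecs = np.linalg.eigh(G)            # eigenvalues [-1, 1], eigenbasis
probs = np.abs(vecs.T @ psi) ** 2         # squared projections on eigenbasis
mean = psi @ G @ psi                      # mean value: 0.36 - 0.64 = -0.28
```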
This analysis allowed us to identify some strengths of the activity, some limits of this work, and some problems that were not foreseen during the preparation and organization of the course. The detailed analysis of the results transcends the scope of this work. However, to give an idea of what was achieved on the students' side, we provide a very brief summary here.
• The mathematical formalism (complex linear spaces, matrices, Dirac notation, etc.) did not prove to be an obstacle to understanding; rather, it was seen by students as a supporting and reassuring aspect, and it was found to be an aid towards conceptual understanding. However, this result was achieved only when the presentation of mathematical concepts made use of interactive geometric manipulation tools (such as GeoGebra).
• Students of the last three years of high school followed the activities; as was to be expected, the ability to solve exercises turned out to be better (at least on average) with increasing schooling. However, the understanding of some conceptual aspects (e.g., explaining why it is necessary to have a complex space to describe a quantum system), also with regard to their formal writing, was on average better for the youngest students than for those of the other two years.
• We were able to identify some problems that we had not foreseen in preparing and organising the course, such as the fact that almost all the students were not familiar with complex numbers and matrices.
• All the main problems encountered by students regarding the issues dealt with during the course, such as the concepts of linearity and self-adjoint operator, were clarified during the following meeting.
• Finally, by evaluating students' homework with criteria similar to those usually used at school, we found that the knowledge and skills acquired were completely in line with those achieved in the "classic" subjects of high-school physics.
In the interviews with teachers, conducted separately from those with students and focused on what teachers thought about the actual feasibility of the proposal in the classroom, it was also highlighted that:
• in compiling the Google Forms, the effort of the students was certainly lower than that which would have been obtained in an ordinary school context, through tests, oral examinations and assessments;
Effects of Microwave Radiation on Human Brain: The Positives and Negatives
Several RF sources exist for commercial as well as defense applications. In this article we focus on the neurological consequences of Microwave Radiation (MWR). Since microwaves impact so many facets of our lives, this study focuses on their health implications. The brain is perhaps the organ most sensitive to MWR, with mitochondrial damage manifesting faster and more profoundly than in other regions. The effects of MWR on brain metabolic pathways have piqued public attention. The possibility for significant numbers of people to be subjected to dynamic, multi-frequency microwave energy is nowadays a reality. Many urban people residing in the high-rise structures of a city come within the main-beam radiation of antennas mounted at a comparable height. Owing to the pervasive presence of MWR, its extensive usage, and the potential for harmful effects, comprehensive analyses of the health risks are imperative. It is crucial, therefore, to assess the level of exposure that is safe for the general population, so as to minimize adverse effects without unduly restricting the favorable uses of microwaves.
Keywords— EM spectrum, Radiation, Microwave, Neurological hazards
I. INTRODUCTION
The electromagnetic (EM) spectrum, as we all know, encompasses all kinds of EM radiation. Radiation is a form of energy that flows and disperses as it does so. EM radiation can take many forms, including visible light from a lamp in your home or radio waves from a radio station. The radio-frequency part of the EM spectrum starts near DC, at about 3 kHz, and goes up to a few THz. From the application point of view, at the extremely low frequencies of the spectrum, the sources are basically the earth and subways. After that come the power outlets: 50 Hz AC mains in India and 60 Hz in the USA. Next comes AM and FM radio: AM radio runs from 530 to 1620 kHz, and FM radio from 88 MHz to 108 MHz, followed by TV transmission [1]. Beyond that, microwave ovens work at 2.45 GHz, and Wi-Fi operates from 2.4 to 2.483 GHz. EM radiation is classified into two zones: the non-ionizing zone and the ionizing zone. There is far more energy per photon in ionizing radiation, given by the formula E = hf, where f is the frequency. So, the higher the frequency, the higher the energy; such radiation can sever molecular bonds and is thus referred to as ionizing radiation. Microwave radiation has a lower frequency and therefore lower photon energy. Energy, however, is not only defined by E = hf; energy also equals power multiplied by time. Therefore, if the power is greater, less time is needed to deposit a given energy, and if the power is lower, the same deposition takes longer. To focus on the detrimental effects of MWR, let us consider an instance. There are AM towers, with a standard frequency range from 530 to 1620 kHz, that can transmit approximately 100 kW and up to even 1 MW of power. The telecommunication authorities, however, take care that there is no residential building or complex within a 1 km radius.
They know, therefore, that such transmissions can cause a health hazard, and they take that precaution. Microwave devices, on the other hand, operate between 300 MHz and 300 GHz. There are airports, railway stations, universities, schools and numerous Wi-Fi-enabled facilities, and there are plans to build entire cities with Wi-Fi coverage. This, in fact, can also create health issues for several individuals. Several technologies in India operate in the 800 MHz band, including CDMA; GSM operates at 900 MHz and at 1800 MHz; and then we have 3G and 4G. 3G operators in India are allowed to transmit 20 W of power. An additional concession is given to 4G operators: they can transmit up to 40 W of power, and consequently there are more than 6 lakh towers in India.
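The non-ionizing argument above (E = hf) is easy to check numerically; the two frequencies below (Wi-Fi band and a representative ultraviolet frequency) are chosen for illustration.

```python
# Photon energy E = h*f in electron-volts: microwave photons carry roughly
# five orders of magnitude less energy than UV photons, which is why
# microwaves sit in the non-ionizing zone.
h = 6.626e-34   # Planck constant, J*s
eV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(f_hz):
    return h * f_hz / eV

E_wifi = photon_energy_eV(2.45e9)  # ~1e-5 eV, far below molecular bond energies
E_uv = photon_energy_eV(1.0e15)    # ~4 eV, comparable to bond energies
```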
A. Specific Absorption Rate
Let us consider a call scenario. A mobile phone transmits 1 W of power to the nearest base station (say, BS1), and BS1 transmits 20 W of power to communicate with that phone. Then, via a switching network, BS1 communicates with another base station (BS2), which transmits 20 W of power to the receiving cell phone, which in turn transmits 1 W of power. So, for one mobile phone connection, 1 + 20 + 20 + 1 = 42 W of power is consumed. Surprisingly, the effective power used by the cell phones and cell towers for the communication itself is only about 0.0000001 W, which means that nearly all of the 42 W is dissipated in the atmosphere. When a call is initiated, roughly one-third of the handset's radiated power is absorbed in the human body; in particular, if the cell phone is held erect against the ear, about one-third of the power goes towards the head. As far as the cell towers are concerned, they affect people living in direct proximity.
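The power bookkeeping above amounts to:

```python
# Power budget for one call: two handsets at 1 W each and two base stations
# at 20 W each, versus the tiny effective received power -- almost everything
# is dissipated in the atmosphere.
handset_w = 1.0
base_station_w = 20.0
total_w = 2 * handset_w + 2 * base_station_w  # 1 + 20 + 20 + 1 = 42 W
useful_w = 1e-7                               # effective power actually used
wasted_w = total_w - useful_w                 # ~41.9999999 W lost to the air
```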
B. Effects of SAR
The dose of absorbed energy is measured in terms of the Specific Absorption Rate (SAR), expressed in watts per kilogram (W/kg) of body weight. In fact, the SAR value of a cell phone determines the radiation from the device. In 1998, the SAR limit (safety threshold) was set at 1.6 W/kg, which translates into daily cell phone use of only 6 minutes; the mobile phones that people use were originally planned for just 6 minutes of daily usage. Allowing a safety margin of 3 to 4 times, a person should not use a cell phone for more than 18 to 24 minutes per day. Thus, the lower the SAR value, the better. Typical SAR values vary from about 0.3 W/kg to about 1.6 W/kg. Depending on the holding posture, up to 90% of the radiation may go towards the human body. It is of utmost importance to reduce exposure to Radio Frequency (RF) energy. RF electromagnetic fields have been classified as a possible carcinogen (class 2B) in the factsheet released by the International Agency for Research on Cancer (IARC) [2], [3]. Scientists have recently been persuading the WHO to reclassify this as class 2A, known as probable carcinogen, or even class 1, known as human carcinogen [4]. Based on several epidemiological studies reporting the incidence of leukaemia in children and brain tumours in adults after prolonged exposure to magnetic fields (MF) of approximately 0.4 µT, it has been speculated for many years that both residential and industrial exposures to extremely low frequency (ELF) MF may be potentially carcinogenic. The carcinogenic potential of cellular communication systems has been determined to be limited to glioma [5].
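The daily-use figures quoted above follow from simple arithmetic:

```python
# The 1.6 W/kg SAR limit was set assuming 6 minutes of use per day; applying
# the 3-4x safety margin mentioned above yields the 18-24 minute range.
baseline_min = 6
margin_low, margin_high = 3, 4
safe_low = margin_low * baseline_min    # 18 minutes per day
safe_high = margin_high * baseline_min  # 24 minutes per day
```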
III. POSITIVE ASPECTS OF MICROWAVE RADIATION
A. Microwave Ablation Technique
In a microwave field oscillating at 9.2 × 10^8 Hz, the charge switches sign almost 2 billion times a second. As a radiation-induced oscillating electric charge interacts with a water molecule, the molecule flips. To optimise this interaction, microwave radiation is tuned close to the natural frequency of water molecules. The electrical charge on the water molecule flips back and forth 2 to 5 billion times a second as a result of the microwave energy impacting the molecules, depending on the frequency of the energy. The vigorous motion of water molecules raises the temperature of the water, since temperature is an indicator of how quickly molecules move in a medium.
Microwave ablation (MWA) is yet another minimally invasive cancer procedure. Ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI) are all used in MWA to guide the placement of a specialised needle-like probe through a tumour. MWA uses microwaves to heat and destroy tumours. MWA is a low-risk procedure with a shorter hospital stay for the patient. Many tumours may be treated by ablation at the same time, and when a new cancer develops, the treatment can be repeated. It is necessary to use a straight needle. The MW generator produces EM waves in the MW energy spectrum, which are delivered to the needles through insulated wires. In certain patients with liver cancers that are unsuitable for surgical resection, MWA may be a successful therapy for primary liver cancer and for cancers that have metastasized to the liver; a similar argument applies to lung tumours. The therapy of MWA can be extended to the treatment of brain tumours as well. In most studies, more than half of liver tumours treated with ablation have not recurred. Small liver tumours can be entirely removed with a success rate of more than 85%. Significant treatment-related complications are uncommon, and the trauma is less than that of surgery. Ablation can be used to combat chronic liver cancers on a regular basis. There is no need for a surgical incision, only a slight nick in the skin that does not require stitches. The amount of tumour tissue that can be removed by ablation is limited, due to current equipment constraints.
B. Detection of Brain Strokes using Microwave Tomography (MWT)
In developed countries, cerebrovascular accidents (CVAs), or strokes, are one of the leading causes of physical disability and mortality in adults. CVAs take two forms: ischemic (cerebral infarction, about 85% of cases) and hemorrhagic (about 15%). The type and extent of a stroke must be assessed quickly in order to implement the appropriate treatment. Strokes induce alterations in the dynamic electric permittivity of brain tissues, which can be detected using microwave tomography, according to recent research in biomedical imaging. MRI and CT imaging are currently the only ways to distinguish the type of stroke, but these cumbersome and costly systems require too much infrastructure to be used outside the hospital. A microwave imaging system with 24 antennas was created recently [6] to evaluate an algorithm based on the Truncated Singular Value Decomposition scheme [7], and a previous analysis on 2-D phantoms used the linear sampling approach for brain stroke monitoring [7]. Recent studies in the microwave frequency spectrum have shown that complex permittivity depends on the type of stroke (ischemic or hemorrhagic): complex dielectric permittivity increases by up to 20% in the hemorrhagic stroke region [8], whereas it decreases by about 10% in the ischemic stroke area [8], [9]. Because of this intrinsic contrast mechanism, low cost, and short acquisition time, microwave imaging techniques are a promising approach to stroke classification [10]. EMTensor GmbH (Vienna, Austria) has developed a compact microwave scanner for stroke detection and management [11], which can be integrated into ambulances. MWT could thus offer a portable and cost-effective alternative to existing imaging techniques for non-invasive examination of acute and chronic functional and pathological problems of soft tissues.
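The contrast mechanism described above (complex permittivity up to ~20% higher in hemorrhagic tissue, ~10% lower in ischemic tissue [8], [9]) lends itself to a simple sign-based discrimination. The following is a toy sketch of that idea, not an actual MWT reconstruction algorithm; the healthy-tissue reference value and the ±5% decision threshold are assumptions for illustration:

```python
def classify_stroke(eps_region, eps_healthy, threshold=0.05):
    """Classify a stroke region by its relative permittivity contrast.

    Hemorrhagic tissue shows increased complex permittivity (up to ~+20%),
    ischemic tissue decreased (~-10%); the threshold is an assumed value
    separating genuine contrast from measurement noise.
    """
    contrast = (eps_region - eps_healthy) / eps_healthy
    if contrast >= threshold:
        return "hemorrhagic"
    if contrast <= -threshold:
        return "ischemic"
    return "indeterminate"

# Illustrative values, using an assumed reference permittivity of 45
print(classify_stroke(45 * 1.20, 45))  # +20% contrast -> hemorrhagic
print(classify_stroke(45 * 0.90, 45))  # -10% contrast -> ischemic
```

In a real system the per-voxel permittivity map would first have to be reconstructed from scattered-field measurements, which is the hard inverse problem that the tomographic algorithms cited above address.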
The development of MWT was long held back for a variety of technological reasons, including the high prices of specialized hardware modules and inadequate processing capacity. In recent years, however, remarkable advances in both (mobile) communication technology and computation have opened unique possibilities for MWT research and innovation in clinical and therapeutic applications. Biological tissues are distinguishable in the microwave spectrum and can thus be imaged on the basis of their dielectric properties. The safety aspect of MWT imaging is crucial: in comparison to CT imaging and nuclear medicine, which use ionising radiation, this modality uses a non-ionizing electromagnetic (EM) field. Applications of MWT include radiology of the extremities, breast cancer screening, lung cancer diagnostics, brain imaging, and cardiac imaging. It was discovered in the 1980s and 1990s that malignant tumours and healthy tissues possess different dielectric properties [12]-[15]; this principle is used by MWT to identify the stroke area.
A. Related Articles on Brain Dysfunction
In the present article, various studies reporting the effects of MWR on the brain (especially the hippocampus) have been scrutinized. The article focuses on recent advances in this arena and encompasses reviews of epidemiology, anatomy, electroencephalograms, learning and memory capabilities, and the dynamics underlying cognitive impairments. In broadcasting, the main sources are TV broadcasting antennae and FM radio, which emit frequencies ranging between 80 and 800 MHz. In telecommunications, microwaves arise from the proliferation of mobile phones and their associated base stations and microwave links; they are also found in cordless phones, terrestrial trunked radios, Bluetooth services, and wireless local area networks (LANs). Various experiments have been undertaken to examine the effect of electronic networking technologies on humans, but only a few have identified a statistical correlation between cell phones and brain tumours. Those who habitually use a mobile phone on one side of the head (ipsilaterally) have a two-fold elevated chance of developing brain tumours relative to those who do not [16]-[18]. In a study group examining the risk of glioma and acoustic neuroma according to age at first exposure to mobile phones, the highest odds ratios were found among those first exposed before age 20 years [19]. Many other studies have not endorsed the inference that brain tumors can be caused by mobile phone usage [20]-[24]. A study conducted by the Interphone study group [25] concluded that the risk of malignant tumours such as glioma or meningioma did not increase in mobile phone users. According to research by Larjabaara et al. [26], gliomas are not preferentially located in the most exposed regions of the brain. There were 347 cases of melanoma in the head and neck region, as well as 1184 controls, in this study. Hardell et al.
[27] analyzed the usage of cell and cordless phones and observed no significant risk. Dasdag et al. [28] examined personnel working at a TV transmitting station (frequency bands 202-209 MHz, 694-701 MHz, 750-757 MHz, and 774-781 MHz) and a medium-band broadcast station. Questionnaires on their health conditions were issued to the workers, and the responses showed that symptoms including stress, fatigue, headaches and sleeplessness were experienced. To maintain vital functions, the brain needs a high supply of oxygen and energy; as a consequence, harmful stimuli such as ionising radiation and hypoxia can impair this organ [29], [30]. According to certain research groups, MWR destroys hippocampal structures in rats, impairs synaptic potentiation, lowers neurotransmitter concentrations, decreases the number of synaptic vesicles, and causes memory impairment [31]. The long-term risks from radiofrequency radiation (RFR) exposure from mobile phones appear to be high in children owing to their rapid growth rates and the greater vulnerability of their nervous system. The increasing use of mobile phones in children, a form of addictive behaviour, has been associated with emotional and behavioural disorders [32]. In a study involving 13,000 mothers and children, prenatal exposure to mobile phones was found to be associated with behavioural problems and hyperactivity in children [33]. In a Danish study involving 24,499 children, emotional and behavioural difficulties at age 11 years were higher (23% increased odds) in children whose mothers reported any mobile phone use when the child was 7 years old, compared with children whose mothers reported no use at that age [34]. A recent cross-sectional multicentric (20 study sites) study in the US involving 4,524 children aged 8-11 years indicates that shorter screen time and longer sleep periods independently improve cognitive function.
[35] Another recent study also points to a potential adverse effect of RFR on adolescents' cognitive functions; interestingly, this impairment includes spatial memory, which is related to the brain regions exposed during mobile phone use [36]. Exposure to various non-thermal microwave EMFs can be associated with diverse neuropsychiatric problems, including depression [37]. In a comprehensive literature review, Pall states that "Wi-Fi causes oxidative stress, sperm/testicular damage, neuropsychiatric effects including EEG changes, apoptosis [cell death], cellular DNA damage, endocrine changes, and calcium overload" [38]. Furthermore, these effects from continuous, long-term exposure may be cumulative, and pulsed signals appear to be more biologically active than a smooth carrier wave. Different studies have reported variable effects of exposure to RF-EMF in the vicinity of short-wave broadcast transmitters on sleep parameters, ranging from a higher prevalence of difficulties in the initiation and maintenance of sleep [39] to no effect [40]. Studies evaluating sleep quality among humans exposed to extremely low frequency electromagnetic fields (ELF-EMF) have not reached any conclusive evidence [41]. A recent study confirms the lack of overall effects on sleep architecture, well-being, cognitive function, and of clinically considerable effects on sleep, from RF-EMF similar to that emitted by 3G mobile phones [42], [43]; however, in the same study a reduction of the sigma-1 power spectrum was observed, which might have implications for long-term sleep quality [44]. Contradictory outcomes have been reported in the literature owing to methodological limitations, and hence no final conclusions can be drawn about the potential effect of microwaves on sleep. Brain deterioration and structural damage are two of the detrimental effects of MWR on the brain.
According to an epidemiological review, MWR induces human exhaustion, headaches, excitement, dreams, memory loss, and other neurasthenia symptoms [45]. A cross-sectional study designed to detect neurobehavioural disorders among residents living in the vicinity of base stations, who are exposed to radiofrequency electromagnetic fields (RF-EMF), found that neuropsychiatric complaints such as memory problems, headache, sleep disturbances, depressive attitude and dizziness were considerably more prevalent among the exposed individuals than among controls [46]. In a cross-sectional community-based study in Singapore involving users of hand-held mobile phones, headache was the most prevalent central nervous system symptom compared with nonusers, and its prevalence increased further with increased duration of usage per day [47]. Several studies have identified a correlation between exposure to ELF-EMFs and the onset of Alzheimer's disease [48], [49], though the physiological link is uncertain. One speculation is that RF-EMF induces biochemical modifications, oxidative stress and ROS generation, which are implicated mainly in neurodegenerative disorders such as Alzheimer's disease, Huntington's disease, and Parkinson's disease. This may also be associated with the induction of several neuropsychiatric disorders, including anxiety disorders, depression, and impairment of emotional and mental well-being [50], [51].
B. Adverse effects of MWR on neurological activity: Mechanism of Action
The adverse effects of electromagnetic fields (EMF) are assumed to be indirect effects of several biochemical modifications. Proposed mechanisms include thermal and non-thermal interactions, oxidative stress, decreased melatonin secretion, and disturbances in calcium ion efflux/influx, which in turn influence the cAMP pathway, serotonin/melatonin conversion, and their efflux from the cells of the pineal gland [52]. Interaction with NADH oxidase in the plasma membrane leads to the formation of reactive oxygen species (ROS), which activate matrix metalloproteinases. This activates the ERK cascade (one of the four mitogen-activated protein kinase signalling cascades), which affects cell cycle progression, apoptosis, differentiation and metabolism in a complex manner [53]. Among these, oxidative stress and ROS generation appear to be the most important mechanism of damage to DNA, proteins and lipids [54]. The brain has a high metabolic rate and the highest oxygen demand in the human body, but poor energy stores; hence it is vulnerable to any disruption of energy metabolism, for example by ionizing radiation or hypoxia. The nervous system is said to be practically defenceless against ROS insults owing to its high metabolic rate, inadequate antioxidant protection and reduced cellular turnover [54]. The primary site of oxidative phosphorylation and adenosine triphosphate (ATP) synthesis is the mitochondrion. The redox enzymes and coenzymes of the respiratory chain are found in close proximity to the inner mitochondrial membrane. Mitochondria play a variety of roles in the body, including apoptosis regulation and Ca2+ storage, in addition to energy conversion, and are both the initiating point and the target of several signaling pathways. Neurons are intrinsically very sensitive to a decrease in ATP supply. Mitochondria, as the cell's primary source of energy, are thus vulnerable to MWR damage.
Succinate dehydrogenase (SDH) is one of the most important enzymes of mitochondrial energy metabolism. In an animal model, SDH activity in the rat hippocampus was shown to decrease dramatically 6 hours after irradiation, culminating in alterations of mitochondrial energy metabolism. The terminal complex of the mitochondrial electron transport chain, cytochrome c oxidase (COX), is embedded in the inner membrane of mitochondria [55]-[57]. COX activity has reportedly been inhibited by certain levels of MWR; the findings revealed that the toxic effects of MWR on COX activity compound over time and follow a dose-dependent correlation [56]. A mutation in the gene encoding the antioxidant enzyme copper-zinc (Cu2+/Zn2+) superoxide dismutase (SOD1) can alter its function; such mutations have been found in about 20% of patients with familial Amyotrophic Lateral Sclerosis (ALS), a neurodegenerative condition in which motor neurons in the spinal cord, motor cortex, and brainstem degenerate progressively [58]. MWR alters the expression of genes that code for the respiratory chain, resulting in problems with brain energy metabolism. First, MWR can increase molecular rotation and vibration, as well as the frequency of collisions between molecules, causing chemical bonds to break and thus damaging the mitochondrial membrane [59]. Secondly, by increasing intracellular reactive oxygen species (ROS) and disrupting antioxidant enzymes, MWR causes oxidative modification of biological macromolecules and mitochondrial damage; similar to X-rays, exposure to microwave radiation can generate ROS [60]-[63]. Additionally, MWR triggers intracellular Ca2+ overload, which activates phospholipases and proteases and damages the mitochondrial membrane [64]-[67].
MWR-induced brain damage involves the phosphatidylinositol 3-kinase (PI3K)/Akt pathway, a pro-survival, anti-apoptotic kinase signaling cascade that plays an important role in cellular survival [68], [69]. Akt, a serine/threonine protein kinase also known as protein kinase B (PKB), is the primary effector downstream of PI3K signalling and plays an important role in glucose metabolism by regulating the biological function of insulin [70]. A dysregulated PI3K/Akt pathway has been implicated in various forms of carcinogenesis [71]. MWR can alter the expression of, or cause structural damage to, mitochondrial DNA, resulting in decreased ATP production [72]. In animal models, exposure to microwaves resulted in excessive activation of the N-methyl-D-aspartate (NMDA) receptor signalling pathway; microwaves thus adversely affect learning and memory by modulating hippocampal synaptic plasticity. Maternal exposure to microwaves may also lead to numerous neurological effects in the baby [73]. Table I summarizes the degrading effects of prolonged exposure to MWR.
V. CONCLUSION
In this field of study, considering the negative effects of MWR, the following open problems remain: (a) the dosage, duration, and frequency dependence of MWR-induced disruption of brain energy metabolism all need to be investigated further; (b) the long-term consequences of MWR-induced mitochondrial damage are unknown, and its connection to mitochondria-related neurodegenerative (ND) disorders like Alzheimer's disease warrants further investigation. The lack of uniform standards among laboratories creates a barrier to further growth and knowledge sharing. Many ND disorders, such as Parkinson's disease (PD), Alzheimer's disease (AD), Huntington's disease (HD), and Amyotrophic Lateral Sclerosis (ALS), share common degenerative mechanisms, and many of them are recognized as pathologies of protein aggregation. Several reports seem to point to a connection between MWR exposure and occupational health hazards, but more conclusive results are still required.
On the positive side, several technological advancements may allow the treatment of larger tumours with MWA in the future. Ablation is as yet incapable of destroying microscopic tumours or preventing cancer from recurring. In this article, we have also focused on microwave imaging to diagnose and identify strokes, where the distinct physical signature of a hemorrhage, an increase of about 10% in both the real and imaginary components of the permittivity, can be exploited. Thus, in many respects MWR proves to be very beneficial for life.

TABLE I: DEGRADING EFFECTS OF PROLONGED EXPOSURE TO MWR

Kesari et al. [74]: exposed 45-day-old male Wistar rats to a mobile phone (3G), 2 h/d for 60 d. Findings: 1) DNA strand breaks induced in the brain; 2) MWR-induced apoptosis in the brain via activation of p38MAPK, the pathway of principal stress response.

Xu et al. [75]: exposed cortical neurons of neonatal rats to MWR. Findings: expression of mtTFA mRNA and protein increased.

Lu et al. [76]: exposed primary cultures of glial cells to 2450 MHz MWR (4 mW/cm2 for 2 h/d for 3 d). Findings: an increased intracellular free Ca2+ was found.

A further entry reports mitochondria swollen and vacuolized, with ultrastructural changes increasing with SAR.
Sander et al. [78]: exposed SD rats to MWR with a frequency of 591 MHz (13.8 mW/cm2). Findings: reduced availability of ATP, resulting in brain energy metabolism disorders.
Regulated unfolding of proteins in signaling
The transduction of biological signals often involves structural rearrangements of proteins in response to input signals, which lead to functional outputs. This review discusses the role of regulated partial and complete protein unfolding as a mechanism of controlling protein function and the prevalence of this regulatory mechanism in signal transduction pathways. The principles of regulated unfolding, the stimuli that trigger unfolding, and the coupling of unfolding with other well characterized regulatory mechanisms are discussed.
Introduction
Proteins are the workhorses of biological systems, performing a plethora of tasks, including chemical catalysis, signal transmission, molecular transportation, cellular movement and forming the structural framework of cells and tissues. Protein function is dictated by the primary amino acid sequence which, in turn, determines the three-dimensional organization and dynamic behavior of proteins. Through evolution, proteins have achieved a fine balance between thermodynamic stability and dynamic fluctuations to optimally perform their biological functions in the environmental setting of their host [1]. It has long been understood that the three-dimensional structure of a protein determines its function. Growing evidence, however, establishes the pervasive roles of disorder and dynamics in mechanisms of protein function [2][3][4][5][6]. In fact, nearly a third of all proteins, in all kingdoms of life, contain disordered regions of at least 30 amino acids [7]. Disorder is manifested in different ways, from short, flexible linkers and long "random coil-like" disordered segments to compact but disordered domains and whole proteins termed intrinsically disordered proteins (IDPs) [8]. Structural flexibility and disorder mediate critical biological functions; consequently, these dynamic features are often evolutionarily conserved [9,10]. A noteworthy example is the topologically conserved activation loop in kinases [11]. In the inactive state of Serine/Threonine and Tyrosine kinases (e.g., PKA, IRK), the flexible loop is collapsed on the active site, preventing substrate binding. An evolutionarily conserved kinase activation mechanism involves phosphorylation of this loop, which results in (i) stabilization of an open conformation, and (ii) rearrangement of key catalytic residues, enabling substrate binding and phosphotransfer, respectively [11].
Classic allostery, which mediates signal transduction through the tertiary and quaternary structure of proteins (e.g., hemoglobin, receptor tyrosine kinases), causes structural rearrangements in one functional domain or subunit in response to ligand binding within a distal domain/subunit of the same protein [12]. This regulatory mechanism depends upon the ability of whole proteins or domains to fluctuate between different defined conformations to regulate function. However, accumulating evidence shows that partial or complete protein unfolding is also utilized as a mechanism of regulating function, particularly in signal transduction pathways. Here we introduce the concept of regulated unfolding as a protein regulatory mechanism, provide illustrative examples, and discuss its future implications.
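The idea that a modest input can repartition a protein between folded and unfolded states can be made concrete with the standard two-state folding model (a textbook illustration, not taken from the works reviewed here): the unfolded population is a Boltzmann function of the unfolding free energy, so a shift of a few kJ/mol, roughly the size produced by a phosphorylation or a binding event, can change that population by orders of magnitude.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def fraction_unfolded(dg_unfold_kj_mol, temp_k=310.0):
    # Two-state equilibrium F <-> U with dG_unfold = G_U - G_F:
    # K = [U]/[F] = exp(-dG/RT), so f_U = K/(1+K) = 1/(1 + exp(dG/RT)).
    return 1.0 / (1.0 + math.exp(dg_unfold_kj_mol / (R * temp_k)))

# A stable domain (dG_unfold = 20 kJ/mol) versus the same domain
# destabilized by 15 kJ/mol (hypothetical effect of a modification):
for dg in (20.0, 5.0):
    print(f"dG_unfold = {dg:4.1f} kJ/mol -> "
          f"f_unfolded = {fraction_unfolded(dg):.2e}")
```

Under these assumed numbers the unfolded population rises from roughly 0.04% to roughly 13%, a several-hundred-fold change from a perturbation that leaves the protein's covalent structure essentially intact.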
Protein unfolding as a type of signaling output
Signaling mechanisms often involve posttranslational modifications and/or protein-ligand (e.g., protein, nucleic acid, lipid) interactions that couple an upstream input to a conformational change, which alters function and produces a downstream signal. The extent of the conformational change ranges from subtle, local unfolding events to full unfolding of protein domains. For example, the cyclin-dependent kinase (Cdk) inhibitor p27Kip1 (p27) regulates progression through the cell division cycle by interacting with and inhibiting several Cdk/cyclin complexes in the nucleus [13]. Cell cycle progression to S phase is characterized by rapid turnover of p27 via the proteasome pathway, a fate which is signaled by phosphorylation of p27 on Thr187 [14]. Counterintuitively, this posttranslational modification is performed by the Cdk/cyclin complexes for which p27 is a potent inhibitor [14,15]. Grimmler et al. [14] demonstrated that non-receptor tyrosine kinases phosphorylate Tyr88 of p27, a residue which occupies the active site of Cdk2 [16]. This modification causes an inhibitory 3₁₀ helix containing Tyr88 to be ejected from the ATP binding pocket of Cdk2, partially restoring kinase activity. Intrinsic flexibility of the C-terminal domain of p27 allows Thr187 to fluctuate into close proximity to the Cdk active site and become phosphorylated, creating a phosphodegron that leads to selective p27 ubiquitination and degradation, and ultimately full activation of Cdk/cyclin complexes (Fig. 1). Regulated partial unfolding of the inhibitory conformation of p27 through tyrosine phosphorylation triggers this signaling cascade that ultimately drives progression of cells into S phase of the division cycle.
Regulated unfolding mechanisms are also involved in the control of programmed cell death. Cytoplasmic p53 tumor suppressor initiates apoptosis by binding to and activating pro-apoptotic proteins [17]. This lethal function is inhibited by association of p53 with the anti-apoptotic protein BCL-xL [18]. Release and activation of p53 in response to DNA damage is signaled by a BH3-only protein ligand (PUMA) binding to BCL-xL. A π-stacking interaction between His113 in BCL-xL and Trp71 in PUMA causes unfolding of BCL-xL at an allosteric site comprising two α-helix structural elements and dissociation of p53 from BCL-xL [19]. This example illustrates a signaling mechanism which combines traditional allosteric, ligand binding-induced structural changes with unfolding to release a binding partner.
The Wiskott-Aldrich syndrome protein (WASP) provides an example of both posttranslational modification- and ligand binding-induced unfolding involving several protein domains. WASP regulates cytoskeletal actin polymerization through direct interaction of its C-terminal domain with the Arp2/3 and actin complex. However, this domain is auto-inhibited through tertiary interactions with other domains of WASP. Cdc42, a Rho-family GTPase, signals activation of auto-inhibited WASP to initiate actin polymerization. Cdc42 and the C-terminal domain of WASP compete for binding to the WASP GTPase binding domain (GBD). Activation of WASP by Cdc42 involves partial unfolding of the hydrophobic core of the auto-inhibited conformation of WASP and folding of the WASP-Cdc42 complex. Furthermore, the partially unfolded conformation exposes Tyr291, a phosphorylation site for the non-receptor tyrosine kinase Lyn. This modification further relieves inhibition and enables the unfolding required for the structural switch to the Cdc42-bound conformation [20,21]. This activation mechanism (Fig. 2A) is an example of regulated unfolding wherein two input signals, posttranslational modification and ligand binding, synergize to control the three-dimensional organization and function of WASP with switch-like precision. Utilization of two input mechanisms allows WASP to integrate disparate upstream signals [21] and to respond through regulated unfolding.
However, these two mechanisms are not the only inputs that propagate biological signals through regulated unfolding. For example, phototropins, a class of Ser/Thr kinases, play critical roles in signal transduction in plants. Their activation is signaled by exposure to blue light, when a covalent bond forms between a flavin chromophore and the light-oxygen-voltage 2 domain (LOV2), causing unfolding of an inhibitory Jα helix and consequently the activation of the kinase domain [22,23]. A similar mechanism is utilized by a class of bacterial photoactivatable proteins [24]. These examples have illustrated regulated unfolding mechanisms involving relatively subtle alterations of secondary and tertiary structure.
Protein shape-shifters
Other examples of regulated unfolding include a class of so-called 'metamorphic proteins' ([25,26], Fig. 2B). The intriguing structural shape-shifting of these proteins mediates multiple cellular functions. For example, the chemokine lymphotactin (Ltn) switches between a monomeric α-helical and a dimeric β-sheet sandwich conformation. The monomeric form, which exhibits the classical chemokine fold, binds to the canonical XCR1 receptor. In contrast, the dimeric form binds to heparin and localizes to the plasma membrane [27]. The two mutually exclusive functional states exist in equilibrium under physiological conditions and require global unfolding for their inter-conversion [28]. Mad2, a protein involved in regulation of the mitotic spindle assembly, provides another example of metamorphic behavior. This protein undergoes a significant structural reorganization from an inactive to an active conformation which requires a partially unfolded intermediate [29]. While the in vitro evidence for the alternative structures of metamorphic proteins supports the observations of functional switching in cells, the exact mechanisms that regulate conformational switching of Ltn and Mad2 in vivo are currently not well understood.

Fig. 1. p27 as a signaling conduit. Tyrosine phosphorylation-dependent partial unfolding of p27 triggers signal propagation through the length of the protein and regulates its degradation. Step 1 involves phosphorylation of Y88 of p27 bound to Cdk/cyclin complexes [Cdk2 (K2)/cyclin A (A) here] by non-receptor tyrosine kinases such as BCR-ABL, Src, Lyn, and Jak2, which ejects Y88 from the ATP binding pocket of Cdk2 and restores partial kinase activity. Step 2 involves phosphorylation of T187 within the flexible C-terminal domain of p27 by partially active Cdk2 through a pseudo-unimolecular mechanism (indicated by a gray arrow in the figure). Phosphorylation of T187 creates a phosphodegron signal for ubiquitination of lysine residues within the p27 C-terminus by the E3 ligase SCF(Skp2) during Step 3. Finally, during Step 4, ubiquitinated p27 is selectively degraded by the 26S proteasome, leading to the release of fully active Cdk2/cyclin A, which drives progression into S phase of the cell division cycle.
Proteins such as the glycoprotein MUC2, the major colon mucin, constitute the scaffold for formation of extensive biomolecular networks. Trimerization of MUC2 via its N-terminal domain, coupled with dimerization via its C-terminal domain, forms planar polymers that assemble as stacked gel sheets on the inner epithelium of the colon [30]. Compact, ring-shaped polymers composed of folded monomers are stabilized in the presence of Ca2+ and at low pH (6.2) and transported by secretory granulae to the epithelial cell layer. At pH 7.4 and in the presence of chelating agents, conditions which mimic those at the epithelial cell layer, the N-terminal rings of MUC2 partially unfold, causing an expansion of the proteinaceous network by greater than 1000-fold (Fig. 2C). This expanded polymer is stabilized by covalent disulfide bonds formed within the N-terminal trimerization domains [30]. The use of regulated unfolding maximizes the surface area that can be engaged by the polymer and likely mediates the physical and mechanical properties required for its function as a protective barrier in the colon. The energy expenditure for delivering MUC2 from the site of synthesis to the epithelial layer via the secretory pathway is significantly reduced through the employment of the compact form in early stages of the functional cycle.
Protein unfolding as a mechanism for revealing occluded signals
While some proteins perform unique, well-defined tasks, many exhibit multiple functions, often performed in multiple subcellular locations. A preponderance of these multi-functional proteins is involved in cellular signaling. Translocation between subcellular compartments is mediated by specialized machinery which recognizes specific signals, such as nuclear localization (NLS) or nuclear export signals (NES), which are encoded by short linear motifs within the primary sequence [31]. The transport machinery is always active; therefore, switchable signals are needed to control when a particular protein is transported from one cellular compartment to another. For example, KSRP, also known as FBP2, a protein involved in various aspects of mRNA metabolism [32], contains a 14-3-3ζ consensus binding sequence which is structurally occluded within an atypical KH1 domain. Phosphorylation of Ser193 by AKT causes the KH1 domain to unfold, consequently revealing the 14-3-3ζ interaction site [33]. This regulated unfolding event results in re-localization of KSRP to the nucleus in a 14-3-3ζ dependent manner [33] and reduction of the rate of mRNA degradation [34]. A similar mechanism of exposing structurally inaccessible localization signals is employed by the influenza virus to hijack the nuclear import machinery of its host cell. The C-terminal segment of the viral polymerase subunit PB2 unfolds in order to reveal a bipartite NLS which binds to importin α5 and allows the parasitic enzyme to enter the host cell nucleus and process newly synthesized viral genomic material [35].
The Crk-like (CRKL) adaptor protein, involved in mediating a variety of signal transduction cascades, including subcellular re-localization and activation of kinases and other signaling molecules [36], is another example of a protein which harbors an occluded recognition sequence [37]. An evolutionarily conserved NES is encoded in SH3C, a functionally important domain of CRKL that is otherwise uninvolved in recruitment of signaling molecules. Through a combination of structural and biophysical analyses, Harkiolaki et al. [37] demonstrated that the SH3C domain of CRKL is able to form a domain-swapped dimer that exposes two symmetrically disposed NESs. These signals are structurally occluded in the monomeric form of the protein [37]. Interestingly, domain swapping is also employed by other proteins as a method of regulating function [38,39]. The 'hinge loop', a topologically required region for formation of domain-swapped dimers, switches from its collapsed configuration in the monomeric form to an extended conformation in the dimer. This hinge loop is a favorable location for conditional signaling sequences, such as sites of phosphorylation that regulate function, which become solvent exposed upon dimerization. Tyr926, a conserved phosphorylation site in the 'hinge loop' of the focal adhesion targeting (FAT) domain of focal adhesion kinase (FAK), is modified by Src with greater efficiency when the protein adopts the domain-swapped conformation, affecting downstream signaling through the Ras-MAPK pathway [40,41].
Due to its critical role in maintaining DNA integrity and controlling cell fate, the level and activity of the tumor suppressor p53 are controlled by complex signaling networks involving a staggering number of positive and negative feedback systems [42]. Acetylation of tetrameric p53 by the acetyltransferase p300 enhances specific DNA binding [43]. The acetylation site, located in the C-terminal regulatory domain of p53, is sterically occluded when this domain is phosphorylated, but becomes accessible for p300 modification when p53 binds to DNA, as well as under heat-denaturation conditions. These results suggest an allosterically regulated local unfolding mechanism [44].
Central to the conserved, inter-cellular Notch signaling pathway are the Notch family of modular, single-pass transmembrane receptors [45]. In their resting state, Notch receptors adopt an autoinhibited fold, in which two key proteolytic sites, S2 and S3, located within the negative regulatory region (NRR), are sterically protected from proteolytic cleavage. Binding of Notch on the signal-receiving cell to a transmembrane ligand on the signal-sending cell causes ligand endocytosis as well as simultaneous endocytosis in trans (into the signal-sending cell) of the ecto-domain of Notch [45]. Since the transmembrane domain of Notch remains anchored in the membrane of the signal-receiving cell, the adjacent NRR domain is subjected to mechanical strain, which exposes the occluded S2 proteolytic site for cleavage [45]. In a molecular dynamics study, Chen and Zolkiewska [46] identified the protease-sensitive conformation of Notch1 as an on-pathway unfolding intermediate, in which two Lin12/Notch repeats dissociate from the heterodimerization domain (HD), causing unfolding of a secondary structure element within HD that contains S2. Furthermore, Stephenson and Avis [47] demonstrated, through a combination of atomic force microscopy, biophysical assays and molecular dynamics, that a β-strand containing the S2 site within the NRR domain of Notch2 undergoes stepwise unfolding in response to pulling force. Unfolding of the S2 site exhibited a low energy barrier and was an early event on the unfolding pathway. Experimental evidence associated the unfolding of the S2-containing structural element with proteolytic cleavage by the TACE and ADAM10 proteases, linking mechanically induced unfolding with trans-endocytosis, a critical step in the Notch signaling pathway.
The mechanism of regulated unfolding as a means of exposing hidden signaling sequences is also utilized by a giant amongst proteins, the von Willebrand factor (VWF), which forms ultra-large multimers. Buried protease recognition sites are revealed via local unfolding generated by the tensile force created in response to arterial bleeding. Cleavage by the metalloprotease ADAMTS13 severs the ultra-large VWF multimers into smaller oligomers as part of a regulatory mechanism of hemostasis [48,49].
Together, these findings demonstrate that regulated unfolding to expose otherwise structurally occluded signaling sequences is a frequently utilized and effective mechanism for controlling the functional repertoire of numerous multi-tasking proteins.
Chaperones are molecular machines that recognize misfolded proteins and promote their refolding. Interestingly, cellular stress signals that trigger protein misfolding also initiate chaperone activation. For example, stress-responsive chaperones, such as the bacterial holdases Hsp33 [49,57,58] and HdeA [59,60], are activated upon oxidative stress and a drop in cellular pH, respectively. Strikingly, activation of these chaperones is achieved through conditional domain unfolding [56]. The structural transition to the partially unfolded state confers high affinity towards partially unfolded chaperone substrates, which they bind and 'hold' until environmental conditions favor native protein folding. When normal conditions are restored, substrates are released and allowed to fold independently [60] or are transferred to an ATP-dependent foldase [49]. Exposure of hydrophobic surfaces on the C-terminal substrate-binding domain (the so-called 'sensor' domain) of the chaperone through regulated unfolding provides selectivity and high binding affinity for unfolding/misfolding intermediates. Utilization of folded-to-unfolded transitions in the functional cycle of these disordered chaperones provides two functional advantages. First, this energy-independent mechanism allows maintenance of proteostasis under stress conditions, when the pool of ATP required by ATP-dependent chaperones is depleted. Second, utilization of a disordered chaperone region for substrate recognition enables binding to a broad palette of unfolded protein substrates [61].
The unfolding/folding functional cycle of Hsp33 has been elegantly elucidated by Jakob and colleagues ([49] and Fig. 2D). Under normal physiological conditions, the 'sensor' domain of Hsp33 is stabilized by a Zn2+ ion which coordinates four highly conserved cysteines. In response to oxidative stress, the stabilizing ion is released and, consequently, the C-terminal domain unfolds. Oxidation of the four Zn-coordinating cysteines acts as an allosteric switch that causes unfolding of the previously folded linker connecting the N- and C-terminal domains [62]. This unfolded linker serves as the high-affinity binding site for early unfolding intermediates, while selecting against self-recognition of intrinsically disordered regions within the chaperone, as well as against other cellular IDPs [49].
Unfolding is a means to dramatically decrease the binding affinity between two folded biomolecules. A particularly interesting example of this regulatory mechanism involves the sarcoplasmic reticulum (SR) Ca2+-ATPase (SERCA) and the SR integral membrane protein phospholamban (PLN), which regulate cardiac contractility [55]. Activation of SR and plasma-membrane Ca2+ channels in myocytes increases the cytoplasmic Ca2+ concentration and leads to cellular contraction. SERCA, an SR calcium pump, mediates transport of cytoplasmic Ca2+ into the SR lumen, causing muscle relaxation [63]. PLN binding to SERCA inhibits SERCA-mediated Ca2+ flux from the cytoplasm into the SR. Phosphorylation of PLN at Ser16 by PKA causes unfolding of domains Ia and Ib, positioned in the cytoplasm and the hydrophilic layer of the SR membrane, respectively. This modification reduces the affinity of PLN for SERCA and restores SERCA-mediated uptake of Ca2+ into the SR [55,63]. Through EPR- and NMR-based analyses, Gustavsson et al. [55] identified several partially disordered, alternative conformational states that exist in equilibrium with the folded form. The equilibrium distribution of conformational states for the conditionally unfolded species can be regulated by phosphorylation and lipid binding, which determines the binding affinity between SERCA and PLN and regulates cardiac contraction. The Ia domain of PLN is also involved in signal transduction through interactions with a number of binding partners, a function most likely enabled by the conformational dynamics of this conditionally unstructured domain [55].
Triggers of regulated unfolding
The cellular functions affected by regulated unfolding mechanisms are highly diverse. Furthermore, the extent of disorder induced during signal switching ranges from subtle, local unfolding events [35,37,44,48,64] to unfolding of entire domains [30,55,65,66]. Similarly diverse is the spectrum of molecular triggers that unleash regulated unfolding events.
Environmental stimuli
Changes in chemical environment, such as alteration of pH [30,60], redox conditions [49], exposure to light [22-24,67], and metal ion concentrations [30,55], are signals that trigger cells to activate specific regulatory pathways (Fig. 3). These stimuli can affect the physico-chemical properties of proteins, providing a mechanism for coupling them with structural changes (e.g., unfolding) and downstream signaling. For example, oxidative-stress conditions promote disulfide bond formation between Cys residues in Hsp33 and signal activation of the chaperone through conditional unfolding [66], while chelation of stabilizing Ca2+ ions promotes unfolding and physical expansion of the colonic mucus [30].
Chemical modification
The state of foldedness of proteins is also controlled by posttranslational chemical modifications [14,55,64,65,68]. An illustrative example is the cyclic phosphorylation and dephosphorylation of the SERCA/PLN system [55], which modulates the regulated unfolding mechanism responsible for controlling cardiac contractility. Another example is regulation of the Cdk inhibitory activity of p27 by tyrosine phosphorylation, which disrupts the inhibitory conformation and partially restores Cdk activity [14]. A plethora of other posttranslational modifications, including acetylation [44], methylation, ubiquitination, and sumoylation, are either known to, or likely to, mediate regulated unfolding events within diverse signaling pathways.
Ligand binding
Protein-protein interactions constitute the basis for intracellular signaling. Often, these interactions are triggered by structural rearrangements of one or more of the binding partners, either at the interaction site or at an allosterically regulated site. Several protein re-activation mechanisms employ ligand binding-induced unfolding steps. For instance, the C-terminal domain of inactivated peroxiredoxin must unfold when bound to the repair enzyme sulphiredoxin in order to allow access to its active site [69]. PUMA binding to BCL-xL induces local unfolding of an allosteric site, thereby signaling release and activation of the tumor suppressor p53 [19]. Local unfolding induced by ligand binding has also been observed in mechanisms that regulate the sub-cellular localization of proteins. For example, the C-terminal segment of the influenza virus polymerase unfolds when bound to human importin α5 for efficient nuclear import [35], and the nuclear export signal of CRKL becomes accessible only upon local unfolding of the polypeptide chain during self-association into a domain-swapped dimer [37]. Proteins are dynamic entities that sample multiple conformations within their folding landscape [70]. For the protein examples discussed here, intrinsic fluctuations within this landscape are enhanced through regulated unfolding to enable exposure of otherwise occluded binding sites, providing a mechanism for enabling interactions in a tightly controlled manner.
Mechanical force
In addition to chemical modifications and ligand binding, mechanical force is an important regulatory mechanism employed, in particular, in the muscular and vascular systems. Mechanical stress-induced local unfolding of titin in striated muscle is thought to play an important role in regulating its kinase activity [71], while fluid shear stress in blood vessels controls the length and function of the thrombogenic factor VWF, by exposing a buried proteolytic site [48]. Furthermore, trans-endocytosis of the ectodomain of Notch receptor exerts mechanical strain within its auto-inhibitory domain causing strain-induced local unfolding that exposes otherwise occluded sites for proteolytic cleavage, allowing propagation of Notch signals [46,47]. The existence of these diverse triggering mechanisms highlights the broad utilization of regulated unfolding in all kingdoms of life and as a response to widely divergent environmental stimuli.
Concluding remarks
The process of protein unfolding is utilized by all organisms to facilitate amino acid recycling [72] and to transport macromolecules through membranes, by threading them through tight pores [73]. Here we show that all kingdoms of life utilize mechanisms involving regulated protein unfolding to mediate signal transduction. Evolutionary conservation of the protein regions involved in regulated unfolding (e.g., the conserved occluded NES in CRKL [37], tyrosine residues within Cdk inhibitors [74], etc.) highlights the biological importance of this type of signaling mechanism. A theme that emerges from the examples discussed above is that, through various triggering mechanisms, regulated unfolding is a means to alter the dynamic properties of proteins, or segments within them, and, in so doing, alter protein function. Enhanced sampling of unfolded, or less structured, states in response to the triggering stimuli discussed above provides physical mechanisms for proteins to transmit biological signals [26].
Partial or global unfolding of proteins or protein domains facilitates interconversion between isoenergetic, alternative conformational states. Often, the structural rearrangement exposes new hydrophobic interaction surfaces and thus promotes the formation of oligomers [26]. For example, the metamorphic proteins Mad2 and Ltn [26] have evolved alternative folds [75], with one of the two folds stabilized via dimerization and at least partial unfolding required for the structural transition between these conformational states [26,28,29]. Furthermore, the form of CRKL that is exported from the nucleus is a domain-swapped dimer with an unfolded segment that contains a NES [37]. Oligomerization is a mechanism for enhancing the functional complexity associated with a particular protein sequence [26,75], and this complexity can be further enhanced via regulated unfolding to control transitions between different oligomeric states [26-29,37,55,68].
Recent advances in NMR spectroscopy methodology [76] and single-molecule techniques (e.g., atomic force microscopy [77], single-molecule fluorescence [78,79]) have allowed detailed characterization of the molecular mechanisms by which structural fluctuations mediate protein function. For example, NMR relaxation studies have shown that enzymes fluctuate between different sets of structural states at different stages of their catalytic cycles [80]. Furthermore, Kay and co-workers have characterized sparsely populated unfolding intermediates for several proteins using similar NMR methods [81,82]. In addition, single-molecule FRET techniques have identified alterations in the folding pathway of α-synuclein due to mutations associated with Parkinson's disease [83]. Advances in computational methods for studying protein folding and unfolding, as well as advances in computing power, provide opportunities to understand regulated unfolding mechanisms in atomistic detail. An illustrative example is the mechanism that controls VWF size in arterial thrombosis (reviewed in [84]), which was elucidated through a combination of molecular dynamics [85,86], single-molecule experiments [48,87], X-ray crystallography [88] and biophysical assays [86,89]. Together, these approaches will be valuable tools in future studies of the roles of regulated unfolding in protein function, from subtle order-to-disorder transitions to large-scale polypeptide unfolding. Importantly, the identification of functionally relevant unfolded states requires monitoring dynamics on multiple time-scales, which necessitates the use of complementary experimental and computational techniques.
We anticipate that the list of proteins recognized to utilize regulated unfolding will grow, as conformational states identified in biophysical assays as simple folding/unfolding intermediates are shown to be physiologically relevant. As in the example of local unfolding and acetylation of p53 in response to DNA binding [44], these intermediates may be stabilized through the types of triggering modifications discussed above. For example, we proposed that the multifunctional protein nucleophosmin (NPM1), a histone chaperone involved in ribosome biogenesis, cell cycle control and tumor suppression [90], may use regulated unfolding of its N-terminal domain from a folded, β-sheet-rich pentamer to a disordered monomer in order to switch functions and sub-cellular localization [68]. Identification and characterization of functionally relevant unfolded states for other proteins will require addressing several challenges, such as (i) identification of switching triggers through biochemical, structural and cellular investigations, (ii) elucidation of the functional outcome(s) of the regulated unfolding phenomena, (iii) determination of the lifetimes of the unfolded species in an appropriate functional setting, and (iv) elucidation of the mechanisms by which regulated unfolding signals are reset when triggering stimuli are absent.
Finally, our growing knowledge of the broad utilization of regulated unfolding mediated by diverse triggering mechanisms provides opportunities for applications in protein engineering. In fact, mechanisms that couple protein domain folding and unfolding have been previously explored as general designs for biomolecular switches, with mechanical force [91-93], Ca2+ ion binding [94,95] and proteolytic cleavage [96] utilized as input signals, and alteration of protein function as the output signal. Understanding the structural and biophysical underpinnings of regulated unfolding mechanisms will advance our knowledge of multifunctional protein regulation. It is likely that understanding the physical principles of evolutionarily selected mechanisms of regulated unfolding will lead protein design efforts in new directions, with possible applications in the biotechnology and pharmaceutical industries.
Development of Pure Silica CHA Membranes for CO2 Separation
Thin pure-silica chabazite (Si-CHA) membranes have been synthesized by using a secondary growth method on a porous silica substrate. A CO2 permeance of 2.62 × 10−6 mol m−2 s−1 Pa−1 with a CO2/CH4 permeance ratio of 62 was obtained through a Si-CHA membrane crystallized for 8 h using a parent gel of H2O/SiO2 ratio of 4.6. The CO2 permeance through the Si-CHA membrane on a porous silica substrate was twice as high as that through the membrane synthesized on a porous alumina substrate, which displayed a similar zeolite layer thickness.
Introduction
The development of efficient and sustainable CO2 capture technologies is desired for several reasons. First, carbon dioxide is a common greenhouse gas found in combustion streams; it is produced in many industrial processes, and its accumulation in the atmosphere is a threat to many biosystems on the planet [1]. Additionally, CO2 is one of the main components of raw natural gas and is responsible for pipeline corrosion during gas transportation [2]. For these reasons, separation technologies for carbon capture and storage (CCS) have been developed, such as pressure swing adsorption, amine scrubbing, and cryogenic distillation [3,4]. Among these separation techniques, membrane separation has emerged as one of the most efficient methods, receiving increasing attention from the scientific community [5]. Fard et al. [6] reported that the global demand for membranes and membrane modules reached 15.6 billion USD in 2018 and is expected to grow annually by 8%.
Membranes are usually classified into two broad classes: polymeric and inorganic membranes. Although inferior in terms of reproducibility, inorganic membranes display superior thermal, chemical and mechanical stability compared with polymeric membranes [7,8]. Therefore, inorganic membranes are preferred for high-temperature gas separation processes [9,10]. Among the materials used for the fabrication of inorganic membranes, zeolites excel as adsorbents due to their narrow and uniform pore size, high surface area, adjustable hydrophilicity and hydrophobicity, ion exchange capacity, and strong acidity [11]. In particular, chabazite (also known as CHA-type zeolite) has been researched for CO2 separation due to its eight-membered-ring pores of 0.38 nm. Since the molecular diameters of CO2 and CH4 are 0.33 nm and 0.38 nm, respectively, CHA membranes show high CO2/CH4 selectivity [12,13]. Kida et al. [14] reported that a CHA membrane synthesized on an alumina substrate without adding aluminum to the parent gel showed a CO2/CH4 selectivity of 130, with a CO2 permeance of 4.0 × 10−6 mol m−2 s−1 Pa−1. Yu et al. [15] synthesized an industrially relevant CHA membrane, with a length of 50 cm and a membrane area of 100 cm2, which displayed a CO2 permeance of 1.6 × 10−6 mol m−2 s−1 Pa−1 and a CO2/CH4 selectivity as high as 236. Hasegawa et al. [16] prepared a high-silica CHA-type zeolite membrane (Si/Al = 18) on a porous α-Al2O3 substrate with N2/SF6 and CO2/CH4 selectivities of 710 and 240, respectively. These high separation performances were explained by the low concentration or absence of aluminum in the parent gels during membrane synthesis. In the zeolite structure, cations adhere to the negatively charged aluminum sites, which increases the diffusion resistance of CO2 in the zeolite pores.
Therefore, the preparation of pure-silica CHA (Si-CHA) membranes is expected to improve CO2 separation performance. However, the highest achievable Si/Al ratio of CHA crystals in the conventional hydroxide medium is lower than 100 due to competition with other zeolites, such as ITQ-1, SSZ-23, and SSZ-24 [17]. For parent gels that do not contain aluminum, CHA membranes have been prepared in fluoride medium with rather low H2O/SiO2 ratios of 3-6 in order to obtain successful crystallization [17-19]. Only a few research groups have been able to synthesize high-silica membranes with parent gels of higher H2O/SiO2 ratios. For example, Zhou et al. [20] synthesized a high-silica CHA zeolite membrane with a CO2/CH4 selectivity of 480 by using a fluoride- and aluminum-free parent gel with a H2O/SiO2 ratio of 120.
Si-CHA membranes are usually synthesized on porous alumina substrates in order to increase the mechanical strength of the thin CHA separation layers. However, alumina substrates are dissolved in CHA parent gels due to their alkalinity [21-24]. One solution to overcome the aluminum dissolution of porous ceramic substrates is to synthesize the membrane on a porous silica substrate, which contains no aluminum in its structure. The effects of using porous silica substrates to improve gas permeance were investigated in the preparation of MFI membranes [25-27]. Sugiyama et al. [26] obtained an N2 permeance through an MFI membrane of 3.7 × 10−6 mol m−2 s−1 Pa−1 with an N2/SF6 permeance ratio of 328. The application of porous silica substrates was thus effective for MFI membranes. Therefore, in this paper, Si-CHA membranes were crystallized on porous silica substrates. The effects of changing the synthesis conditions were investigated to obtain dense Si-CHA membranes for CO2 separation. Among these synthesis conditions, the effect of adding seed crystals to the synthesis gel was also studied. Seeding is one of the most important parameters in zeolite synthesis [28]. Kong et al. [17] reported that the presence of seed crystals favors the formation of CHA zeolite more than the presence of the structure-directing agent (SDA). Finally, the effects of synthesizing the Si-CHA membrane on a silica substrate were confirmed by comparing the permeation results with those through a membrane synthesized on an alumina substrate.
Synthesis of Si-CHA Crystals
The Si-CHA crystals were synthesized by hydrothermal synthesis based on the literature [12]. N,N,N-trimethyl-1-adamantylammonium hydroxide (TMAdaOH: 25%, SACHEM) was selected as the structure-directing agent (SDA), and tetraethyl orthosilicate (TEOS: >99%, LS-2430, Shin-Etsu) as the silica precursor. Both compounds were mixed and stirred at 250 rpm overnight, followed by heating until a dry powder was obtained. Then, hydrofluoric acid (HF: 46.0-48.0%, Wako) and distilled water were added to the dried powder to obtain the parent gel. The composition of the parent gel was SiO2:TMAdaOH:HF:H2O = 1:0.8:0.8:4.6 (mol/mol). This gel was transferred to a PTFE-lined autoclave, where hydrothermal synthesis was carried out at 150 °C for 24 h. The obtained CHA crystals were recovered by vacuum filtration and washed with distilled water. After drying for 24 h, they were pulverized with an automatic mortar for 4 h. Finally, the CHA crystals were calcined in air at 600 °C for 15 h in order to remove the SDA.
Synthesis of Si-CHA Membranes
The Si-CHA zeolite membranes were synthesized on porous silica tubes provided by Sumitomo Electric Industries, Ltd. (Yokohama, Japan) (outer diameter: 10 mm, inner diameter: 6 mm, average pore size: 500 nm, length: 30 mm). The silica substrates were coated with Si-CHA seed crystals by dip-coating, using a seed crystal slurry of 8 g L−1 concentration at pH 2. The composition of the parent gel for membrane synthesis was SiO2:TMAdaOH:HF:H2O = 1:0.8:0.8:3.8-5.4 (mol/mol). Moreover, 10 µm Si-CHA crystals were added to the synthesis gel in varying quantities (0-0.25 wt%). Then, 28 g of the parent gel was smeared onto a seeded substrate, followed by hydrothermal synthesis at 150 °C for 4 h to 16 h in a Teflon-lined autoclave. After the synthesis, calcination was performed in air at 600 °C for 5 h.
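For a given batch size, the molar gel composition above can be converted into reagent masses. The sketch below is illustrative only: the TMAdaOH molar mass is an estimate for the hydroxide C13H25NO, the HF solution is taken at 47% (midpoint of the quoted range), and it assumes the TMAdaOH-solution water is driven off during the drying step, so that only the HF-solution water counts toward the H2O/SiO2 ratio.

```python
# Molar masses in g/mol. The TMAdaOH value is an estimate for C13H25NO;
# the others are standard values.
MW = {"TEOS": 208.33, "TMAdaOH": 211.35, "HF": 20.006, "H2O": 18.015}

def batch(n_SiO2=0.1, h2o_ratio=4.6, tmada_conc=0.25, hf_conc=0.47):
    """Reagent masses (g) for SiO2:TMAdaOH:HF:H2O = 1:0.8:0.8:h2o_ratio.
    Assumes TEOS supplies all SiO2, and that only water carried in by the
    HF solution counts toward the H2O/SiO2 ratio (the TMAdaOH-solution
    water is assumed removed in the drying step)."""
    m_teos = n_SiO2 * MW["TEOS"]
    m_tmada_soln = 0.8 * n_SiO2 * MW["TMAdaOH"] / tmada_conc
    m_hf_soln = 0.8 * n_SiO2 * MW["HF"] / hf_conc
    n_h2o_from_hf = m_hf_soln * (1.0 - hf_conc) / MW["H2O"]
    m_h2o_added = (h2o_ratio * n_SiO2 - n_h2o_from_hf) * MW["H2O"]
    return m_teos, m_tmada_soln, m_hf_soln, m_h2o_added

# Example basis: 0.1 mol SiO2 at the H2O/SiO2 = 4.6 composition.
teos, tmada, hf, water = batch()
print(f"TEOS {teos:.1f} g, TMAdaOH soln {tmada:.1f} g, "
      f"HF soln {hf:.2f} g, added H2O {water:.2f} g")
```

On this bookkeeping, roughly 20.8 g TEOS, 67.6 g TMAdaOH solution, 3.4 g HF solution and 6.5 g added water would be needed per 0.1 mol SiO2; the actual water accounting in the published procedure may differ.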
In order to investigate the effects of the substrates, an alumina substrate (outer diameter: 12 mm, inner diameter: 8 mm, average pore size: 500 nm, length 30 mm, Noritake Co. (Nagoya, Japan)) was employed. The synthesis procedures were the same as those for the silica substrate.
Characterization
The obtained membranes were characterized using an X-ray diffractometer (Rigaku (Tokyo, Japan)) from 5° to 40° with CuKα radiation. The morphologies of the obtained crystals and membranes were observed using a VE-8800 scanning electron microscope (SEM, KEYENCE (Osaka, Japan)). The permeation performances were measured by single-gas permeance tests using the probe gases H2, CO2, N2, CH4 and SF6 at room temperature. The membrane was inserted in a stainless steel module and sealed with silicone O-rings. The selected gas was fed on the outer side of the membrane at a feed flow of 200 mL/min and, after permeating the membrane, flowed to a handmade bubble flowmeter, where the volumetric flow rate and, consequently, the gas permeance were determined.
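The conversion from a bubble-flowmeter reading to a permeance in mol m−2 s−1 Pa−1 follows from the ideal gas law: the volumetric flow is converted to a molar flow at ambient conditions and normalized by membrane area and transmembrane pressure difference. A sketch of that conversion is below; the tube geometry matches the substrate dimensions given above, but the flow reading, temperature and pressure difference are hypothetical illustration values (the paper does not state the applied pressure difference here).

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def permeance(flow_mL_min, p_ambient_Pa, T_K, area_m2, dp_Pa):
    """Gas permeance [mol m^-2 s^-1 Pa^-1] from a bubble-flowmeter
    volumetric reading, assuming ideal-gas behavior at the meter."""
    vol_flow = flow_mL_min * 1e-6 / 60.0              # m^3 s^-1
    molar_flow = p_ambient_Pa * vol_flow / (R * T_K)  # mol s^-1
    return molar_flow / (area_m2 * dp_Pa)

# Hypothetical example: 50 mL/min of permeate at 298 K and 1 atm ambient,
# through a 10 mm o.d. x 30 mm tube, with an assumed 1 atm driving force.
area = math.pi * 0.010 * 0.030        # outer membrane area, ~9.4e-4 m^2
q = permeance(50.0, 101325.0, 298.0, area, 101325.0)
print(f"permeance = {q:.2e} mol m^-2 s^-1 Pa^-1")  # ~3.6e-7
```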
Effects of Synthesis Time
The synthesis time for CHA zeolite membrane synthesis on the silica substrates was investigated from 4 h to 16 h at 150 °C. The H2O/SiO2 ratio and the amount of added seed crystals were fixed at 4.6 and 0.01 wt%, respectively. Figure 1 shows the XRD patterns of the membranes. The highest diffraction peak, at 9.4°, corresponds to the (100) plane of CHA and is considered its characteristic peak. All other diffraction peaks were assigned to the CHA structure, with the exception of a very small peak at 8.1°. This peak was found in all membranes and corresponds to the (020) plane of the STT-type zeolite, a common impurity in CHA-type zeolite synthesis [18]. The XRD patterns show that, starting from a synthesis time of 4 h, practically pure CHA layers were obtained, with the membrane synthesized for 16 h displaying the highest ratio of CHA to STT characteristic peak intensities among the obtained membranes. Specifically, a CHA (9.4°)/STT (8.1°) intensity ratio of 25.3 was obtained, which is equivalent to about 96% Si-CHA purity. CHA (9.4°)/STT (8.1°) ratios of 18 and 24 were obtained for the 4 h and 8 h syntheses, respectively, showing that Si-CHA purity increases with synthesis time. The characteristic peak intensities at 9.4° also increased with synthesis time; therefore, the crystal sizes are likewise proportional to the synthesis time. Figure 2 shows the single-gas permeances through the obtained membranes. The H2/SF6 permeance ratio through the membrane synthesized for 4 h was 24.26. The Knudsen diffusion ratio of H2/SF6 is 8.5, showing that the membrane synthesized for 4 h was not dense enough. The H2/SF6 permeance ratio through the membrane synthesized for 8 h was 196, much higher than that for the 4 h synthesis. Furthermore, this membrane's CO2 permeance was 2.62 × 10−6 mol m−2 s−1 Pa−1, with a CO2/CH4 permeance ratio of 62.
The membrane's high selectivity can be explained by the molecular sieving mechanism arising from the pore size of the CHA structure. The high CO2/CH4 permeance ratio indicates that the membrane synthesized for 8 h was dense. The membrane synthesized for 16 h displayed a similar CO2/CH4 permeance ratio of 60; therefore, this membrane was also dense. However, its CO2 permeance was only 7.32 × 10−7 mol m−2 s−1 Pa−1, which is 71.2% lower than that through the 8 h membrane. The lower CO2 permeance can be explained by the thicker CHA layer formed after the 16 h synthesis. A similar result was obtained by Chew et al. [29], who synthesized on α-alumina a membrane of SAPO-34, a zeolite with the CHA structure, which displayed an ideal CO2/CH4 selectivity of 56.
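The denseness check used above compares the measured H2/SF6 permeance ratio against the ideal Knudsen value, which follows from the inverse-square-root dependence of Knudsen flux on molar mass. A minimal sketch of that comparison, using standard molar masses:

```python
import math

# Standard molar masses, g/mol.
M = {"H2": 2.016, "CO2": 44.01, "CH4": 16.043, "SF6": 146.06}

def knudsen_selectivity(gas_a, gas_b):
    """Ideal Knudsen selectivity of gas_a over gas_b: sqrt(M_b / M_a)."""
    return math.sqrt(M[gas_b] / M[gas_a])

kn_h2_sf6 = knudsen_selectivity("H2", "SF6")
print(f"Knudsen H2/SF6 = {kn_h2_sf6:.1f}")  # ~8.5, the value quoted above

# Knudsen would favor CH4 over CO2 (ratio below 1), so the measured
# CO2/CH4 ratio of 62 cannot come from Knudsen flow through defects.
print(f"Knudsen CO2/CH4 = {knudsen_selectivity('CO2', 'CH4'):.2f}")
```

A measured H2/SF6 ratio close to 8.5 (as for the 4 h membrane at 24.26) suggests significant non-zeolitic transport, whereas a ratio of 196 (8 h) is far above the Knudsen limit and is consistent with a dense, molecular-sieving layer.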
Effect of H2O/SiO2 Ratio of the Parent Gel
The effect of the H2O/SiO2 ratio of the parent gel was investigated, varying the ratio from 3.8 to 5.4 with the synthesis time fixed at 16 h. The parent gel displayed a paste-like state. The viscosity of the parent gel is an important parameter, as adherence of the paste to the surface of the substrate is necessary for the successful uniform synthesis of the zeolite layer. Figure 3 shows the XRD patterns for the membranes crystallized with different H2O/SiO2 ratios. The intensity at 8.1° increased with increasing H2O/SiO2 ratio of the parent gel. Miyamoto et al. [18] reported that synthesis gels with low H2O/SiO2 ratios tend to initiate the formation of zeolites with lower framework densities. The framework densities of CHA and STT are 15.4 and 17.0 Si/nm3, respectively. Thus, the STT structure was more present at the surface of the membranes synthesized from parent gels of lower silica concentration. A CHA (9.4°)/STT (8.1°) intensity ratio of 29.5, the highest calculated from these XRD measurements, was obtained when H2O/SiO2 = 4.2. The CHA (9.4°)/STT (8.1°) intensity ratio was only 13.8 when H2O/SiO2 = 3.8, due to the high viscosity of the parent gel. Therefore, the gel's optimal H2O/SiO2 ratio is 4.2, as it tends to produce a pure CHA layer while displaying good viscosity.
Figure 4 shows the single gas permeances through the membranes synthesized with different H2O/SiO2 ratios. The overall high permeance of the membrane obtained with a parent gel of H2O/SiO2 = 3.8 was due to the low membrane thickness, a consequence of the high viscosity of the parent gel. For this membrane, a low H2/SF6 permeance ratio of 5.81 was obtained, due to the non-uniform coating of the parent gel on the substrate prior to crystallization. On the other hand, the SF6 permeance increased drastically when the H2O/SiO2 ratio was raised to 5.0 and above. As a result, high H2/SF6 permeance ratios over 130 were obtained only by the membranes with H2O/SiO2 = 4.2 and 4.6. These membranes also showed the highest CHA (9.4°)/STT (8.1°) peak intensity ratios, displaying values of 29.5 and 25.3 for the membranes synthesized with parent gels of H2O/SiO2 = 4.2 and 4.6, respectively. The STT phase in the CHA membrane is not desirable for obtaining a uniform crystal layer.
Effect of Adding Seed Crystals to the Synthesis Gel
The amount of seed crystals in a parent gel of H2O/SiO2 = 4.6 was investigated in a 16 h synthesis at 150 °C. In these syntheses, the weight percentage of CHA seed crystals in the parent gel was varied from 0 to 0.25 wt%. Figure 5 shows the surface and cross-sectional SEM images of the membranes. No CHA layer was observed in the case of the parent gel without any seed crystals. On the other hand, a polycrystalline structure was found in the membranes with 0.01 wt% and 0.05 wt% seed crystals. The surface crystal size was 1.07 µm when 0.01 wt% seed crystals were added, while a smaller crystal size of 530 nm was obtained when 0.05 wt% seed crystals were added. Since the seed crystals in the parent gel function as crystallization nuclei, the number of CHA crystals should increase with the amount of CHA seed crystals added to the parent gel [30,31]. Therefore, smaller crystals are obtained with higher amounts of seed crystals, since the total amount of zeolite is limited by the total amount of coated parent gel.
Figure 6 shows the single gas permeances through the membranes synthesized with different concentrations of seed crystals in the parent gel. The H2/SF6 permeance ratio was 2.37 in the case of the membrane synthesized with no added seed crystals. Additionally, the H2 permeance was lower than that through the substrate. Judging by the low H2/SF6 permeance ratio, it can be concluded that the membrane was not dense, as the molecular size of SF6 is larger than the pore size of CHA. However, both the H2/SF6 permeance ratio and the H2 permeance increased with increasing amounts of seed crystals in the parent gel. As discussed before, the crystal size decreased with increasing amounts of seed crystals in the parent gel, resulting in a thinner and denser CHA layer. However, the H2/SF6 permeance ratio through the membrane prepared with the high seed crystal ratio of 0.25 wt% in the parent gel was only 16.88, lower than that through the membrane prepared with a seed crystal ratio of 0.05 wt%. The CHA layer must have been too thin to function as a dense membrane.
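The mass-balance argument above can be made quantitative under an assumption we add here (not stated numerically in the text): if the coated gel fixes the total zeolite mass and each seed acts as one nucleus, the crystal diameter should scale as (seed wt%)^(-1/3). A rough check against the reported sizes:

```python
# Fixed total zeolite mass: crystal volume ~ 1 / N_seeds,
# so diameter ~ (seed loading)^(-1/3).  The scaling assumption is ours.
d_at_0p01 = 1.07e-6  # m, surface crystal size at 0.01 wt% seeds (reported)
d_at_0p05 = 0.53e-6  # m, surface crystal size at 0.05 wt% seeds (reported)

predicted_ratio = (0.05 / 0.01) ** (1.0 / 3.0)  # ~1.71
observed_ratio = d_at_0p01 / d_at_0p05          # ~2.02

print(round(predicted_ratio, 2), round(observed_ratio, 2))
```

The observed shrinkage is somewhat stronger than the cube-root prediction, which would be consistent with additional nucleation at higher seed loadings; the sketch only checks the direction and rough magnitude of the trend.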
Effects of the Substrates
The synthesis procedure for the Si-CHA membrane on the silica substrate was optimized in the preceding sections. In this section, a Si-CHA zeolite membrane was synthesized on an alumina substrate to confirm, by comparison, the effects of synthesizing a Si-CHA membrane on a silica substrate. Figure 7 shows the single gas permeances through the membranes synthesized on the different types of substrates. Both membranes show similar H2/SF6 permeance ratios of 71.05 (alumina) and 83.80 (silica). However, the H2 permeance through the membrane on the silica substrate was 3.53 × 10−6 mol m−2 s−1 Pa−1, about twice as high as that through the membrane on the alumina substrate. Aluminum is dissolved from the alumina substrate during CHA synthesis, and the dissolved Al can affect the Si/Al ratio of the CHA on the alumina substrate [26]. The membrane synthesized on the alumina substrate displayed a Si/Al ratio of about 5 by energy-dispersive X-ray spectroscopy (EDS) analysis. Cations are found around the Al atoms in the pores of the zeolite structure and act as a form of diffusion resistance [32]. However, this is not the case when synthesizing a zeolite membrane on a porous silica substrate. Therefore, CHA membranes with an essentially infinite Si/Al ratio can be synthesized on silica substrates to improve permeance. Figure 8 shows the CHA zeolite layer thicknesses of the membranes synthesized on the porous silica and alumina substrates. Both membranes displayed similar thicknesses of 10.8 µm and 9.2 µm for the membranes synthesized on the silica and alumina substrates, respectively. The membrane on the alumina substrate was obtained at 170 °C in 70 h, whereas the one on the silica substrate was obtained at 150 °C in 16 h.
Therefore, apart from the fact that Si-CHA membranes synthesized on porous silica substrates are more permeable than CHA membranes synthesized on alumina substrates, silica substrates also offer the possibility of synthesizing CHA membranes under milder synthesis conditions. Finally, both membranes showed CO2/CH4 separation potential, displaying CO2/CH4 separation factors over 20.
Conclusions
Si-CHA zeolite membranes were synthesized on novel silica substrates for the first time. The synthesis procedures were optimized to obtain a CO2/CH4 selective membrane.
The separation layer thickness increased with increasing synthesis time. CO2/CH4 selectivity was slightly lower for the 16 h synthesis (59) than for the membrane synthesized for 8 h (62). However, CO2 permeance was 71.2% lower in the case of the 16 h synthesis.
The synthesis gel should have an optimal water concentration. If the water concentration is too low (H2O/SiO2 < 3.8), the gel becomes too viscous, and a zeolite layer of poor uniformity is obtained. On the other hand, too high a water concentration (H2O/SiO2 > 5) results in the synthesis of STT zeolite. In this study, a water concentration of H2O/SiO2 = 4.2 resulted in the best uniformity of the zeolite layer within the studied range of H2O/SiO2 ratios.
The concentration of Si-CHA seed crystals in the synthesis gel was investigated for the first time in this paper. The most prominent effect observed when adding seed crystals to the synthesis gel was an increase in the density of the zeolite layer. The increased number of nuclei must have resulted in the synthesis of smaller Si-CHA crystals. The H2/SF6 ideal selectivity increased when 0.01-0.05 wt% seed crystals were added to the synthesis gel.
Finally, regarding the choice of substrate, a clear effect on the overall gas permeance of the membranes was observed. By synthesizing the Si-CHA membrane on a porous silica substrate, it was possible to obtain an overall permeance twice as high as that of the same membrane synthesized on a porous alumina substrate. By avoiding alumina dissolution from the substrate, pore blockage by ions and water can be averted.
The membrane with the highest CO2/CH4 separation potential was synthesized on a porous silica substrate under the following conditions: an H2O/SiO2 ratio of 4.6 and the addition of 0.01 wt% Si-CHA seed crystals in an 8 h synthesis at 150 °C. It is important to note that the synthesis of Si-CHA zeolite membranes on porous silica substrates is still a novel technique and has much room for improvement.
Beyond the Ivory Tower: Perception of academic global surgery by surgeons in low- and middle-income countries
Interest in global surgery has surged amongst academics and practitioners in high-income countries (HICs), but it is unclear how frontline surgical practitioners in low-resource environments perceive the new field or its benefit. Our objective was to assess perceptions of academic global surgery amongst surgeons in low- and middle-income countries (LMICs). We conducted a cross-sectional e-survey among surgical trainees and consultants in 62 LMICs, as defined by the World Bank in 2020. This paper is a sub-analysis highlighting the perception of academic surgery and the association between practice setting and responses using Pearson’s Chi-square test. Analyses were completed using Stata15. The survey received 416 responses, including 173 consultants (41.6%), 221 residents (53.1%), 8 medical graduates (1.9%), and 14 fellows (3.4%). Of these, 72 responses (17.3%) were from low-income countries, 137 (32.9%) from lower-middle-income countries, and 207 (49.8%) from upper-middle-income countries. 286 respondents (68.8%) practiced in urban areas, 34 (8.2%) in rural areas, and 84 (20.2%) in both rural and urban areas. Only 185 (44.58%) were familiar with the term “global surgery.” However, 326 (79.3%) agreed that collaborating with HIC surgeons for research is beneficial to being a global surgeon, 323 (78.8%) agreed that having an HIC co-author improves likelihood of publication in a reputable journal, 337 (81.6%) agreed that securing research funding is difficult in their country, 195 (47.3%) agreed that their institutions consider research for promotion, 252 (61.0%) agreed that they can combine research and clinical practice, and 336 (82%) are willing to train HIC medical students and residents. A majority of these LMIC surgeons noted limited academic incentives to perform research in the field. 
The academic global surgery community should take note and foster equitable collaborations to ensure that this critical segment of stakeholders is engaged and has fewer barriers to participation.
Introduction
Academic global surgery (AGS) seeks to improve surgical conditions affecting vulnerable populations, often in resource-poor environments where health access may be limited [1]. It involves the integration of clinical outcomes and basic, translational, and health services research into the practice of global surgery [2]. With the establishment of the Lancet Commission on Global Surgery in the last decade, academic surgeons in high-income countries (HICs) have increased partnerships with surgeons in low- and middle-income countries (LMICs) [3]. These partnerships have progressively depended on reciprocal clinical, research, and educational collaborations.
Many academic global surgeons in both HICs and LMICs rely on the ensuing scholarly writings and responsibilities from these research engagements, in addition to participation in surgical professional organizations, for academic advancement [4]. While partnerships with frontline faculty in LMICs have supported the academic careers of HIC academic global surgeons, these perks have not always been reciprocated [5]. Such an outcome is due to both institutional pressures and discrepant perceptions between HIC volunteers and local hosts [6]. Studies characterizing unidirectional relationships in global surgery have reported these relationships to be neocolonial, with a focus on the HIC institution's agenda [6][7][8][9][10].
Understanding the perception of AGS among surgeons in LMICs is thus vital to characterizing their acknowledgment of current practices in global surgery, as well as their involvement, challenges, and willingness to broaden their responsibilities in their roles as drivers of global surgery. This involves gaining insight into their valuation of the evolving dynamics of the HIC partnership, which is expanding to include bidirectional training for surgical trainees and international funding opportunities for research and development in global surgery [2,11].
Our study aimed to explore the perception of AGS and its benefits among surgery, anesthesia, and obstetric care practitioners and trainees from multiple LMICs, while also evaluating perceived institutional support for surgeons in LMICs. The results of our study contribute to a deeper understanding of the local dynamics of academic global surgery, with the potential to drive changes that improve surgical care for vulnerable populations.
Study design, population, and setting
We analyzed the perception of AGS from a cross-sectional e-survey of surgical residents, fellows, and consultants/specialists in LMICs, as defined by the World Bank in 2020 [12]. Eligible participants were workers in health facilities (private or government) in the departments of general surgery, surgical subspecialties (neurosurgery, vascular surgery, plastic surgery, trauma surgery, cardiothoracic surgery, urology, surgical oncology, pediatric surgery, otolaryngology, ophthalmologic surgery, orthopedic surgery), obstetrics and gynecology, or anesthesia. Eligibility was irrespective of previous global surgery background and experience.
Survey design
We conducted pilot studies with 40 surgical, anesthetic, and obstetric care providers in LMICs to test the survey for language and comprehension. We pragmatically chose data variables that were objective, easily standardized, and relevant, to minimize missing data and maximize data quality. The survey incorporated adaptive questioning, with each of its five pages containing approximately 7 to 8 questionnaire items. We also utilized branching logic based on participant categories.
Data collection
We collected data using an anonymous, self-administered online survey on a secure, password-encrypted Research Electronic Data Capture (REDCap) platform from 1st February 2022 to 21st March 2022. To spearhead the recruitment process, we enlisted 73 country leads representing 62 LMICs, in some instances assigning more than one lead per country, before distributing the survey. The country leads utilized a standardized email template and letter prepared by the steering committee of the project team to contact medical schools, hospitals, and professional societies, inviting them to participate in and disseminate the survey. During the study period, we sent these invitations to 89 medical schools, surgical societies, and organizations via email. Our team collaborated with 23 organizations and medical schools across 13 countries to support survey dissemination. Participation in the survey was voluntary, and we encouraged participants to share the survey with their colleagues.
Data analysis
We conducted our analysis using Stata 15 (StataCorp) [13]. We compared data between low-income countries (LICs), lower-middle-income countries (LMCs), and upper-middle-income countries (UMICs), as defined by the World Bank in 2020. We started with descriptive statistics, employing one-way ANOVA to measure mean differences among groups. For categorical variables, we utilized the Pearson chi-square test to compare the three groups. A significance level of 0.05 was applied to all tests. To uphold data integrity, we conducted a data cleaning process on the anonymous survey. We identified and removed incomplete and duplicate entries through a systematic approach. This involved leveraging REDCap-generated participant Record IDs to eliminate redundancy, ensuring a more accurate and reliable dataset.
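The Pearson chi-square comparison described above can be sketched with a small hand-rolled implementation. The agree/disagree counts below are our approximate reconstruction from the reported group sizes (72, 137, 207) and the agreement percentages for the "securing national research funding is difficult" item; because the true cell counts may differ slightly, the resulting statistic will not exactly reproduce the paper's reported p = 0.01654:

```python
# Approximate counts back-calculated from reported percentages (ours, not raw data).
groups = {
    "LIC":  (round(0.914 * 72),  72),
    "LMC":  (round(0.861 * 137), 137),
    "UMIC": (round(0.752 * 207), 207),
}

# 3x2 contingency table: [agree, disagree] per income group.
table = [[agree, n - agree] for agree, n in groups.values()]

def pearson_chi2(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            chi2 += (obs - exp) ** 2 / exp
    return chi2

chi2 = pearson_chi2(table)
df = (len(table) - 1) * (len(table[0]) - 1)  # = 2
CRIT_05 = 5.991  # chi-square critical value for df = 2, alpha = 0.05

print(round(chi2, 2), df, chi2 > CRIT_05)
```

With these reconstructed counts, the statistic exceeds the 0.05 critical value, matching the paper's conclusion that agreement differs significantly by income group.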
Ethical approval
The study received an exemption from ethical approval from the Institutional Review Board of Mass General Brigham (2021P002088). Every participant formally consented on the first page of the REDCap survey, where the informed consent section outlined the study's purpose and stressed the voluntary nature of participation. Participants who did not provide consent were unable to proceed to the subsequent pages of the survey.
Results
A total of 54 respondents (13.0%) reported practicing in a rural setting, 291 in a non-rural setting (70.1%), and 70 in both settings (16.9%). Less than half of the respondents (160, 38.6%) had experience working or studying in a medical system outside of their country's medical system. The highest response rate was from practitioners in General Surgery (174, 41.9%), followed by those in Anesthesia/Intensive/Critical Care (42, 10.1%). Practitioners in Endocrinologic Surgery (1, 0.2%) and Dental Surgery/OMF (2, 0.5%) had the lowest responses.
In LICs, LMCs, and UMICs, 51 (72.9%), 91 (66.4%), and 110 (53.4%) of respondents, respectively, agreed that they can combine research with clinical practice, but only 195 respondents (47.3% of 416) believe that their institutions consider their research output for promotion (Table 2). Respondents from lower-income countries were more likely to agree that securing funding for research projects is difficult in their countries (91.4% in LICs vs. 86.1% in LMCs vs. 75.2% in UMICs, p = 0.01654). However, there was no significant difference among the three groups in the proportion of respondents who agreed that securing funding for research projects is difficult in their institution (78.6% in LICs vs. 79.4% in LMCs vs. 73.5% in UMICs, p = 0.3735). Most respondents (79.3%) believe that collaborating with HIC surgeons in their scholarly work is beneficial, and 82% expressed willingness to train individuals from HIC institutions in the LMIC setting.
Discussion
This study investigated the perception of academic global surgery (AGS) and its benefits among surgery, anesthesia, and obstetric care workers and trainees from multiple LMICs. Among the 416 LMIC surgeons who participated in the study, 61% reported practicing AGS by integrating research into their clinical practice. However, only 47% believed that their research productivity influenced their career advancement within their institutions. The surveyed surgeons still face challenges in securing research funding and achieving greater research impact, and they expressed a preference for collaboration with HIC researchers to address these issues. While 4.8 billion people reside in LMICs, where many lack access to safe and affordable surgery, the majority of surgical research is conducted in HICs, leading to research outcomes that are not always relevant or applicable to LMICs [14]. However, LMICs contribute only about 4% of surgical research, underscoring the need to identify barriers that hinder the advancement of academic research in these countries [15]. In our study, a majority of respondents (61%) demonstrated the ability to conduct research while fulfilling their clinical responsibilities, highlighting their potential to work towards addressing these needs. Despite facing significant challenges such as overwhelming clinical responsibilities, a shortage of research personnel, insufficient data collection resources, and limited funding, these LMIC surgeons continue to demonstrate a strong interest in pursuing surgical research [16]. However, only 45% of global surgery publications listed on PubMed have authors affiliated with LMIC institutions, indicating a gap between these researchers' commitment and their representation in scholarly publications [17]. If the global surgery community truly appreciates the contributions of LMIC surgeons in global surgery discourse, it is imperative that we tackle the shared and systemic obstacles they face to enhance their presence and participation.
Increasing the recognition and representation of LMIC academic global surgeons in surgical research may play a vital role in their academic achievements. In countries such as Mexico, Argentina, and South Africa, institutional policies and practices place a higher value on research and development compared to teaching, thereby emphasizing the significance of research for academic advancement [18]. Previous literature has indicated that in various other developing countries, particularly within Asian regions, the importance of publishing output in advancing the careers of academic researchers is relatively less prioritized [19]. Our study showed that 61% of participants from Asian regions (74 out of 121) report that their institutions prioritize their research output in promotion considerations, in contrast to 47% among the total respondents. This highlights that research output is comparatively less prioritized in promotion considerations in many other LMICs outside of Asia. Countries such as Ethiopia, Cameroon, and Sri Lanka have reported challenges with institutional administrations that exhibit centralized, bureaucratic, and hierarchical structures, which often hinder research endeavors [20]. These countries, therefore, do not widely observe the practice of promoting individuals based on meritocracy and surgical research experience [20]. While the criteria for recognition within AGS remain uncertain in high-income countries, efforts are being made in these countries to establish comprehensive guidelines for academic progression in the realm of global surgery [15]. Although LMICs may contemplate adopting comparable directives to enhance the scholarly impact of LMIC surgeons within the AGS domain, it is important to acknowledge that this endeavor will be constrained by the limited availability of funding for surgical research in these nations.
Currently, the funding of surgical research in LMICs, whether from national or international agencies and charitable organizations, significantly influences each country's publication output [19]. Our study showed that LMIC surgeons still face obstacles in obtaining national research funding, with a higher proportion of surgeons in lower-income countries experiencing difficulties compared to their counterparts in middle-income countries. Moreover, funding agencies and charitable organizations often dictate the allocation and utilization of their resources, leading to research topics and methodologies that have minimal impact on basic care provision [6]. For example, a majority of the international funds are primarily directed towards supporting research in elective and specialized procedures, rather than focusing on emergency and basic surgery [21]. Additionally, many HIC institutions also do not acknowledge global surgery as a valid academic discipline [2]. Consequently, this hampers the capacity of motivated HIC surgeons to obtain funding for collaborative projects with counterparts in LMICs working to address fundamental needs. However, the system is gradually evolving. Foreign agencies that provide grants to HIC institutions for research in LMICs are revising their management of research funds. There is a growing trend of awarding joint grants to LMIC institutions, with funds allocated to HIC researchers through external contracts [11]. This approach enables LMIC institutions to direct these grants toward relevant research topics and methodologies [11].
Exploring partnership between HIC and LMIC surgeons
With the increased partnership between HIC and LMIC surgeons over the past decade, host LMIC institutions express concerns about power imbalances, lack of cultural awareness, and unequal benefits [7]. The global surgery community has persistently advocated for fairness and equality in these partnerships, implementing measures to uphold the significance of the autonomous efforts of LMIC surgeons [22]. While there has been a notable rise in global surgery publications over the last decade, the majority of authors involved in global surgery academia between 1987 and 2017 were associated with LMICs [17]. However, in recently published studies on global surgery, the predominant affiliation of authors is solely with HICs [17]. There is, therefore, a recent expectation within the global health community for the inclusion of LMIC counterparts as first or senior authors, aiming to address the increasing disparity [23]. Yet, a study conducted by Ghani et al. in 2021 examining global health publications from 876 journals revealed that 30% of these publications from LMICs did not have any local author listed [24]. Ravi et al., in the same year, found that 45% of authors of global surgery articles were solely affiliated with LMICs. Among these LMIC authors, 46% were associated with LMCs, 28% with UMICs, and 26% with LICs [17]. Thus, it is unsurprising that over 75% of LMIC surgeons hold the belief that the value of their manuscripts can be influenced by coauthorship with HIC counterparts.
Available data explain these LMIC surgeons' perceptions of surgical journals' attitudes toward their independent work. The absence of representatives from LMICs on the editorial boards of many surgical journals results in an inadequate representation of research from LMICs or research led by LMIC authors [25]. This circumstance also likely accounts for the limited overall interest of scholarly journals in the field of global surgery [2]. However, there is a positive shift in the culture, with academic journals such as the Journal of Surgical Research and Surgery now establishing dedicated categories for global surgery publications [2]. Perhaps the editorial boards of these journals will also expand to accommodate representation from LMICs.
An emerging trend within the collaboration between HICs and LMICs involves HIC global surgery participants learning from LMIC surgeons [26,27]. This approach is endorsed by the United States National Institutes of Health and exemplified by Stawicki et al., who presented a case demonstrating the exchange of operative experience and mentorship to meet accreditation requirements while incorporating specific educational and competency-based objectives for both parties [28]. This mentorship enables trainees from HICs to acquire surgical and research skills relevant to resource-constrained settings, which they may not have exposure to in their HIC environment [6]. For instance, trainees in the United States are increasingly performing minimal-access surgery, resulting in less experience with open surgery. They would benefit from rotations in LMIC institutions that provide valuable exposure to open surgery and surgical practices in resource-limited settings [11,29]. While the perception of LMIC surgeons towards the motivations of HIC trainees for their rotations in LMIC institutions remains uncertain, our study unveils that a significant 82% of LMIC surgeons are inclined to provide training opportunities to medical students, surgical trainees, and residents from HIC institutions. As more programs consider adopting this approach, there could be a need for accrediting bodies like the Accreditation Council for Graduate Medical Education and the Liaison Committee on Medical Education to develop guidelines that help promote fairness in this process for accredited medical programs.
Limitations
Certain limitations are present in this study with regard to our chosen methodology and the responses we acquired. Firstly, the study survey was solely available in English, potentially leading to limited participation from non-English speaking countries and individuals, and possible unintended survey response biases among individuals with limited English proficiency. We relied on social media and international collaborators for our survey distribution, restricting our survey's reach to only areas where we had collaborators. Even within countries where collaboration was established, there could be diverse opinions, and the quantity of responses collected may not comprehensively reflect the perspectives of academic surgeons within those specific settings. To further delve into the underlying reasons for the viewpoints held by LMIC academic surgeons, forthcoming research should consider integrating interviews and other qualitative methods. These interviews can focus on investigating collaborations between HICs and LMICs, with particular emphasis on bidirectional training. Subsequent studies should also delve into potential remedies for additional factors that contribute to why LMIC academic surgeons attach greater importance to collaborations with HICs as opposed to independent endeavors.
Conclusions
Although a significant number of LMIC surgeons demonstrate a readiness to participate in academic global surgery, obstacles remain in effectively translating their research achievements into avenues for advancing their careers within their institutions and increasing representation of their research in global surgery discourse. This study highlighted the presence of these challenges, encompassing aspects such as the availability of national research funding for academic global surgery (AGS), which we observed to correspond with the income level of these nations. Additional concerns encompass limited institutional funding and an absence of clear pathways for academic progress through AGS research. Consequently, LMIC surgeons tend to lean towards partnering with HICs to mitigate the effects of these hindrances. We hope that the global surgery community will actively address these barriers, fostering sustainable growth in the influence of LMIC representation and contributions. This, in turn, should reduce dependence on collaboration with HICs and elevate the significance of autonomous contributions to academic global surgery by frontline practitioners in LMICs.
Table 1. Survey respondent demographic and professional characteristics.
*The World Bank income group 2020 definition of LIC was used, which encompasses all countries whose gross national income (GNI) per capita is less than US$1,085. LMC encompasses all countries whose GNI per capita is between US$1,085 and US$4,255. UMIC encompasses all countries whose GNI per capita is between US$4,256 and US$13,205. https://doi.org/10.1371/journal.pgph.0002979.t001
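The thresholds in the footnote amount to a simple classification rule. A minimal Python sketch is given below; the function name is illustrative, and the HIC bracket (GNI above US$13,205) is an assumption inferred from the text rather than stated in the footnote.

```python
def income_group(gni_per_capita_usd):
    """Classify a country by 2020 World Bank income group, using the GNI
    per capita (current US$) thresholds quoted in the table footnote.
    The HIC bracket (> US$13,205) is inferred, not stated in the footnote."""
    if gni_per_capita_usd < 1085:
        return "LIC"
    if gni_per_capita_usd <= 4255:
        return "LMC"
    if gni_per_capita_usd <= 13205:
        return "UMIC"
    return "HIC"  # assumed: everything above the UMIC ceiling

print(income_group(900))   # LIC
print(income_group(2100))  # LMC
print(income_group(8000))  # UMIC
```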
Taming the beast: control of APC/C Cdc20-dependent destruction
The anaphase promoting complex/cyclosome (APC/C) is a large multi-subunit ubiquitin ligase that triggers the metaphase-to-anaphase transition in the cell cycle by targeting the substrates Cyclin B and securin for destruction. APC/C activity towards these two key substrates requires the coactivator Cdc20. To ensure that cells enter mitosis and partition their duplicated genome with high accuracy, APC/C Cdc20 activity must be tightly controlled. Here, we discuss the mechanisms that regulate APC/C Cdc20 activity both prior to and during mitosis. We focus our discussion primarily on the chromosomal pathways that both accelerate and delay APC/C activation by targeting Cdc20 to opposing fates. The findings discussed provide an overview of how cells control the activation of this major cell cycle regulator to ensure both accurate and timely cell division.
Introduction
During cell division, genome stability depends on tight regulation of anaphase, the mitotic stage in which sister chromatids are separated. Anaphase should only occur after sister chromatids of all replicated chromosomes have correctly attached to opposite poles of the mitotic spindle (Fig. 1A). Progression into anaphase prior to achieving this fully attached state can lead to errors in chromosome segregation and aneuploidy, a hallmark of birth defects and cancer (Holland and Cleveland 2012;Santaguida and Amon 2015;Funk et al. 2016).
In eukaryotes, anaphase onset is triggered by the Anaphase-Promoting Complex/Cyclosome (APC/C), a large E3 ubiquitin ligase (Peters 2006;Pines 2011;Primorac and Musacchio 2013;Barford 2015) (Fig. 1A&B). When the APC/C is active, it promotes the polyubiquitination of its substrates, which leads to their proteasome-mediated degradation. The essential APC/C substrates for anaphase onset are securin and cyclin B. Securin is the inhibitor of separase, the cysteine protease that cleaves a subunit of the cohesin complex that holds sister chromatids together. Cyclin B is the activator of Cdk1, the essential kinase that drives mitotic entry. Therefore, degradation of securin and cyclin B simultaneously results in chromosome segregation and exit from mitosis.

Beyond the canonical D-box and KEN-box degrons, specific degrons such as the A-box in Aurora A (Littlepage and Ruderman 2002) and the O-box in Orc1 (Araki et al. 2005) have been described, although the latter was subsequently found to function as a D-box (He et al. 2013). In addition, Cdc20 and yeast Cdh1 interact with proteins containing a motif known as the Phe box or ABBA motif (for Acm1, Bub1, BubR1 and cyclin A) (Lu et al. 2014;Di Fiore et al. 2015;Diaz-Martinez et al. 2015) that can serve as a degron in some cases, such as for cyclin A (Di Fiore et al. 2015). When bound by co-activators, the APC/C forms a bi-partite receptor for D-box substrates that comprises the side of the WD40 barrel and the subunit Apc10/Doc1 (Passmore et al. 2003;Carroll et al. 2005;Kraft et al. 2005;Matyskiela and Morgan 2009;Buschhorn et al. 2011;da Fonseca et al. 2011) (Fig. 1B). In addition, the WD40 barrel serves as the receptor for KEN box and ABBA substrates (Chao et al. 2012;He et al. 2013) (Fig. 1D). In vertebrates, the APC/C functions with two E2 ubiquitin conjugating enzymes: UbcH10/Ube2C and UbcH5/Ube2D (King et al. 1995;Aristarkhov et al. 1996); in addition, Ube2S participates in ubiquitin chain extension (Garnett et al. 2009;Williamson et al. 2009;Wu et al. 2010).
On the other hand, budding yeast utilizes Ubc4 for mono-ubiquitination and Ubc1 for ubiquitin chain extension (Rodrigo-Brenni and Morgan 2007). Inhibitors of the APC/C, such as Emi1 or the mitotic checkpoint complex (discussed below), are known to regulate multiple aspects of APC/C function, including co-activator binding, substrate recognition and activity/binding of the E2 enzymes that act in conjunction with the APC/C to catalyze substrate ubiquitination.
APC/C Cdc20 inhibition during G2
A major target of the APC/C activated by Cdc20 is cyclin B, the activator of the essential mitosis-promoting kinase Cdk1 (Peters 2006;Pines 2011). Thus, APC/C Cdc20 activity must be kept in check during interphase in order to allow sufficient accumulation of cyclin B for mitotic entry.
Cdc20 itself is only synthesized in late S-phase and its levels reach a maximum in mitosis (Weinstein 1997;Prinz et al. 1998;Shirayama et al. 1998), which may partially contribute to limiting APC/C Cdc20 activity in interphase. In addition, interphase APC/C is precluded from binding to Cdc20 by an auto-inhibitory mechanism that is released upon mitotic phosphorylation (see below). However, in vitro, Cdc20 can efficiently activate interphase, non-phosphorylated APC/C (Fang et al. 1998), suggesting that other mechanisms also contribute to inhibiting Cdc20 before mitotic entry.
An initial candidate was the APC/C inhibitor Emi1 (Rca1 in Drosophila) (Dong et al. 1997;Reimann et al. 2001a;Reimann et al. 2001b;Grosskortenhaus and Sprenger 2002). However, while in vitro Emi1 can inhibit both APC/C Cdc20 and APC/C Cdh1, its physiological target appears to be Cdh1 (Di Fiore and Pines 2007;Machida and Dutta 2007). An ortholog of Emi1, called Emi2 or XErp1, inhibits APC/C Cdc20 to maintain the metaphase II arrest of mature Xenopus eggs (Schmidt et al. 2005;Tung et al. 2005). A role for Emi2 beyond meiotic arrest has been reported in developing Xenopus embryos, where it inhibits APC/C Cdc20 to promote cyclin B accumulation (Tischer et al. 2012). However, a mouse knockout of Emi2 is sterile and exhibits defects in meiotic progression but develops normally (Gopinathan et al. 2017), suggesting that it does not make a major contribution to somatic divisions in other systems. Thus, at present, Cdc20 synthesis and auto-inhibition of the APC/C that is relieved by mitotic phosphorylation are the major mechanisms implicated in keeping APC/C Cdc20 in check in order to allow sufficient accumulation of cyclin B for mitotic entry.
Cytoplasmic and chromosomal regulation of APC/C Cdc20 activity in mitosis
Upon nuclear envelope breakdown, the APC/C binds to Cdc20 and immediately becomes active towards substrates such as cyclin A and Nek2A (van Zon and Wolthuis 2010). However, cyclin B and securin are only degraded once all chromosomes have attached to spindle microtubules via their kinetochores, the protein assemblies built on their centromere regions to connect to spindle microtubules (Cheeseman 2014;Musacchio and Desai 2017). In addition to forming a dynamic microtubule interface, kinetochores function as signaling hubs where kinase and phosphatase activities are integrated to correct attachment errors and to both promote and inhibit APC/C Cdc20 activation. A tight connection exists between microtubule attachment at kinetochores and APC/C Cdc20-mediated degradation of securin and cyclin B, which ensures coordinated segregation of all chromosomes and prevents chromosome loss. Below we discuss both cytoplasmic and kinetochore-based mechanisms that control APC/C Cdc20 activity.
Cytosolic APC/C Cdc20 activation by phosphorylation
Studies in the late 90s and early 2000s showed that APC/C phosphorylation during mitosis was a prerequisite for its activation by Cdc20 (Lahav-Baratz et al. 1995;Peters et al. 1996;Patra and Dunphy 1998;Shteinberg et al. 1999;Golan et al. 2002;Kraft et al. 2003). The mitotic kinases Cdk1 and Plk1 phosphorylate multiple APC/C subunits and this phosphorylation increases binding affinity for Cdc20 (Kraft et al. 2003). However, the biochemical and structural mechanism of this phospho-dependent regulation has only recently been elucidated (Fujimitsu et al. 2016;Qiao et al. 2016;Zhang et al. 2016) (Fig. 3). In brief, the APC/C subunit Apc1 possesses an internal loop that blocks the binding of the C-box of Cdc20 to the APC/C subunit Apc8. Thus, apo-APC/C is normally in an autoinhibited state (Fig. 3A). Phosphorylation of the Apc1 loop by Cdk1 and Plk1 releases it from Apc8 and thereby promotes Cdc20 binding (Fig. 3B). In agreement with this model, mutation or deletion of the Apc1 loop permits Cdc20 binding regardless of APC/C phosphorylation status (Fujimitsu et al. 2016;Qiao et al. 2016;Zhang et al. 2016).
Interestingly, Apc1 phosphorylation is facilitated by an initial priming phosphorylation of the Apc3 subunit by Cdk1, which then recruits Cdk1-Cks complexes to further phosphorylate Apc3 and then Apc1. Moreover, the APC/C has been shown to be a weak substrate for Cdk1 in vitro and in vivo (i.e. it is only phosphorylated once high Cdk1 activity is achieved right before mitotic entry) (Lindqvist et al. 2007;Deibler and Kirschner 2010). These mechanisms may enforce a dependence on high Cdk1 activity and make APC/C Cdc20 activation kinetically lag behind initial Cdk1 activation, which may explain why APC/C Cdc20 only starts degrading substrates upon nuclear envelope breakdown. While this model has good support from biochemical experiments in Xenopus egg extracts (Fujimitsu et al. 2016;Qiao et al. 2016), it will be important to assess whether phosphorylation of the Apc1 loop represents a conserved mechanism restraining activation of APC/C Cdc20 to mitosis in a cellular context. Interestingly, the interaction between APC/C and Cdh1 does not appear to be significantly affected by APC/C phosphorylation. This may be due to the fact that Cdh1 binds to the APC/C with higher affinity, enabling it to efficiently displace the Apc1 loop from Apc8. This feature may explain the switch from APC/C Cdc20 to APC/C Cdh1 in late mitosis (see below).
Kinetochore-mediated Cdc20 activation through dephosphorylation
As discussed above, Cdk1 activity promotes the interaction between APC/C and Cdc20. Paradoxically, however, Cdk-dependent phosphorylation of the co-activators also blocks the binding of both Cdc20 and Cdh1 to the APC/C (Fig. 3B) (Kramer et al. 2000;Yudkovsky et al. 2000). Phosphorylation sites near the N-terminal C-box prevent the interaction of co-activators with the APC/C (Labit et al. 2012;Chang et al. 2015) (Fig. 1C). Phosphorylated Cdc20 is found already in G2 and, in human tissue culture cells, its phosphorylation may be important for the accumulation of cyclins and mitotic entry (Hein and Nilsson 2016). Interestingly, in C. elegans embryos, preventing Cdc20 phosphorylation significantly accelerates anaphase onset, indicating that Cdc20 phosphorylation is an important mechanism restraining APC/C Cdc20 activity in mitosis. These observations suggest that Cdc20 must be dephosphorylated in order to allow full APC/C activation (Fig. 3B).
In recent work, we showed that Cdc20 dephosphorylation, which contributes to APC/C activation, is promoted by kinetochores in C. elegans embryos (Fig. 4A).
During mitosis, Cdc20 is recruited to kinetochores through its interaction with Bub1 (Di Fiore et al. 2015;Vleugel et al. 2015;Kim et al. 2017), a conserved component implicated in both the spindle assembly checkpoint and chromosome segregation (Bolanos-Garcia and Blundell 2011;Elowe 2011). Bub1, along with its binding partner Bub3, is recruited to kinetochores through the kinetochore scaffold Knl1, which is phosphorylated on repeats in its N-terminus by the kinases Mps1 and Plk1 (London et al. 2012;Shepperd et al. 2012;Yamagishi et al. 2012;Espeut et al. 2015;von Schubert et al. 2015). At its extreme N-terminus, Knl1 possesses "SILK" and "RVxF" motifs that recruit the catalytic subunit of protein phosphatase 1 (PP1c) (Liu et al. 2010;Meadows et al. 2011;Rosenberg et al. 2011;Espeut et al. 2012). Our findings suggest that by bringing Cdc20 to the vicinity of Knl1-bound PP1, kinetochores catalyze Cdc20 activation by removing the inhibitory phosphorylation in its N-terminus (Fig. 4A). In support of this model, blocking Cdc20 or PP1 recruitment to kinetochores delays anaphase onset, an effect that can be bypassed by mutating the Cdk phosphorylation sites on Cdc20 (Kim et al. 2015;Kim et al. 2017). Notably, a role for kinetochore-localized Bub1-Bub3 in promoting APC/C Cdc20 activation has also been reported in budding yeast. As preventing Cdc20 recruitment to kinetochores does not result in a mitotic arrest, cytosolic phosphatases likely also promote Cdc20 dephosphorylation independently of kinetochores; alternatively, Cdc20 N-terminal phosphorylation may not be sufficient to fully block its binding and activation of the APC/C.
In addition to its regulation by Cdk1/2, human Cdc20 is also inhibited by phosphorylation on Ser92 by Plk1 (Craney et al. 2016;Jia et al. 2016;Lee et al. 2017) (Fig. 1C). This phosphorylation is facilitated by Bub1 and is suggested to inhibit the recruitment of the E2 ubiquitin conjugating enzyme Ube2S to the APC/C. In late mitosis, Ser92 phosphorylation is reversed by PP2A-B56 docked onto either BubR1 or the APC/C itself. While this mechanism has biochemical support (Craney et al. 2016;Jia et al. 2016), its importance in an in vivo context is unclear, given that deletion of Ube2S (Wild et al. 2016) or mutation of Ser92 in Cdc20 (Lee et al. 2017) result in relatively mild effects on mitotic exit.
Kinetochore-dependent APC/C Cdc20 inhibition by the spindle assembly checkpoint
Phosphorylation of the APC/C at mitotic entry that relieves inhibition of Cdc20 binding might explain why degradation of APC/C Cdc20 substrates such as cyclin A and Nek2A begins right at nuclear envelope breakdown, when mitotic kinases are active (van Zon and Wolthuis 2010). However, degradation of cyclin B and securin only occurs after microtubule binding to all kinetochores in order to prevent errors in chromosome segregation (Fig. 2). A large body of work has focused on how chromosomes regulate APC/C Cdc20 to prevent premature cyclin B and securin degradation, which is discussed below.
The spindle checkpoint is the mechanism that inhibits degradation of cyclin B and securin by APC/C Cdc20 in the presence of chromosomes with unattached kinetochores (Fig. 4B). When unattached, kinetochores catalyze the formation of an APC/C Cdc20 inhibitor known as the Mitotic Checkpoint Complex or MCC, composed of BubR1 (Mad3 in yeast and nematodes), Bub3, Mad2 and Cdc20 (Sudakin et al. 2001). The spindle checkpoint has been subjected to extensive mechanistic analysis (for detailed reviews, see Lara-Gonzalez et al. 2012;Jia et al. 2013;Musacchio 2015;Etemad and Kops 2016;Corbett 2017). Here, we briefly summarize current understanding of how kinetochores control formation of the MCC and an interesting intertwining with the kinetochore-based APC/C activation mechanism that acts on Cdc20.
Microtubule attachment silences spindle checkpoint signaling employing at least three different mechanisms. First, microtubules promote the dynein motor-dependent "stripping" of spindle checkpoint proteins from the kinetochore (Howell et al. 2001;Wojcik et al. 2001).
Integration of mechanisms activating and inhibiting APC/C Cdc20 at the kinetochore

As mentioned above, unattached kinetochores signal through the spindle checkpoint to inhibit APC/C Cdc20. However, we have found that kinetochores also promote APC/C Cdc20 activation by removing inhibitory phosphates on the N-terminus of Cdc20. How, then, can these opposing functions be reconciled? A key observation is that both mechanisms depend on the recruitment of Cdc20 to kinetochores (Fig. 4). Cdc20 is recruited through Bub1, which possesses a Cdc20-binding "ABBA" motif (Di Fiore et al. 2015;Vleugel et al. 2015;Kim et al. 2017). Notably, this recruitment is highly dynamic, with kinetochore-bound Cdc20 exhibiting a half-life of 0.5-2 seconds (Kallio et al. 2002). Thus, Cdc20 is rapidly fluxing through kinetochores via interaction with Bub1's ABBA motif. Mutation of the ABBA motif on Bub1 not only prevents the kinetochore-dependent anaphase-promoting function but also abolishes spindle checkpoint signaling (Di Fiore et al. 2015;Vleugel et al. 2015;Kim et al. 2017). Bub1 is critical to recruit the Mad1-Mad2 complex to unattached kinetochores (Klebig et al. 2009;London and Biggins 2014;Moyle et al. 2014;Zhang et al. 2017), although this function is independent of the ABBA motif (Vleugel et al. 2015;Kim et al. 2017). Therefore, recruitment of Cdc20 to the ABBA motif of Bub1 likely promotes formation of the MCC by bringing it in close proximity to active Mad1-Mad2 that is also bound to Bub1 (Fig. 4B). Interestingly, Mps1 phosphorylation of the C-terminus of Mad1, which is essential for Mad1-Mad2 activation (Faesen et al. 2017), may also create a binding site for Cdc20 (Ji et al. 2017).
Thus Bub1's ABBA motif may help generate a locally high concentration of Cdc20 at kinetochores that, if Mad1-Mad2 is present and phosphorylated, places Cdc20 on the Mad1 C-terminus in close proximity to the conformationally converting Mad2 and promotes formation of the Mad2-Cdc20 complex that matures into the MCC (Fig. 4B).
The above-mentioned data suggest that Cdc20 recruited to kinetochores through a single site has two opposite fates: APC/C activation through Cdc20 dephosphorylation and APC/C inhibition through its incorporation into the MCC. Given that the spindle assembly checkpoint is only active at unattached kinetochores, the choice between these two fates is dependent on the status of kinetochore-microtubule interactions (Fig. 4A&B). At unattached kinetochores, spindle checkpoint signaling would cause Cdc20 to be primarily incorporated into the MCC to prevent premature APC/C Cdc20 activation, whereas following microtubule attachment, when the spindle checkpoint is silenced, Cdc20 would be primarily dephosphorylated and activated to promote anaphase onset. The switch between these two fates could be further sharpened by PP1c recruitment, which may be promoted by or dependent on microtubule attachment (Trinkle-Mulcahy et al. 2003;Liu et al. 2010;Kim et al. 2017). It is possible that Cdc20 dephosphorylation occurs throughout mitosis, regardless of kinetochore-microtubule interactions. Regardless, the responsiveness of checkpoint signaling to microtubule attachment would still shift the balance between the opposing Cdc20 fates.
APC/C Cdc20 inactivation in late mitosis
Once securin and cyclin B are degraded, the APC/C is thought to switch coactivators from Cdc20 to Cdh1 (Fig. 2). APC/C Cdh1 activity in late mitosis is essential for the degradation of Aurora kinases (Floyd et al. 2008). In addition, APC/C Cdh1 is required in G1 for the degradation of cyclins in order to allow the loading of pre-replication complexes onto chromatin for the subsequent S-phase (reviewed in Sivaprasad et al. 2007).
The Cdc20-Cdh1 switch is likely explained by the decline in Cyclin B-Cdk1 activity, enabling phosphatases to dephosphorylate the APC/C and reduce its affinity for Cdc20. At the same time, Cdh1, which is kept inactivated by Cdk-dependent phosphorylation throughout most of the cell cycle, would become dephosphorylated and bind to and activate the APC/C (Peters 2006;Pines 2011). However, some APC/C Cdc20 activity persists in late mitosis and indeed, many late APC/C substrates, such as Plk1, survivin and Cenp-F are reliant on Cdc20 for their degradation (Floyd et al. 2008;Gurden et al. 2010). Regardless, at anaphase onset, Cdc20 itself becomes an APC/C substrate and therefore, by G1, the APC/C is mostly Cdh1-bound.
Final remarks
Since its discovery in the early 90s as the machine that drives mitotic exit (King et al. 1995;Sudakin et al. 1995), the APC/C and its co-activator Cdc20 have been extensively studied. In the last five years, advances in high-resolution cryo-EM, combined with biochemical and cell-based assays, have led to an explosive increase in our understanding of APC/C Cdc20 enzymology and the mechanisms of its regulation.
Interestingly, the APC/C is not only required in dividing cells but also plays important roles in differentiated tissues, such as the nervous system (Huang and Bonni 2016). While most of these functions depend on Cdh1, Cdc20 is expressed in some neuronal types and is required for their differentiation (Yang et al. 2009;Kowalski et al. 2014;Watanabe et al. 2014;Mao et al. 2015). These findings highlight the potential for new studies focused on understanding how post-mitotic APC/C functions are regulated. For example, a cyclin-dependent kinase called Cdk5 is present in sensory neurons, where it regulates multiple signaling events (Kawauchi 2014); therefore, Cdk5 may substitute for Cdk1 in neurons to regulate the interaction between the APC/C and its co-activators in a manner similar to what has been observed during cell cycle progression (Maestre et al. 2008;Veas-Perez de Tudela et al. 2015). Given that Cdk5 has garnered a significant amount of interest for its role in Alzheimer's disease progression (Fuchsberger et al. 2017), its mechanistic connection with the APC/C in the nervous system is likely to be the focus of future work.
Finally, understanding of APC/C Cdc20 mechanism and regulation has opened the possibility for new therapies targeting the APC/C in cancer (Wang et al. 2015;Zhou et al. 2016). Current treatments employ spindle poisons to activate the spindle assembly checkpoint and induce apoptosis but are limited by cells slipping out of mitosis due to residual APC/C activity (Brito and Rieder 2006;Gascoigne and Taylor 2008). A number of studies have shown that directly inhibiting mitotic exit is a more efficient approach to killing cancer cells (Huang et al. 2009;Manchado et al. 2010). Two small-molecule APC/C inhibitors have been developed, proTAME and Apcin (Zeng et al. 2010;Sackton et al. 2014), which block the interaction between co-activators and the APC/C. When added to cells in combination, proTAME and Apcin efficiently block mitotic exit (Sackton et al. 2014). Once optimized to act in a clinical context, these drugs have the potential to synergize with commonly employed microtubule poisons that activate the spindle checkpoint (Giovinazzi et al. 2013;de Lange et al. 2015) and contribute to improving this widely used chemotherapeutic strategy.

Fig. 1. (A) Cartoon illustrating the metaphase-to-anaphase transition, which is promoted by APC/C Cdc20 activity. Microtubules are in yellow, chromosomes in blue and kinetochores in grey. (B) Structure of APC/C Cdc20 bound to a D-box-containing substrate, Hsl1. The substrate binds to the interface between Cdc20 and the APC/C subunit Apc10 (adapted from Corbett 2017). (C) Schematic illustrating the domains in human Cdc20. The C-box, KILR and IR tail motifs contribute to APC/C binding, whereas the WD40 domain is involved in substrate recognition. Inhibitory Cdk1 phosphorylation sites are shown in red, whereas S92, which is phosphorylated by Plk1, is in orange. Note that the KILR motif is also the Mad2 interacting motif. (D) Structure of the WD40 domain of S. cerevisiae Cdh1 bound to an inhibitor, Acm1 (He et al. 2013). The structure shows the interaction sites for the three APC/C degrons: D-box, KEN box and ABBA motif (adapted from Corbett 2017).

Fig. 4. During mitosis, Cdc20 is recruited to kinetochores by Bub1/Bub3, which is bound to phospho-Knl1. (A) When kinetochores are attached by microtubules, they promote Cdc20 dephosphorylation by kinetochore-localized PP1c, which allows its activation. Cdc20 may also be dephosphorylated in the cytosol, likely through PP2A-B56. (B) When microtubules are unattached, signaling from the spindle assembly checkpoint catalyzes the incorporation of Cdc20 into the mitotic checkpoint complex (MCC), which binds and inhibits APC/C Cdc20 activity. See text for more details.
Evolution of Trypanosoma cruzi: clarifying hybridisations, mitochondrial introgressions and phylogenetic relationships between major lineages
Several different models of Trypanosoma cruzi evolution have been proposed. These models suggest that scarce events of genetic exchange occurred during the evolutionary history of this parasite. In addition, the debate has focused on the existence of one or two hybridisation events during the evolution of T. cruzi lineages. Here, we reviewed the literature and analysed available sequence data to clarify the phylogenetic relationships among these different lineages. We observed that TcI, TcIII and TcIV form a monophyletic group and that TcIII and TcIV are not, as previously suggested, TcI-TcII hybrids. Particularly, TcI and TcIII are sister groups that diverged around the same time that a widely distributed TcIV split into two clades (TcIVS and TcIVN). In addition, we collected evidence that TcIII received TcIVS kDNA by introgression on several occasions. Different demographic hypotheses (surfing and asymmetrical introgression) may explain the origin and expansion of the TcIII group. Considering these hypotheses, genetic exchange should have been relatively frequent between TcIII and TcIVS in the geographic area in which their distributions overlapped. In addition, our results support the hypothesis that two independent hybridisation events gave rise to TcV and TcVI. Consequently, TcIVS kDNA was first transferred to TcIII and later to TcV and TcVI in TcII/TcIII hybridisation events.
Trypanosoma cruzi, the etiological agent of Chagas disease, affects several million people around the world. The major phylogenetic subdivisions of T. cruzi were widely analysed by Miles et al. (1977, 1978), who described different zymodemes by multilocus enzyme electrophoresis (MLEE). A few years ago, six different discrete typing units (DTUs) were clearly defined for T. cruzi based on different genetic markers (Zingales et al. 2012). These DTUs were termed TcI to TcVI (Zingales et al. 2009). Recently, an additional DTU that is mainly associated with bats was proposed and named TcBat (Marcili et al. 2009a). The relationships between these DTUs were explained by several models, but these models are contradictory on several points (Westenberger et al. 2005, de Freitas et al. 2006, Flores-Lopez & Machado 2011). Consequently, the origins of different DTUs and their inter-relationships remain controversial. In this paper, we analysed our own DNA sequence data of T. cruzi and data published by others to clarify the relationships between different DTUs. In addition, we discuss different evolutionary scenarios for T. cruzi and propose a model for the origin of each DTU.
MATERIALS AND METHODS
Analysed sequences - In a previous paper about multilocus sequence typing (MLST) for T. cruzi, we analysed 13 housekeeping gene fragments by simple neighbour-joining (NJ) analysis with the goal of obtaining a standardised MLST method for DTU assignment (Diosque et al. 2014). Here, these sequences were reanalysed including Trypanosoma cruzi marinkellei as an outgroup. Sequence data of the selected targets for T. cruzi marinkellei were obtained from TriTrypDB (available from: tritrypdb.org) under the following accessions: TcMARK_CONTIG_2686, TcMARK_CONTIG_670, TcMARK_CONTIG_1404, Tc_MARK_2068, Tc_MARK_3409, Tc_MARK_5695, Tc_MARK_9874, Tc_MARK_515, Tc_MARK_4984, Tc_MARK_5926, Tc_MARK_8923, TcMARK_CONTIG_1818 and Tc_MARK_2666. In addition, sequences analysed by Westenberger et al. (2005) corresponding to the loci 1F8 calcium-binding protein, histone H1, histone H3 and heat-shock protein 60 (HSP60) were downloaded from GenBank, where the accessions for these sequences are available.

Data analysis - Alignments were produced with MEGA 6.0 software (Tamura et al. 2013) using default parameters. Regions with gaps in the alignment were excluded from the analyses. Concatenation of the CytB and COII-Nd1 fragments was made using MLSTest 1.0 (Tomasini et al. 2013). A five-nucleotide gap present in the sequences of three strains in the COII-Nd1 alignment was coded as "G" for present and "A" for absent to be considered in the phylogenetic analysis. Sequences obtained in our previous paper (Diosque et al. 2014) were concatenated before performing most of the phylogenetic analyses. To evaluate congruence among different loci and suitability for concatenation, we performed a BioNJ-ILD test (Zelwer & Daubin 2004) with 1,000 random permutations. NJ analyses were performed with MLSTest software using uncorrected p-distances and considering heterozygous sites as average states. One thousand bootstrap replications were used to evaluate branch support. Maximum likelihood (ML) analyses were conducted with MEGA 6.0 software.
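The treatment of heterozygous sites as "average states" in the uncorrected p-distance can be sketched as follows. This is an illustrative Python sketch of our reading of the method, not MLSTest code; the function names and the IUPAC ambiguity table are assumptions:

```python
# Sketch: uncorrected p-distance treating heterozygous (IUPAC) sites as
# "average states".  Illustrative only, not MLSTest internals.

IUPAC = {  # heterozygous codes mapped to their two underlying bases
    "R": {"A", "G"}, "Y": {"C", "T"}, "S": {"G", "C"},
    "W": {"A", "T"}, "K": {"G", "T"}, "M": {"A", "C"},
}

def states(base):
    """Return the set of possible bases at a site."""
    return IUPAC.get(base, {base})

def site_distance(a, b):
    """Average mismatch over all state combinations at one site."""
    sa, sb = states(a), states(b)
    mismatches = sum(1 for x in sa for y in sb if x != y)
    return mismatches / (len(sa) * len(sb))

def p_distance(seq1, seq2):
    """Uncorrected p-distance: mean per-site distance, gaps excluded."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    return sum(site_distance(a, b) for a, b in pairs) / len(pairs)

print(p_distance("ACGT", "ACGA"))  # -> 0.25 (one mismatch over 4 sites)
print(p_distance("ACGR", "ACGA"))  # -> 0.125 (heterozygous R vs A: 0.5/4)
```

Under this scheme, a homozygote compared against a heterozygote that shares one of its bases contributes half a mismatch, which is what averaging over states implies.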
The best model for each analysis was selected using the corrected Akaike information criterion implemented in jMODELTEST software (Posada 2008). Bayesian analyses were run in MrBayes v.3.1 (Ronquist & Huelsenbeck 2003). Metropolis-coupled Markov chains (MCMCs) with Monte Carlo simulation were run until likelihoods remained stationary and the two independent runs converged after one million generations. Sampling every 100th generation from the two independent runs in MrBayes and discarding the first 25% of the trees as burn-in, 50% majority-rule consensus phylograms were constructed. Molecular clock and species tree inference were implemented in the BEAST package v.2.1. First, strict, relaxed lognormal and relaxed exponential clock models were analysed for each locus considering a model of coalescent constant population. The Bayesian inference was made with MCMC chains of 4 x 10^7 states (or 1 x 10^8 states if convergence was not reached), sampling trees every 5,000 states. Relaxed exponential and strict clocks were compared using the Bayes factor (BF), which was calculated using Tracer software with 1,000 random bootstrap replications to estimate the marginal likelihood. Second, a Bayesian co-estimation of the species tree and molecular clock parameters was made for the loci analysed by Diosque et al. (2014) using a STAR-BEAST analysis. Third, a calibration point was considered in the analysis for those loci whose homologous sequences were present in the Trypanosoma brucei strain TREU427 genome and that were informative about DTU relationships. To calibrate the clock-rate estimations, a normally distributed prior on the divergence time between T. brucei and T. cruzi sequences, with a mean of 100 million years ago and a standard deviation of 2.0, was imposed as previously suggested. Clock models were unlinked and the model implemented for each locus was selected according to the BF analysis for each gene fragment.
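The BF comparison of clock models can be sketched from posterior samples as follows. This is an illustrative Python sketch assuming the harmonic-mean estimator of the marginal likelihood that Tracer reports; the function names and the toy log-likelihood values are assumptions:

```python
import math
import random

def log_sum_exp(xs):
    """Numerically stable log of a sum of exponentials."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def harmonic_mean_log_ml(lls):
    """Harmonic-mean estimator of the log marginal likelihood
    from posterior log-likelihood samples."""
    return math.log(len(lls)) - log_sum_exp([-x for x in lls])

def log_bayes_factor(lls_a, lls_b, n_boot=1000, seed=0):
    """Log BF of model A over model B, with bootstrap replicates of
    the estimate (Tracer-style uncertainty assessment)."""
    rng = random.Random(seed)
    point = harmonic_mean_log_ml(lls_a) - harmonic_mean_log_ml(lls_b)
    boots = []
    for _ in range(n_boot):
        boots.append(
            harmonic_mean_log_ml([rng.choice(lls_a) for _ in lls_a])
            - harmonic_mean_log_ml([rng.choice(lls_b) for _ in lls_b]))
    return point, boots

# toy samples: model A (e.g. relaxed clock) fits better than model B
point, boots = log_bayes_factor([-100.0] * 20, [-105.0] * 20, n_boot=100)
print(point)  # -> 5.0 (a log BF > 0 favours model A)
```

A positive log BF favours the first model, which is the criterion used above to choose relaxed over strict clocks.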
The population function in the multispecies coalescent was set to linear with a constant root. An MCMC chain of 250 million iterations was run, with parameters and trees sampled every 5,000 iterations and the first 10% of states removed as burn-in. Log files were checked for sufficient effective sample sizes using TRACER v.1.5.
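The effective-sample-size check on the log files can be sketched as follows (illustrative Python, assuming a Tracer-style autocorrelation cut-off; the function name and the toy trace are assumptions):

```python
def effective_sample_size(trace):
    """ESS = n / (1 + 2 * sum of autocorrelations), summing lags until
    the first non-positive autocorrelation.  Illustrative implementation
    of a Tracer-style check, not Tracer's exact algorithm."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((x - mean) ** 2 for x in trace) / n
    if var == 0:
        return float(n)
    act = 0.0
    for lag in range(1, n):
        acov = sum((trace[i] - mean) * (trace[i + lag] - mean)
                   for i in range(n - lag)) / n
        rho = acov / var
        if rho <= 0:
            break
        act += rho
    return n / (1 + 2 * act)

# discard the first 10% of states as burn-in before checking the ESS
chain = [float(i % 7) for i in range(1000)]   # illustrative trace
kept = chain[len(chain) // 10:]
print(round(effective_sample_size(kept)))
```

A strongly autocorrelated chain yields an ESS far below the number of sampled states, which is the signal used to decide whether a run was long enough.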
Because the inclusion of genotypic data of hybrid DTUs (TcV and TcVI) can lead to bias in the phylogenetic analyses, we first obtained patterns for non-hybrid lineages (TcI to TcIV) based on the MLST allelic profiles of sequences analysed by Diosque et al. (2014). Next, six hypothetical TcII/TcIII hybrid strains with heterozygous profiles were included in the analysis. A distance matrix was generated based on the number of different alleles between strains. In addition, the distance between heterozygous and homozygous genotypes at each locus was considered 1 if no alleles were shared and 0.5 if one allele was shared. When two heterozygous genotypes were identical, the distance was considered 0. NJ analyses using the PHYLIP package (Felsenstein 2005) were performed based on the distance matrices.
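The allele-sharing distance described above (0 for identical genotypes, 0.5 when one allele is shared, 1 when none are shared) can be sketched as follows. The Python below is illustrative; the profile representation and allele numbers are assumptions, not the actual MLST data:

```python
# Sketch of the allele-sharing distance used for the NJ analysis.
# Profiles are lists of allele sets, one set per locus (illustrative).

def locus_distance(g1, g2):
    """Per-locus distance: 0 identical, 0.5 one shared allele, 1 none."""
    if g1 == g2:
        return 0.0
    if g1 & g2:          # at least one shared allele
        return 0.5
    return 1.0

def profile_distance(p1, p2):
    """Total distance between two multilocus allelic profiles."""
    return sum(locus_distance(a, b) for a, b in zip(p1, p2))

tcii   = [{1}, {1}, {1}]           # homozygous TcII alleles at 3 loci
tciii  = [{2}, {2}, {2}]           # homozygous TcIII alleles
hybrid = [{1, 2}, {1, 2}, {1, 2}]  # heterozygous TcII/TcIII hybrid

print(profile_distance(tcii, tciii))   # -> 3.0 (no alleles shared)
print(profile_distance(tcii, hybrid))  # -> 1.5 (one allele shared per locus)
```

Such a matrix can then be fed to any distance-based tree builder, as done here with the PHYLIP NJ implementation.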
The NJ method was also implemented to evaluate the phylogeny of publicly available CytB sequences. In addition, the same method was used to analyse sequences published by Westenberger et al. (2005) together with an outgroup sequence. Branch support was evaluated using 1,000 bootstrap replications.
The allele sequences for the TcV and TcVI strains published by Diosque et al. (2014) were inferred for each of the 13 loci with the PHASE algorithm implemented in DNAsp (Librado & Rozas 2009). We ran 10,000 iterations, sampling every 100 states and discarding the first 1,000 as burn-in.
RESULTS AND DISCUSSION
TcI, TcIII and TcIV form a monophyletic group - Based on a combined analysis of our data and previously published information, we propose that TcI, TcIII and TcIV form a monophyletic group. In addition, we review and discuss various models describing the relationships between the TcI, TcII, TcIII and TcIV DTUs.
First, we analysed sequence data from 18 T. cruzi reference strains (Supplementary Table I, first 18 strains) and the T. cruzi marinkellei outgroup to address the phylogenetic relationships between the TcI, TcII, TcIII and TcIV DTUs. We did not include the TcV and TcVI strains because there is sufficient evidence identifying them as TcII/TcIII hybrids (Brisse et al. 1998, Sturm et al. 2003, Westenberger et al. 2005). Thirteen loci [described in Diosque et al. (2014)] (see also the "Analysed sequences" section in Materials and Methods) were analysed by different phylogenetic methods. We did not detect major incongruences between loci, which allowed concatenation (BioNJ-ILD p = 0.855). The resulting phylogeny is shown in Fig. 1 (left tree). Two major clades were observed. The first clade clustered the TcII strains, whereas the second clustered the TcI, TcIII and TcIV DTUs. Both major branches of the tree have maximum statistical support in the NJ, ML and Bayesian inference analyses (branch values on the left tree in Fig. 1). The analysis for each locus showed that the TcI-TcIII-TcIV clade was recovered in nine out of the 13 gene trees according to the ML or NJ methods (data not shown). These results provide strong evidence that TcI, TcIII and TcIV cluster in a monophyletic group.
We obtained certain topological incongruences among the trees of each locus (data not shown) and thus we performed a Bayesian inference of the species tree based on multilocus sequence data using a STAR-BEAST analysis. This method considers coalescent models and is an alternative approach that allows inference of the species tree while avoiding possible bias due to the concatenation of sequences. The obtained species tree corroborated the observed clustering of TcI, TcIII and TcIV with high Bayesian probability (BP) (Fig. 1, right tree).

Fig. 1: left, branch values represent statistical support from 1,000 bootstrap repetitions in a neighbour-joining analysis (1st value), 1,000 bootstrap repetitions for ML (2nd value) and the posterior probability in Bayesian inference using MrBayes software (3rd value). Right, most probable topologies visualised in Densitree 2.1 to illustrate the statistical uncertainty of the species tree estimation. Greater topological agreement is visualised by a higher density of trees, whereas uncertainty in the height and distribution of nodes is represented by increased transparency. The most common topology is shown in blue and the second most common topology in red. Solid blue lines represent the consensus tree and node values indicate posterior probability.

Machado and Ayala (2001) were the first to propose the TcI-TcIII-TcIV clade. They analysed sequence data of two nuclear genes (dhfrs and TR) and one maxicircle region (including the genes COII and Nd1) and observed clustering of the TcI, TcIII and TcIV DTUs for all three analysed fragments. Although the use of just three genomic regions may not be representative of the whole genome, this was the first evidence of the TcI-TcIII-TcIV clade. Subsequently, Flores-Lopez and Machado (2011) analysed the sequences of 31 nuclear loci and one maxicircle locus in seven reference strains.
They analysed the tree topology for each locus and observed the TcI-TcIII-TcIV cluster at 24 out of the 32 loci. The analysis of the concatenated sequences clearly showed the same cluster with high statistical support. Although seven strains may be considered a low number, these results strongly agree with our observations.
Unsupported models of inter-DTU relationships - Additional models have been proposed to explain the relationships between the TcI to TcIV DTUs. These models do not agree with the TcI-TcIII-TcIV clustering. Brisse et al. (2000) were the first to propose a division of T. cruzi into six lineages. They also analysed the phylogenetic relationships among these different DTUs with MLEE and random amplified polymorphic DNA (RAPD). Specifically, they analysed 22 loci by MLEE and 20 different primers by RAPD. Two major lineages were observed for both markers with high bootstrap support. The first lineage corresponded to TcI and the second corresponded to a cluster of TcII to TcVI (previously called TcIIa to TcIIe). However, a major concern about the phylogenetic analysis made by Brisse et al. (2000) is the inclusion of genotypic data from TcV and TcVI. Considering the hybrid status of TcV and TcVI, there may have been an artefact in the tree inference because genotypic data of hybrids were included in the analysis. As we do not have MLEE data available for T. cruzi, we conducted a simple analysis to test the hypothesis of a biased phylogenetic inference. Based on the sequences of the 13 gene fragments analysed by Diosque et al. (2014), we generated MLST allelic profiles for strains from TcI to TcIV (Supplementary Table I, strains 1-18). The NJ algorithm revealed two major clades: TcI-TcIII-TcIV and TcII (Fig. 2, left). Additionally, we included six hypothetical hybrid strains in the analysis. These "hybrid" strains have allelic profiles compatible with a hybridisation event between TcII and TcIII (i.e., TcII = allele1, TcIII = allele2 and hybrid strains = allele1/allele2). The NJ analysis again indicated two major clusters, but TcIII did not cluster with TcI. Instead, the TcIII strains clustered with TcII and the hybrids (Fig. 2, right). This simple example clearly shows that genotypic data of hybrid DTUs should be considered cautiously to avoid the inference of a biased phylogeny.
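The distortion introduced by hybrid genotypes can be seen with a toy calculation under the allele-sharing distance described in Materials and Methods. All allelic profiles below are entirely hypothetical and chosen only to illustrate the effect:

```python
# Illustrative only: hypothetical allelic profiles showing how hybrid
# genotypes distort allele-sharing distances.  Allele numbers are made up.

tci   = [{1}, {1}, {3}, {3}]   # shares alleles with TcIII at loci 3-4
tciii = [{2}, {2}, {3}, {3}]
tcii  = [{4}, {4}, {5}, {5}]
hyb   = [{2, 4}, {2, 4}, {3, 5}, {3, 5}]  # TcII/TcIII heterozygote

def d(p, q):
    """Allele-sharing distance: 0 identical, 0.5 one shared, 1 none."""
    out = 0.0
    for a, b in zip(p, q):
        out += 0.0 if a == b else (0.5 if a & b else 1.0)
    return out

print(d(tci, tciii))   # -> 2.0: TcIII's true sister relationship
print(d(tciii, hyb))   # -> 2.0: the hybrid appears equally close to TcIII
print(d(tcii, hyb))    # -> 2.0: ...and to TcII, pulling the parents together
print(d(tci, tcii))    # -> 4.0
```

Because each hybrid sits halfway between its parents, it is as close to TcIII as TcIII's true sister is, so a distance-based tree can pull TcIII towards the TcII-plus-hybrids side.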
Westenberger et al. (2005) proposed an alternative evolutionary framework for T. cruzi. This alternative model proposes that TcI and TcII are ancestral lineages and that a first hybridisation event occurred between these DTUs. In addition, they proposed that the hybrid descendant underwent a genomic loss of heterozygosity and/or recombination between parental alleles. This genomic process would have formed the TcIII and TcIV DTUs. Westenberger et al. (2005) presented evidence supporting this model. In four out of nine gene sequences, they observed that the genetic distance from TcIII and/or TcIV to TcII was shorter than that to TcI. In fact, five loci showed the inverse pattern. In addition, they proposed that TcIII and TcIV have mosaic patterns combining different fragments of TcI and TcII sequences. In their analyses, Westenberger et al. (2005) did not include an outgroup. In the absence of an outgroup it is not possible to determine whether a character is derived or ancestral. Unfortunately, relationships among DTUs cannot be clearly addressed under this scenario of uncertain ancestry. To clarify the relationships between the TcI to TcIV DTUs, we reanalysed several loci examined by Westenberger et al. (2005), particularly those that were proposed to provide evidence of clustering of TcII with TcIII and/or TcIV. In addition, we included an outgroup sequence corresponding to T. cruzi marinkellei for each locus. Finally, we also evaluated the presence or absence of mosaic patterns. Apparent mosaic patterns were observed before including the outgroup sequence (Fig. 3, sites denoted with an x-mark). However, we did not observe any mosaic at any locus when the outgroup was included in the alignment (Fig. 3). Seven informative sites (denoted with a plus sign in Fig. 3) favoured the clustering of TcIII and/or TcIV with TcI. In contrast, just one polymorphism clustered TcII with TcIV and one polymorphism clustered TcII-TcIII-TcIV.
These two last sites were located at different loci; thus, homoplasy is the most parsimonious explanation for their existence. We also analysed phylogenetic trees for these four loci.

Fig. 2: left, neighbour-joining tree based on a simulated multilocus enzyme electrophoresis dataset. This dataset was based on multilocus sequence typing allelic profiles of 13 loci corresponding to discrete typing units TcI (green boxes), TcII (yellow boxes), TcIII (blue boxes) and TcIV (red boxes). Right, a biased topology due to the inclusion of hypothetical hybrid profiles resulting from TcII and TcIII hybridisation. Note that TcIII does not cluster with TcI and TcIV as in the left tree.
H1 and 1f8 genes showed clear clustering of TcI-TcIII-TcIV with strong support (Supplementary Figure). In contrast, H3 and HSP60 showed clusters that were incompatible with TcI-TcIII-TcIV. However, these clusters showed low bootstrap support (< 70%), suggesting a low phylogenetic signal to address inter-DTU relationships (Supplementary Figure).
Consequently, this reanalysis of the Westenberger et al. (2005) data including an outgroup revealed that the analysed TcIII and TcIV sequences have no mosaic patterns. In addition, this reanalysis supports the clustering of TcIII and TcIV with TcI. These results highlight the usefulness of including one or more outgroup strains in phylogenetic analyses of T. cruzi strains.
de Freitas et al. (2006) proposed the three-ancestor model for the evolution of T. cruzi. They analysed several strains of TcI, TcII, TcIII, TcV and TcVI. However, few strains of TcIV were analysed and this DTU was not considered in the model. Sequences from three maxicircle loci (COII, Nd1 and CytB) and five microsatellite loci were analysed. They proposed the existence of at least three ancestral lineages (TcI, TcII and TcIII). However, no outgroup was included in this study and thus they could not define the relationships among these three ancestors. Machado and Ayala (2001) showed that for the COII-Nd1 locus [which was also analysed by de Freitas et al. (2006)], the TcI-TcIII-TcIV cluster is clearly observed. Consequently, we also analysed 97 CytB sequences available in GenBank, including several TcIV strains and outgroup sequences corresponding to T. cruzi marinkellei and Trypanosoma vespertilionis. We observed that the CytB phylogeny also strongly supported the clustering of the TcI, TcIII and TcIV DTUs (bootstrap = 98.9) (Fig. 4). Consequently, the mitochondrial loci analysed by de Freitas et al. (2006) also support the TcI-TcIII-TcIV cluster.
TcI and TcIII are sister clades - We collected evidence from nuclear genome data showing that TcI and TcIII share a common ancestor. First, there was strong support for this cluster (NJ bootstrap = 94, ML bootstrap = 99 and BP = 1) in the 13-locus phylogeny shown in Fig. 1 (left). Topologies showing the TcI-TcIII cluster were the most frequently resolved type among the 13 loci analysed (data not shown). Six and four loci showed TcI-TcIII clusters in individual gene trees inferred by NJ and ML, respectively, whereas four (NJ) and three (ML) individual gene trees were incompatible with this cluster (data not shown). The remaining topologies (3 for NJ and 6 for ML) were unresolved regarding the TcI-TcIII-TcIV relationships. The low number of loci indicating TcI-TcIII clustering suggests that both lineages diverged rapidly after the TcI-TcIII ancestor separated from that of TcIV. The species tree obtained by Bayesian inference also strongly supported this clustering (Fig. 1, right). However, the TcIII-TcIV cluster observed for a few loci may suggest incomplete lineage sorting; additional data are required to confirm this hypothesis. Homoplasy and lateral gene transfer are alternative hypotheses.

Fig. 3: apparent mosaic patterns observed by Westenberger et al. (2005). Polymorphic sites for the 1F8 calcium-binding protein, histone H1, histone H3 and heat-shock protein 60 (HSP60) loci analysed by Westenberger et al. (2005) plus the outgroup sequences. Coloured columns show polymorphic sites with information on the clustering of different strains (only parsimony-informative sites are shown). Green bases represent a derived character, whereas yellow bases indicate an ancestral state. Note that, excluding the outgroup, the sites marked with an X wrongly appear to cluster TcIII and/or TcIV with TcII. In contrast, positions denoted with + show clustering of TcIII and/or TcIV with TcI according to the outgroup. Positions denoted with an "o" cluster TcIV and/or TcIII with TcII. Consequently, excluding the outgroup gives an apparent mosaic pattern which is not real.
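The role of the outgroup in polarising informative sites can be illustrated with a minimal sketch. The DTU labels and bases below are hypothetical, not the actual alignment:

```python
# Sketch: polarising parsimony-informative sites with an outgroup.
# Illustrative only; the column below is invented.

def classify_site(column, outgroup_base):
    """Return the taxa carrying a derived state at one aligned column;
    states matching the outgroup are taken as ancestral."""
    return {taxon for taxon, base in column.items() if base != outgroup_base}

# Without an outgroup, TcII ('T') versus the rest ('C') is just a split;
# with outgroup base 'T', the shared 'C' becomes a derived state uniting
# TcI-TcIII-TcIV (a synapomorphy).
column = {"TcI": "C", "TcII": "T", "TcIII": "C", "TcIV": "C"}
print(sorted(classify_site(column, "T")))  # -> ['TcI', 'TcIII', 'TcIV']
```

Had the outgroup carried 'C' instead, the same column would mark only TcII as derived and carry no grouping information, which is exactly why the apparent mosaics vanish once the outgroup is added.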
Additional evidence of the TcI-TcIII clustering was provided by Machado and Ayala (2001). They observed that the dhfrs and TR loci clustered both DTUs together. In addition, the same pattern was observed for the GPI locus. Flores-Lopez and Machado (2011) observed the TcI-TcIII cluster in the phylogeny of 32 concatenated loci (bootstrap = 72, Bayesian support = 100). In addition, 11 out of the 24 topologies that supported TcI-TcIII-TcIV clustering also supported the clustering of TcI-TcIII. Just six topologies were incompatible with the TcI-TcIII clustering (3 showed TcI-TcIV clustering and 3 showed TcIII-TcIV clustering). Finally, the H1 and H3 loci shown in Supplementary Figure also support the clustering of TcI and TcIII.

TcIV is divided into two main sub-clusters: TcIVS and TcIVN - Fig. 1 shows considerable distance between the CanIII strain (from Brazil, TcIVS) and the TcIV strains from North America (TcIVN). Eleven out of the 13 analysed loci clustered the TcIVN strains separately from the TcIVS strain. In addition, the cytB analysis (Fig. 4) showed that TcIVN was clearly separated from the TcIVS sequences, which was also observed by others (Brisse et al. 2003, Marcili et al. 2009a, b, Ramirez et al. 2011). Evidence for this split was previously described with different markers: MLEE and RAPD, the rDNA promoter region (Brisse et al. 2003), SSU rDNA (Marcili et al. 2009b), Dhfrs sequences (Roellig et al. 2013), GPI sequences and multilocus analyses (Messenger et al. 2012).
Multiple introgression events from TcIVS to TcIII explain the TcIII kDNA origin - As we proposed, TcI and TcIII form a monophyletic group according to the nuclear phylogeny. However, mitochondrial data showed clustering of TcIII with TcIVS in analyses of the COII-Nd1 locus (Machado & Ayala 2001), cytB (Marcili et al. 2009a, b) and MLST of kDNA (kMLST) (Messenger et al. 2012). These results support a mitochondrial introgression of TcIVS into the TcIII lineage. There are several pieces of evidence indicating that mitochondrial introgression currently occurs in T. cruzi and that DTU TcIV may be the kinetoplast donor. Messenger et al. (2012) reported two strains that closely clustered with certain TcI strains according to 25 microsatellite loci, but clustered with TcIVS according to kMLST. These authors proposed a recent event of mitochondrial introgression of TcIVS into the TcI genome. In addition, Roellig et al. (2013) observed eight events of introgression in North American T. cruzi isolates. In these cases, strains with a TcI nuclear genotype clustered with TcIVN according to the analysis of the COII-Nd1 kDNA fragment. The same pattern was observed for an isolate from Bolivia (GPI genotype = TcI, Nd1 genotype = TcIVS) (Barnabe & Breniere 2012). These results suggest that mitochondrial introgression is not an exceptional phenomenon in T. cruzi and that it appears to occur more frequently from TcIV to other lineages.
Based on the COII-Nd1 sequence, Lewis et al. (2011) proposed that multiple introgression events might have occurred between TcIII and TcIVS. Here, we collected evidence supporting the occurrence of multiple introgression events in the evolutionary history of TcIII. If only one introgression event had occurred into an ancestral TcIII, the TcIII strains should cluster together in a sister clade to TcIVS when kinetoplast sequences are analysed. However, we observed at least two clusters grouping TcIII and TcIV strains in the analysis of the cytB locus (Fig. 4). Consequently, we analysed a set of 11 strains corresponding to the TcIII, TcIVS, TcV and TcVI DTUs (Supplementary Table II) at three mitochondrial loci (Nd1, COII and CytB) with available sequences. We also included a TcIVN sequence as an outgroup. We observed that the TcIII-TcV-TcVI strains did not cluster into a single branch (Fig. 5). Instead, the TcIII-TcV-TcVI strains clustered into three different and strongly supported branches (Fig. 5). This observation cannot be explained by a single introgression event and thus must reflect several such events.
There are a few possible explanations for the observed incongruence between the nuclear and mitochondrial phylogenies. Incomplete lineage sorting is an unlikely one. Under the incomplete lineage sorting hypothesis, genetic exchange should have been at least moderately frequent in the TcI/TcIII/TcIV ancestor and the kDNA should have diverged into three sequence groups (TcI, TcIVS-TcIII and TcIVN) before the separation of TcI-TcIII and TcIV. This hypothesis accounts for the observed nuclear-mitochondrial incongruence. However, under incomplete lineage sorting, a large distance between TcIII and TcIV strains is expected because the kDNA would have diverged before the separation of the TcI-TcIII-TcIV cluster. Instead, the genetic distances between some TcIII and TcIV strains (Fig. 5) are relatively short (i.e., just one differential SNP is observed between M6241-TcIII and Saimiri3-TcIVS). Another hypothesis is that hybridisation events between TcIII and TcIVS were followed by several backcrosses of the hybrid strain with TcIII strains. In addition, because all the analysed TcIII strains carry a different TcIVS kDNA, introgression likely occurred during the TcIII lineage expansion and not just at the origin of the lineage. It is also likely that TcIV was already widely distributed before the TcIII expansion (Fig. 1, Supplementary Table III). This scenario of mitochondrial introgression during a species expansion was theoretically analysed a few years ago. Currat et al. (2008) proposed a neutral demographic model that predicts that when one species invades an area already occupied by a related species, asymmetrical introgression may occur, mainly from the local species towards the invader. Asymmetrical mitochondrial introgression has been observed for several animal and plant species (Currat et al. 2008) and even in algae (Neiva et al. 2010).
In addition, the model also predicts that introgression should be more frequent for DNA fragments with lower intra-species gene flow. In this sense, mitochondrial introgression is more probable than nuclear introgression in organisms with uniparental inheritance of mtDNA because of the lower gene flow among populations for the mitochondrial genome (Du et al. 2011). kDNA is uniparentally inherited in T. cruzi hybrids; hence, the kDNA should have lower interpopulation gene flow than a biparentally inherited locus if genetic exchange was at least moderately frequent. Consequently, if genetic exchange occurred at moderate frequency, at least at the expansion front of TcIII, the model proposed by Currat et al. (2008) may well explain the multiple asymmetrical mitochondrial introgression events observed for TcIII. Although true sexual mechanisms (meiosis-dependent) have not yet been described for T. cruzi and preponderant clonality is widely accepted (Tibayrenc & Ayala 2013), population data suggest that frequent genetic exchange may occur in certain restricted populations (Ocana-Mayorga et al. 2010, Baptista et al. 2014). Alternatively, an unconventional mechanism of mitochondrial transfer may explain the kDNA transfer, although no such mechanism has been described for any organism thus far. Whatever the mechanism of introgression, if TcIII was expanding, the allele surfing hypothesis (Klopfstein et al. 2006) (called here kDNA surfing) may be a good explanation for the fixation of the introgressed kDNA. The surfing hypothesis proposes that a rare allele originating on the edge of a wave of expansion may be propagated by the wave, reaching high frequencies or even fixation far away from its origin. In this sense, the introgressed kDNA may have been propagated by the wave of expansion, leading to its fixation across the whole TcIII DTU.
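The surfing dynamic can be illustrated with a minimal, purely hypothetical simulation of serial founder events along a one-dimensional expansion. All parameters are arbitrary and not fitted to T. cruzi data:

```python
import random

def surf(n_demes=30, deme_size=20, p0=0.05, migrants=2, seed=1):
    """Minimal 1-D range expansion with serial founder events: each new
    deme is founded by a few migrants from the current front deme, then
    drifts for one generation.  A rare neutral allele present at the
    expansion front can 'surf' to fixation in newly colonised demes.
    Purely illustrative; parameters are arbitrary."""
    rng = random.Random(seed)
    freqs = [p0]                       # allele frequency per colonised deme
    while len(freqs) < n_demes:
        founders = sum(rng.random() < freqs[-1] for _ in range(migrants))
        f = founders / migrants        # founder effect at the wave front
        f = sum(rng.random() < f for _ in range(deme_size)) / deme_size
        freqs.append(f)                # one round of drift, then append
    return freqs

# across many replicates, a ~5% allele is occasionally carried to fixation
# along the whole expansion axis, without any selection
fixed = sum(surf(seed=s)[-1] == 1.0 for s in range(200))
print(f"{fixed} of 200 replicate expansions fixed the rare allele")
```

The point is qualitative: repeated bottlenecks at the wave front let drift fix a rare variant far from its origin, which is the behaviour invoked for the introgressed TcIVS kDNA.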
The multiple introgressions observed for TcIII may be explained by this model and positive selection need not be invoked (although it may be implicated).
Any introgression hypothesis requires at least some overlap between the ecological niches of both DTUs. Although different ecological niches have been proposed for TcIVS (arboreal ecotope) (Marcili et al. 2009b, c) and TcIII (terrestrial ecotope) (Marcili et al. 2009b), an overlap of these niches is possible. In fact, Panstrongylus geniculatus (the main vector of TcIII in terrestrial mammals) has been reported in the arboreal ecotope in Amazonia and has even been found infected with TcIV (Marcili et al. 2009b). In addition, TcIV has been documented to infect nine-banded armadillos (Dasypus novemcinctus), at least in North America (Yeo et al. 2005, Roellig et al. 2013).
An alternative to the kDNA transfer from TcIVS to TcIII is introgression in the opposite direction (from TcIII to TcIVS). For this hypothesis to be plausible, TcI kDNA must have diverged before the separation of TcIV from the TcI-TcIII-TcIV ancestor (incomplete lineage sorting) and, subsequently, multiple introgressions must have occurred from TcIII to TcIVS. However, the most recent common ancestor (MRCA) of the TcIII-TcIV kDNA should then predate the divergence of TcI-TcIII-TcIV. Considering the relatively short distance between TcIII-TcIVS and TcIVN in relation to the inter-DTU relationships (Fig. 4), it is unlikely that the kDNA of both groups coalesced prior to the TcI-TcIII-TcIV divergence. Consequently, directional transfer from TcIVS to TcIII is more likely.
Finally, if TcIVS transferred its kDNA to TcIII, this lineage in turn transferred the TcIVS kDNA to the hybrid DTUs TcV and TcVI.

TcV and TcVI are hybrids that originated from independent hybridisation events between TcII and TcIII - Westenberger et al. (2005) proposed a single hybridisation event for the origin of the TcV and TcVI DTUs. Their model proposes that, after the hybridisation event between TcII and TcIII, the hybrid lineage diverged into the current DTUs TcV and TcVI. This was the most likely hypothesis according to their data. However, several lines of evidence suggest that two independent hybridisations occurred between TcII and TcIII. de Freitas et al. (2006) were the first to propose that two independent hybridisation events gave rise to TcV and TcVI, based on the extensive differences between TcV and TcVI haplotypes. In addition, if the hypothesis of a single hybridisation event were correct, TcV and TcVI would be expected to cluster together in a branch (Fig. 6A). Instead, the occurrence of at least two hybridisation events is supported by the clustering of one hybrid with its parental DTU for any allele (Fig. 6B). Machado and Ayala (2001) analysed the COII-Nd1 fragment sequence and observed for DTU TcVI that TcIII-like alleles clustered with TcIII strains instead of TcV (the same pattern exemplified in Fig. 6B).

Fig. 6: examples of haplotype topologies which are compatible with a single hybridisation event between TcII and TcIII (A) and incompatible with the hypothesis of a single hybridisation event (B). Arrows indicate when hybridisation events could have occurred in the haplotype history. Note that in A the TcV and TcVI haplotypes diverged after the hybridisation event, whereas in B the haplotypes diverged before the hybridisation events. It is important to consider that topology A is also compatible with multiple hybridisation events (particularly when the sampled TcIII strain is distantly related to the parental TcIII strain involved in the hybridisation). The same example applies to the TcII-TcV-TcVI haplotype history.

In addition, we analysed haploid sequences (inferred by PHASE) of 16 reference strains from the TcII, TcIII, TcV and TcVI DTUs (Supplementary Table I, strains 7-15 and 19-25). The TcV-TcVI cluster was observed only for two loci (Rb19 and Rho1), whereas clustering incompatible with the TcV-TcVI group was observed at six loci. The incompatibility at one of these six loci may be attributed to intralocus recombination in TcV (Rb19). However, the remaining five loci (CoAR, Met-II, MPX, Sod-B and Sttpf-2) clearly showed topologies similar to Fig. 6B, which provides evidence against a single hybridisation event (data not shown). These results are in agreement with the work of Flores-Lopez and Machado (2011), who showed that TcV and TcVI do not form a monophyletic group for TcII-like alleles (TcV clustered with TcII; branch support: bootstrap = 90, BP = 1). We reviewed the individual topologies for 30 loci analysed by Flores-Lopez and Machado (2011) and observed that 50% were incongruent with the clustering of TcV and TcVI. Instead, just four topologies grouped TcV and TcVI in a monophyletic branch. Finally, Lewis et al. (2011) observed for 28 microsatellite loci that most of the microsatellite alleles that discriminated between TcV and TcVI were also present in the parental DTUs. If those alleles had originated by divergence after a hypothetical TcV/TcVI ancestor, the occurrence of the same alleles in parental strains would require several homoplasy events, which is a less parsimonious hypothesis. Consequently, the hypothesis of independent hybridisation events is more parsimonious than the hypothesis of repeated homoplasy.
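The single- versus two-event comparison applied to each locus reduces to asking whether the TcV and TcVI haplotypes are each other's closest relatives. A minimal sketch (Python; the distances below are hypothetical, not measured values):

```python
# Sketch of the per-locus topology test implied by Fig. 6: under a single
# hybridisation (A), TcV and TcVI haplotypes are mutual closest relatives;
# under independent events (B), a hybrid haplotype sits closer to a
# parental haplotype.  Distances are invented for illustration.

def single_event_compatible(d_v_vi, d_v_parent, d_vi_parent):
    """True if the TcV and TcVI haplotypes are mutual closest relatives."""
    return d_v_vi < d_v_parent and d_v_vi < d_vi_parent

print(single_event_compatible(0.001, 0.004, 0.005))  # A-like pattern -> True
print(single_event_compatible(0.004, 0.001, 0.005))  # B-like pattern -> False
```

Counting how many loci return False, as done above for the PHASE-inferred haplotypes, is what argues against a single hybridisation event.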
About the phylogenetic position of TcBat - Recently, a bat-associated lineage was described based on cytB and a few nuclear genes (Marcili et al. 2009a, Pinto et al. 2012). This lineage was proposed to be closely related to TcI (Marcili et al. 2009a, Pinto et al. 2012). In this sense, additional markers such as nuclear MLST and kMLST will help to confirm this phylogenetic position of TcBat. Interestingly, Guhl et al. (2014) proposed that this group is ancestral to all DTUs, based on four kDNA fragments and four nuclear loci. However, they only showed a CytB phylogenetic tree supporting this basal position. In contrast, we observed that TcBat does not occupy a basal position in our analysis of CytB (Fig. 4), in agreement with the results of Marcili et al. (2009a) and Pinto et al. (2012). In addition, branch lengths and branch support were not reported by Guhl et al. (2014) to support the accuracy of their phylogenetic inference. Their conclusion may be biased by an incorrect selection of the model used in the Bayesian inference: they implemented a strict molecular clock for the cytB locus although the p value they reported (from a likelihood ratio test) rejected it. Unfortunately, no sequences for any locus were uploaded to GenBank and we could not repeat their analyses.
Estimating dates for T. cruzi evolutionary history - The first paper dating the age of T. cruzi proposed an ancient origin for the parasite (Briones et al. 1999). The MRCA of T. cruzi and T. cruzi marinkellei was dated at approximately 200-475 million years ago and the MRCA of the T. cruzi lineages was dated at 33-88 million years ago. However, most recent papers questioned the ancient origin hypothesis and proposed that the origin was very recent (Flores-Lopez & Machado 2011). We estimated divergence times for the phylogeny of T. cruzi by analysing nine out of the 13 MLST loci using BEAST software. A relaxed clock was favoured for eight of the nine loci according to the BF (> 0.5) (Supplementary Table III). Divergence times were considerably higher (Supplementary Table III) than those recently reported for different splits observed in the phylogenetic tree of T. cruzi (Flores-Lopez & Machado 2011). However, divergence times had wide confidence intervals, which reveals high uncertainty in age estimation. The wide intervals may be due to the low information level of each single locus. Consequently, we performed a STARBEAST analysis to combine information from different loci and make a joint estimation of the species tree and divergence dates. A topology similar to Fig. 1 was observed for inter-DTU relationships and we confirmed monophyly for the clusters TcI-TcIII-TcIV and TcI-TcIII. Divergence times for inter-DTU relationships are shown in Fig. 7.
The T. cruzi evolution model - The proposed model is shown in Fig. 8. According to our analyses, the T. cruzi ancestor separated from T. cruzi marinkellei approximately five to seven million years ago. This ancestor diversified approximately one to three million years ago into two different groups: TcII and TcI-TcIII-TcIV. TcIV separated first from the latter clade and, after this separation, TcIV diverged into two geographically differentiated groups (TcIV-S and TcIV-N). Subsequently, TcI-TcIII was divided into two different clades (0.37-1 million years ago). Incomplete lineage sorting may explain the existence of some topologies clustering TcIII and TcIV, although additional genes should be analysed to confirm this. After the TcI-TcIII split, TcIV-S transferred its kinetoplast to TcIII by an unknown mechanism of mitochondrial introgression. According to the proposed model, multiple introgression events occurred after the split of the TcI-TcIII clade and the TcIV-S kDNA surfed on the expansion wave of TcIII, becoming fixed in modern TcIII. In addition, the model of asymmetrical introgression for a range-expanding population may fit the observed kDNA pattern well, although further data should be collected to test this hypothesis. Finally and most recently, two independent hybridisation events between TcII and TcIII gave rise to the TcV and TcVI DTUs. Both of them are carriers of TcIV-S kDNA.
An integrated assessment of the quality of teaching Mathematics in school
This article presents a diagnostic criteria-based methodology for a systematic and complex assessment of the quality of teaching Mathematics in school. This methodology may help assess the prospects of applied approaches to teaching Mathematics as well as monitor and describe it in terms of ensuring high-quality general education in Mathematics. We used Kolmogorov's "convolution of qualities" as a basis for developing this methodology for the systematic and complex diagnostics of mathematical education in school. This methodology is also used to assess the quality and optimization of complex objects in mechanics, the chemical industry, economics, and higher education. The suggested integral assessment (systematic and complex diagnostics) was successfully applied to evaluate the quality of teaching Mathematics in schools of the Volgograd region. We defined three levels of mathematical education quality: discrete (a minimal and restricted level), fragmentary (an average functional level), and integrated and comprehensive (a rather high level).
Introduction
In Russia, the sociocultural, political, social and economic changes caused by globalization have triggered the need for changing the requirements for teaching Mathematics in school. Mathematical competence is believed to help comprehend modern world-scale information technology and ensure the social mobility of an individual. Nowadays, more and more people see high-quality education, particularly in Mathematics, as a means for social and professional growth. Teaching Mathematics in school depends on several factors: 1) the social-sector procurement, which is stipulated in legal documents, as well as social and personal needs; 2) the conditions that promote personal inclinations and needs in terms of studying Mathematics; 3) the possibilities to implement the professional skills of teachers of Mathematics; 4) the possibility to implement educational, methodological, and scientific innovations in teaching as well as the possibility to use the school's resource potential. It is believed that every school has its own models and procedures for teaching Mathematics. In its modern sense, the quality of teaching Mathematics does not exclusively amount to reaching the level of knowledge stipulated in the learning standards; it is also about the successful performance of school departments responsible for building competence.
These days, the school authorities understand the need to organize the study process in the most appropriate way. That is why the systematic and complex diagnostics of teaching Mathematics in school has become highly relevant. In addition, running diagnostics regularly ensures the cyclical nature of the study process. In this way, new goals are set on a new, higher level.
There is a great body of research concerning the issue of the quality assessment of teaching Mathematics. Unfortunately, the suggested systems for monitoring and assessment fail to be unified or comprehensive. Usually, the quality of teaching is assessed under the framework of student rankings that include the results of school leavers' academic achievements (for example, in Russia, there is mass testing after the 9th grade that is compulsory for everyone) [1].
S. Yu. Sergeeva and Ye. D. Obrevko suggest using a process approach to the assessment of the quality of teaching Mathematics with regard to three criteria groups: the quality of the study process, the quality of the results of the study process, and the quality of the study conditions [2]. V. A. Yasvin, S. N. Rybinskaya, S. A. Belova, and S. Ye. Drobnov suggest using a complex ranking for the assessment of the quality of teaching Mathematics. This ranking includes the academic results in the taught subjects and the indicators of schools' study conditions and potential for development [3].
Recently, the rank analysis has gained much attention in terms of the monitoring and assessment of the quality of education. This method is discussed in the works of R. V. Gurina and V. V. Bedash [4].
The question of how to assess the quality of teaching Mathematics in school is still highly relevant although there is a significant advancement in the research in this area.
Materials and Methods
Since we have to run diagnostics on an education system that has a complex structure, there can be some difficulty in developing a diagnostic criteria-based methodology. In this article, we suggest a methodology for an integrated assessment of the quality of teaching Mathematics in school. We used "a convolution of qualities" as a basis for developing this methodology [5]. The methodology includes the following provisions:
⎯ the quality of teaching Mathematics is assessed according to a set of criteria that form a complex hierarchical system [6];
⎯ separate qualities form the general quality and are convolved into a single criterion according to Kolmogorov-Nagumo averages [5];
⎯ the average is regarded as a function with values always belonging to the interval spanned by a certain set of argument values; introducing a normal interval sets a common scope for all separate qualities, and the use of averages helps present a single quality in terms of this scope.
This methodology is also used to assess the quality and optimization of complex objects in mechanics, the chemical industry, economics, and higher education [7,8].
According to the approach that we used to identify the criteria, there are four directions in the assessment of education quality:
⎯ matching the objectives (correspondence of the level of knowledge and abilities as well as the physical and moral development of school leavers) and results (as a means to achieve the objectives);
⎯ the content of the Mathematics curriculum that ensures personal development;
⎯ the nature of the study process that complies with the modern demands of such sciences as philosophy, psychology, pedagogy, and Mathematics teaching methodology;
⎯ providing the conditions to ensure the achievement of the objectives in teaching Mathematics [9].
We have defined the criteria according to the aforementioned directions. Thus, we have determined the indicators according to these four criteria. The identified criteria are to some extent interrelated. They can have a low or high indicator, and this will be reflected in the integral assessment of the quality of teaching Mathematics. Each of the four criteria consists of indicators. This helps assess the quality of education accurately and comprehensively. The indicators are rated on a five-point scale with integers from 0 to 4. With a hierarchical graph, we can analyze the qualities' indicators to create a unified criterion for the quality assessment (Figure 1). It is possible to express the relations between indicators and a unified quality provided the two following conditions are met:
⎯ the common scale for all indicators and the unified quality is normalized to the [0, 1] interval;
⎯ average functions are used to create the unified criterion.
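As a hypothetical illustration of the first condition, the five-point integer ratings can be mapped onto the common [0, 1] interval by dividing by the scale maximum (the function name and structure below are a sketch, not part of the original methodology):

```python
# Hypothetical sketch: indicator ratings are integers from 0 to 4;
# dividing by the scale maximum maps them onto the common [0, 1] interval.
def normalize(rating, scale_max=4):
    if not 0 <= rating <= scale_max:
        raise ValueError("rating outside the five-point scale")
    return rating / scale_max

print([normalize(r) for r in range(5)])  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Every indicator thus enters the convolution on the same scale, which is what makes the averaging step below well-defined.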
The integrated assessment of the quality of teaching Mathematics in school can be represented in the form of a four-level hierarchical system. It is defined by assessing the four integrated qualities of the system: Q is an integrated quality indicator, Q = Q(Q1, Q2, Q3, Q4), composed of the following criteria: Q1 - the scientific and objective criterion; Q2 - the action and organization criterion; Q3 - the content and quality criterion; Q4 - the quality and workspace criterion.
The quality of the model for teaching Mathematics in school (integrated assessment) Q is determined by the convolution of the normalized criteria of education system quality; those criteria are in turn determined by a convolution of the normalized indicators (a four-level convolution), e.g. Q4 = Q4(q41, q42, q43, q44).
The quality assessments of the scientific and objective, action and organization, content and quality, and quality and workspace criteria are obtained by the same convolution scheme. According to each criterion, we assessed the system sensitivity as a normalized unified function in terms of different parameters. The selected parameters ensure the function's appropriate sensitivity to changes in the system's input parameters, which means that the model describes the real situation correctly.
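A minimal sketch of the convolution step is given below. Since the source does not reproduce the concrete formulas, we assume for illustration that the Kolmogorov-Nagumo (quasi-arithmetic) mean is used with the logarithm as generator, i.e. a geometric mean, which keeps results inside [0, 1] and penalizes any weak indicator; the function names and example values are hypothetical:

```python
import math

def nagumo_mean(values, phi, phi_inv):
    # Kolmogorov-Nagumo (quasi-arithmetic) mean: phi_inv(average of phi(x)).
    return phi_inv(sum(phi(v) for v in values) / len(values))

def convolve(indicators):
    # Geometric mean as one admissible generator choice (phi = log);
    # indicator values must lie in (0, 1] for the logarithm to be defined.
    return nagumo_mean(indicators, math.log, math.exp)

# Four-level convolution, bottom up: indicators -> criterion -> integrated Q.
q4 = convolve([0.75, 0.5, 1.0, 0.5])   # e.g. Q4 = Q4(q41, q42, q43, q44)
Q = convolve([0.6, 0.55, 0.4, q4])     # Q = Q(Q1, Q2, Q3, Q4)
```

With phi equal to the identity the same function reduces to the arithmetic mean, so the choice of generator is exactly where a particular "convolution of qualities" is encoded.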
The suggested systematic and complex assessment of the quality of teaching Mathematics in school was developed with the software package VisSim; the system's hierarchy was represented with interconnected MathCad blocks.
The suggested methodology and the automated program help quickly assess mathematical education in school according to various indicators. In the future, the aforementioned systematic and complex diagnostics can be easily expanded by introducing new parameters into the system. Adding a new block (which optimizes the function) will help find optimal values of the parameters for decision-making. Moreover, this methodology will help assess the potential of mathematical education as well as describe and monitor it in terms of ensuring the quality of education.
As an integrated approach, the mathematical education has the following levels of integrity: 1) non-integrated; 2) integrated; 3) wholly integrated [10]. Thus, we defined three levels of mathematical education quality: discrete (minimal and restricted level), fragmentary (average functional level), and integrated and comprehensive (a rather high level).
In the discrete level of mathematical education, its components are fragmentary and autonomous. There is no or little consistency in aims, values, approaches, and external connections within the education structure. The education patterns are monotonous. Moreover, the education process is cyclical.
In the fragmentary level of mathematical education, the system's elements are interconnected only locally. There is no systematical approach since adopting aims and other aspects of mathematical education is sporadic. Teachers and management take part in the organizational process only episodically.
In the integrated and comprehensive level of mathematical education, the approach to education is complex, comprehensive, and well-organized. There are diverse education patterns that are relevant in terms of the current social and economic context.
The aforementioned levels are interconnected. Each of the subsequent levels includes the features of prior levels but improves them.
This criteria and diagnostic methodology for assessing the quality of mathematical education in school provides detailed characteristics of the educational initiatives in developing and implementing the mathematical curriculum. This is crucial for defining pedagogical goals and educational content as well as for selecting methods and techniques to ensure high-quality education.
We have studied the quality of teaching Mathematics in schools in Volgograd and the Volgograd region. We have taken the academic year as a point of reference.
We have defined the objectives of the systematic and complex diagnostics:
⎯ to define the quality levels of mathematical education in terms of quantitative and qualitative indicators;
⎯ to represent the performance in terms of quantitative and qualitative indicators;
⎯ to conduct a comparative analysis of the quality levels of mathematical education according to the type of the institution (lyceums, gymnasiums, secondary schools, urban schools, and rural schools);
⎯ to define problems that teachers of Mathematics have to deal with, and to discuss possible solutions.
In this diagnostics, we complied with the following requirements:
⎯ the "pedagogical" nature of the diagnostics: not only did we aim to obtain certain data but also to analyze them and use them further;
⎯ feasibility: the methodology does not require time-consuming processing;
⎯ objectivity, ensured by expert assessment;
⎯ relevance of the applied methodology, reflecting various conditions and factors that affect mathematical education in school.
In the Volgograd region, the mathematical curriculum for the 5th-9th grades is taught in a five-year time frame; for the 10th-11th grades, it is two years. Mathematics is taught at basic, specialized (10th-11th grades), and advanced (10th-11th grades) levels. According to the specific features of the implemented programs, the teacher can decide on lesson plans and the number of hours for every topic. The schools can also introduce additional hours. In the Volgograd region, there are 1932 teachers of Mathematics in secondary school: 397 teachers have the highest qualification, 907 teachers have the first qualification, 225 teachers have a qualification corresponding to their position; 1785 teachers have a degree in higher education (92%). There are 1080 teachers of Mathematics in higher school: 302 teachers have the highest qualification, 544 teachers have the first qualification, 80 teachers have a qualification corresponding to their position; 1052 have a degree in higher education (97%). In Volgograd, every second teacher has the highest qualification. In this diagnostics, we take into account that teachers of Mathematics have taken courses in developing an innovative mathematical curriculum at the Volgograd State Academy of Postgraduate Education (VGAPO) [11,12]. As a result, we have obtained a unified indicator of the quality of mathematical education Q = 0.52 (52%) for schools in the Volgograd region. Thus, we can state that the level of mathematical education is not sufficiently high.
The results of the Russian State Exam on Mathematics seem to support this claim. The quality of knowledge is 42-42%. In recent years, the level of school leavers' mathematical education amounts to basic knowledge.
Conclusions
The diagnostics of the quality of mathematical education in schools of the Volgograd region has helped define the reasons inhibiting the achievement of the goals stated in the Federal Standards [13] and the Concept for mathematical education [14].
According to the results, we have made the following conclusions. School teachers still have a tendency to avoid changes coming from the outside. The majority of teachers seem not to be ready to implement innovative methods and techniques; they tend to use unified approaches and reproductive activities that do not let the students unleash their potential. Moreover, teachers do not seem to act cooperatively and work as a team to develop creative projects.
There are several reasons that inhibit the quality of mathematical education: for example, the authoritarian management style, mismanagement of resources, and an imbalance in the interests of actors in the educational system. In order to eliminate these problems, the following actions can be taken: to shift away from the dominant management style in order to improve the quality of education; to change the teachers' attitude to the aims set in the curriculum; to shift from the traditional pedagogical environment to an innovative one; to develop professional competence in building an integral educational process in line with modern realities.
The suggested procedures for the quality assessment of teaching Mathematics in school can be used as a methodology for further studies.
Gastrointestinal Stromal Tumor of the Rectovaginal Septum, a Diagnosis Challenge
Introduction
Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumors of the gastrointestinal tract. These are rare tumors representing approximately 0.1-3.0% of all gastrointestinal cancers and approximately 5% of all soft tissue sarcomas (Reid et al., 2005; Fletcher et al., 2002). Due to their similar appearance by light microscopy, GISTs were previously thought to be smooth muscle neoplasms, and most were classified as leiomyosarcomas (Reid et al., 2005) (Table 1). The precise cellular origin of these tumors has been proposed to be the interstitial cell of Cajal, an interstitial pacemaker cell (Connolly et al., 2003). It is important to differentiate between GISTs, which constitute approximately 80% of gastrointestinal mesenchymal tumors, and the less common gastrointestinal non-epithelial neoplasms: leiomyoma, leiomyosarcoma (10-15% of mesenchymal tumors), schwannomas (5%), and other malignant disorders. Nearly all GISTs (90-100%) display strong immunohistochemical staining for kit (CD117), and this can be used in their differential diagnosis and positive identification. Smooth muscle neoplasms and neurogenic tumours (schwannoma) typically do not show positive expression of CD117, but can be distinguished from GISTs by histological and clinical means. It is recommended that CD117 immunostaining be performed to facilitate the diagnosis of GISTs for spindle cell or epithelioid tumors arising in the gastrointestinal tract. Diagnosis, however, should not be based purely on CD117 expression. The diagnosis of CD117-negative GISTs should only be made with extreme care. If there is evidence of desmin or S-100 expression and the tumor is not associated with the gut wall, then a diagnosis of a kit-negative GIST should not be made. Mutations of kit are common in malignant GISTs and lead to constitutive activation of tyrosine kinase function, which causes cellular proliferation and resistance to apoptosis.
Staining for the myeloid stem cell antigen CD34, positive in 53% to 71% of cases, is also important (Connolly et al., 2003; De Matteo et al., 2000; Saund et al., 2004; Yamamoto et al., 2004).
Table 1. Immunohistochemical schema for the differential diagnosis of spindle cell tumors of the gastrointestinal tract (Reid et al., 2005).
Table 2. Site of GISTs (Reid et al., 2005):
Stomach: 60-70%
Small intestine: 20-30%
Oesophagus, mesentery, omentum, colon and rectum: 10%

GISTs are rare before the age of 40 years and very rare in children, with a median age of 50-60 years. Some data show a slight male predominance (Reid et al., 2005). The symptoms of GISTs (Table 3) (Nickl et al., 2004; Reid et al., 2005; Saund et al., 2004) are non-specific and depend on the size and location of the lesion. Small GISTs (2 cm or less) are usually asymptomatic and are detected during investigations or surgical procedures for unrelated causes. The vast majority of these are low-risk for malignancy. The most common symptom is gastrointestinal bleeding, which is present in approximately 50% of patients. Systemic symptoms such as fever, night sweats, and weight loss are common in GISTs and very rare in other sarcomas. Patients with larger tumors may experience abdominal discomfort or develop a palpable mass. Up to 25% of patients present with acute haemorrhage into the intestinal tract or peritoneal cavity from tumor rupture. Symptomatic oesophageal GISTs typically present with dysphagia, while gastric and small intestinal GISTs often present with vague symptoms leading to their eventual detection by gastroscopy or radiology. Most duodenal GISTs occur in the second part of the duodenum, where they push or infiltrate into the pancreas. Colorectal GISTs may manifest with pain, gastrointestinal obstruction, and lower intestinal bleeding. Rectal tumors are usually deep intramural tumors.
Table 3. Symptoms of GIST at diagnosis (Reid et al., 2005):
Abdominal pain: 20-50%
Gastrointestinal bleeding: 50%
Gastrointestinal obstruction: 10-30%
Asymptomatic: 20%

The pathogenesis of GISTs has been established by the observation that kit is highly expressed and mutated in almost all tumors (Taniguchi et al., 1999). The use of antibodies to kit, as part of an immunohistochemical panel and in combination with traditional histological and clinical examinations, means that it is possible to clearly distinguish GISTs from other gastrointestinal tract tumours. In addition, the tyrosine kinase inhibitor imatinib mesilate (Gleevec™) represents a major breakthrough in the treatment of GISTs, as it has significant antitumour activity in these neoplasms, which are generally resistant to cytotoxic chemotherapy (Zalupski et al., 1991; Ronellenfitsch et al., 2008). A second targeted tyrosine kinase inhibitor, sunitinib malate (Sutent™), has been approved for the treatment of imatinib-resistant gastrointestinal stromal tumors (Raut et al., 2007). Surgical resection is the principal treatment for GISTs. Evaluation of the resectability of a GIST is determined by the surgeon and depends on the stage and the individual patient's fitness for surgery. The primary goal of surgery is complete resection of the disease with avoidance of tumor rupture. Care is necessary as GISTs are often soft and fragile, and tumor rupture may seed implants in the peritoneal cavity and liver. A wide local resection with macroscopic removal of the entire tumor to achieve microscopic clearance is recommended. An adequate cancer margin is considered to be 2 cm (Reid et al., 2005), but this is not always possible. It is recommended that all patients should be followed up. Observation is the current standard of care after complete resection of a primary tumor. Following initial assessment, high-risk tumors should have computed tomography (CT) every 6 months for 3 years.
However, in all cases, if symptoms become evident, an early CT may be appropriate. Regardless of risk, clinic review should be indefinite, as these tumors may recur several years after an apparently curative resection. Before Gleevec™ was approved, there was no accepted adjuvant therapy regimen.
Gleevec™ is generally well tolerated at doses up to 800 mg/day. Toxicities include nausea and vomiting, diarrhoea, myalgia, skin rash and occasional neutropenia (Table 4). Although frequent, these toxicities rarely require withdrawal of Gleevec™.
Table 4. Very common (>1/10) adverse reactions with imatinib mesylate (Reid et al., 2005):
Blood and lymphatic system disorders: neutropenia, thrombocytopenia, anaemia
Nervous system disorders: headache
Skin and subcutaneous tissue disorders: periorbital oedema, dermatitis/eczema/rash
Musculoskeletal, connective tissue and bone disorders: muscle spasm and cramps, musculoskeletal pain including arthralgia
General disorders and administration site conditions: fluid retention and oedema, fatigue
Gastrointestinal stromal tumours of the rectovaginal septum
GISTs located outside the gastrointestinal tract (extragastrointestinal stromal tumors, EGISTs) are very uncommon, and those that arise in the rectovaginal septum are highly infrequent entities that pose a challenge due to the lack of diagnostic suspicion (Ceballos et al., 2004; Hellan et al., 2006; Lam et al., 2006; Marcos et al., 2010; Mussi et al., 2008; Nagase et al., 2007; Nasu et al., 2004; Takano et al., 2006; Tooru et al., 2001; Valera et al., 2008; Weppler et al., 2005; Zang et al., 2009). The main differential diagnoses of EGISTs of the vagina and rectovaginal septum are leiomyoma and leiomyosarcoma. Like GISTs, both leiomyoma and leiomyosarcoma are rare primary lesions of the vagina. Histologically, leiomyomas and leiomyosarcomas are usually composed of spindle cells that are arranged in fascicles. In contrast to GISTs, which have very fibrillary, pale pink cytoplasm, smooth muscle tumors have dense, brightly eosinophilic cytoplasm. In addition, leiomyosarcomas tend to exhibit pleomorphism, which is unusual in GIST; smooth muscle tumors are immunoreactive for smooth muscle actin and desmin and are negative for kit (CD117). Like GISTs, smooth muscle tumors can be positive for CD34. Epithelioid smooth muscle tumors can mimic both epithelioid GIST and carcinoma, which are the most likely soft tissue neoplasms to arise in this location. Carcinomas are usually strongly positive for cytokeratins, whereas GISTs rarely express this antigen. Nerve sheath tumors, especially schwannomas, are diffusely and strongly positive for S-100 protein and negative for kit. Aggressive angiomyxoma is rare but tends to occur in the deep soft tissues of the vulva and vagina. In contrast to GIST, these lesions are always paucicellular, contain myxoid stroma and a prominent vascular pattern, are positive for actin, desmin, estrogen receptor, and progesterone receptor, and are negative for kit.
Angiomyofibroblastoma is another spindle cell lesion that enters into the differential diagnosis. These lesions are typically located in the superficial soft tissues and are variably cellular with a prominent vascular pattern. They are negative for kit and positive for actin, desmin, estrogen receptor and progesterone receptor. Dermatofibrosarcoma protuberans can also arise in the vulvovaginal region; these tumors are uniformly positive for CD34 but can be distinguished from GIST because they are negative for kit. Because of their malignant potential and recent advances in the management of GISTs with imatinib mesylate (Gleevec™) (De Matteo et al., 2007; Park et al., 2008; Verma et al., 2009), it is imperative that these tumors are diagnosed correctly despite the similarity in their structure and size. Conventional radiotherapy and chemotherapy are useless in the treatment of these tumors, which makes their correct diagnosis all the more important. The current definitive treatment for GIST, including EGIST, is surgical. In this chapter, we describe a recent case of EGIST located in the rectovaginal septum and a review of the recent literature in this field.
Case report
We report the case of a 75-year-old woman with a GIST tumor in the rectovaginal septum. She consulted because of unpleasant sensations in the vagina and constipation that had started a few months earlier. Colonoscopy (Fig. 1) revealed a probable submucosal tumor of 4 cm in the anterior wall of the lower rectum, but it could not confirm the origin (gynecological or gastrointestinal). The patient was admitted to our Department for close examination and eventual treatment.
During the physical examination, we observed a tumor of about 5 cm bulging into the posterior wall of the vagina; pelvic examination (Fig. 2) revealed a 5 cm hard, well-circumscribed, heterogeneous tumor with a clear border in the rectovaginal space. Transvaginal ultrasound showed normal atrophic internal genitalia and a well-delimited tumor of about 4-5 cm; the mass appeared solid with high vascularization. Nuclear magnetic resonance imaging (NMRI) of the abdomen and pelvis was performed, and it showed (Fig. 3, 4 and 5) the origin of the tumor in the anterior wall of the lower rectum. Tumor markers such as carcinoembryonic antigen (CEA), alpha-fetoprotein (AFP), and CA 19.9 were negative. Transvaginal biopsy was performed, and the specimen was histologically diagnosed as a gastrointestinal stromal tumor (c-Kit +) of intermediate risk of malignancy. A transvaginal tumor enucleation was performed with the collaboration of the Department of Surgery, and perineal reconstruction was also done (Fig. 6, 7, 8 and 9). After surgery, adjuvant treatment with imatinib mesylate (Gleevec™, 400 mg per day in an oral dose) was established without important toxicity. After one year of this treatment, the patient was disease free, and now, 3 years later, she remains in routine follow-up with no evidence of disease.
Discussion
EGISTs comprise about 5-7% of all GISTs. The majority involve the mesentery, omentum, and retroperitoneum. Only eleven cases presenting as a vaginal mass have been reported since 2004 (Table 5). Because EGISTs in this location arise in the pelvic cavity, particularly adjacent to the female genital tract, the patient's chief complaints may be compression of local organs, leading to symptoms such as urinary frequency and constipation, or a mass with no symptoms. This coincides with the case we present, in which the main symptoms were a sensation of a vaginal mass and constipation.
Other lesions that more frequently present as a vaginal mass are cysts (Gartner's duct cysts, Müllerian cysts, Bartholin's gland cysts) and rectovaginal septum endometriosis. Our patient's age at diagnosis, 75 years, is the oldest among the reported cases; in the eleven published cases, the median age was 55. Combined vaginal and rectal examination is essential in the diagnosis of rectovaginal masses to determine the size, mobility, and consistency of the tumor. In our case, we found a soft, cystic, multi-lobed, and mobile tumor, but in other reported cases the consistency was hard, which may be related to the size and degree of malignancy.
Transvaginal ultrasound is the most widely used imaging test to complete the diagnosis. A solid mass with low echo levels, similar to a uterine fibroid, is the most characteristic ultrasonographic finding. NMRI and CT scans can help determine the origin, size, and relationships of the mass and allow overall assessment of the pelvis. In our case, NMRI confirmed the origin of the mass in the wall of the rectum. Histologically, EGIST often presents as spindle cells and should therefore be included in the differential diagnosis of spindle-cell neoplasms, as it can be confused with the more common leiomyoma or leiomyosarcoma (Lam et al., 2006; Miettinen et al., 2001). Some authors have reported that immunohistochemistry with antibodies against c-kit protein (CD117) and CD34 is reliable and valuable for the diagnosis of EGIST (Connolly et al., 2003; De Matteo et al., 2000; Saund et al., 2004). GIST typically expresses CD117, often CD34, and sometimes SMA and S-100, but expression varies by site. Since the incidence of rectovaginal GIST is much lower than that of GIST in the stomach or small intestine, the clinicopathological profiles have not yet been accurately characterised, and there is therefore a tendency to apply the same prognostic factors as for such tumors at other sites, particularly gastric GIST. The most important and easily applicable histological criteria for predicting GIST behaviour are size and mitotic rate. A rate of ≤ 5 mitoses per 50 HPF is commonly used as the limit for a tumor with expected benign behaviour, and according to a large study, this can discriminate between benign and malignant tumors, especially gastric GIST (Miettinen et al., 1999). Tumors of ≤ 2 cm in diameter are generally expected to behave in a benign fashion. Tumors of 5-10 cm in diameter have a better prognosis than those of > 10 cm in diameter. Degrees of cellularity and atypia have also been suggested as useful criteria.
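The size and mitotic-rate heuristics described above can be summarized, purely for illustration, as a rule-of-thumb classifier. This is a hedged sketch of the rough criteria in the text (≤ 5 mitoses per 50 HPF and small size suggesting benign behaviour), not a validated clinical risk tool; the function name and category labels are our own.

```python
# Illustrative sketch only -- NOT a clinical decision tool.
# Encodes the rough heuristics discussed in the text: a mitotic rate
# of <= 5 per 50 HPF and small tumor size suggest benign behaviour,
# while a high mitotic rate or size > 10 cm suggests higher risk.

def gist_risk_sketch(size_cm: float, mitoses_per_50_hpf: int) -> str:
    """Return a rough prognostic label from tumor size and mitotic count."""
    if size_cm <= 2 and mitoses_per_50_hpf <= 5:
        return "expected benign behaviour"
    if mitoses_per_50_hpf > 5 or size_cm > 10:
        return "higher risk of malignant behaviour"
    return "intermediate risk"

# The case reported here (about 5 cm, 16 mitoses per 50 HPF on final
# histology) falls in the higher-risk group:
print(gist_risk_sketch(5, 16))  # -> higher risk of malignant behaviour
```

Any real risk assessment must, of course, rely on the published stratification tables and full histological evaluation rather than a two-variable heuristic.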
It is generally agreed that complete surgical resection with negative tumor margins is the principal curative procedure for primary, non-metastatic tumors, particularly those at low risk. Neoadjuvant imatinib mesylate (Gleevec™) may enhance the resectability of inoperable malignant GIST and may allow for optimal surgical timing. Imatinib therapy is also used in the adjuvant post-operative treatment of high-risk tumors or in cases of incomplete surgical resection. In five of the eleven reported cases, surgery was the first-line treatment; in two further cases (like the one we report), tumor excision plus imatinib mesylate was used. In only one case was there evidence of metastatic disease, and imatinib mesylate was the therapeutic choice. In ten of the eleven reported cases, the tumor was diagnosed before there was evidence of metastatic disease, probably because the location in the rectovaginal septum allows early detection (Zang, 2009).
As lymph node metastasis occurs infrequently (<10%), extensive lymphadenectomy need not be performed (Miettinen et al., 1999). However, despite complete resection with pathologically confirmed negative margins, the majority of tumors recur. In the eleven published cases, five recurred from several months to ten years after primary treatment. While the majority of patients initially benefit from tyrosine kinase inhibitors, it is now clear that resistance commonly develops. Indeed, the median time to progression on imatinib mesylate is 2 years (De Matteo et al., 2007).
In our case, the final histological study revealed a gastrointestinal stromal tumor with a high risk of malignancy (16 mitotic figures per 50 high-power fields). We used Gleevec™ as adjuvant therapy, and the patient showed good tolerance to the drug. The evolution has been very favorable: almost three years have passed, and our patient is alive and free of disease. The lack of large series of patients under long-term follow-up makes it difficult to assess the necessary extent of surgical resection and the indications for treatment with imatinib.
Conclusion
In the past years, there have been significant developments in the understanding of GISTs and their response to therapy. Many questions remain unanswered, and new issues have arisen as the benefits of imatinib mesylate therapy are revealed. EGISTs that present as gynecologic masses are rare but may be more common than is currently recognized. Misdiagnosis may lead to inappropriate therapy because conventional radiotherapy and chemotherapy are not effective in the treatment of GISTs, whereas imatinib mesylate (Gleevec™) has a proven role in managing these tumors. Thus, it is important and necessary to consider EGISTs in the differential diagnosis of mesenchymal neoplasms of the vulvovaginal region and rectovaginal septum. The most common symptoms are due to compression of adjacent organs: discomfort, a feeling of a lump, dyspareunia, or constipation. The differential diagnosis includes leiomyomas and vaginal cysts (Gartner's duct cysts, Müllerian cysts, Bartholin's gland cysts). GIST typically expresses CD117, often CD34, and sometimes SMA and S-100, leading to the definitive diagnosis in biopsy samples. The prognosis is determined by the size and mitotic count. Treatment relies on surgical excision of the tumor, and imatinib mesylate has shown efficacy as neoadjuvant and adjuvant monotherapy.
Acknowledgment
We want to thank the Surgery and Pathology Departments of our hospital for their help in the management of this case.
Left Ventricular Wall Reconstruction with Autologous Vascularized Tunica Muscularis of Stomach in a Porcine Pilot Model
Introduction: Surgical replacement of dysfunctional cardiac muscle with regenerative tissue is an important option to combat heart failure. However, currently available myocardial prostheses such as Dacron or pericardium patches neither have regenerative capacity nor actively contribute to the heart's pump function. This study aimed to show the feasibility of utilizing a vascularized stomach patch for transmural left ventricular wall reconstruction. Methods: A left ventricular transmural myocardial defect was reconstructed by performing transdiaphragmatic autologous transplantation of a vascularized stomach segment in six Lewe minipigs. Three further animals received a conventional Dacron patch as a control treatment. The first 3 animals were followed up for 3 months until planned euthanasia, whereas the observation period for the remaining 3 animals was scheduled 6 months following surgery. Functional assessment of the grafts was carried out via cardiac magnetic resonance imaging and angiography. Physiological remodeling was evaluated histologically and immunohistochemically after heart explantation. Results: Five out of six test animals and all control animals survived the complex surgery and completed the follow-up without clinical complications. One animal died intraoperatively due to excessive bleeding. No animal experienced rupture of the stomach graft. Functional integration of the heterotopically transplanted stomach into the surrounding myocardium was observed. Angiography showed development of connections between the gastric graft vasculature and the coronary system of the host cardiac tissue. Conclusions: The clinical results and the observed physiological integration of gastric grafts into the cardiac structure demonstrate the feasibility of vascularized stomach tissue as a myocardial prosthesis. The physiological remodeling indicates a regenerative potential of the graft.
Above all, the connection of the gastric vessels with the coronary system constitutes a rationale for the use of vascularized and, therefore, viable stomach tissue for versatile tissue engineering applications.
Introduction
Chronic left-sided heart failure is one of the most common cardiovascular diseases in western societies [1]. In severe cases, surgical reconstruction of the ventricle can be performed. This surgical intervention aims for both the reduction of the enlarged ventricle and the restoration of its physiological ellipsoid shape [2]. Synthetic patch materials, such as Dacron, are often required for the reconstruction [3]. However, synthetic materials cannot undergo growth and do not have regenerative potential. Moreover, such akinetic or dyskinetic grafts do not actively contribute to the ventricular output, with subsequent disadvantages especially after large-scale tissue replacement.
Therefore, biological and regenerative approaches have been attempted [4], such as supporting ischemic myocardium with skeletal muscle [5,6]. These approaches were driven by the anticipation of physiological remodeling of the applied tissues and their acquisition of specific myocardial functions such as contraction and conduction of cardiac excitation. When used as patch material for the left ventricle, the substrates need to withstand an intracardiac blood pressure of up to 240 mm Hg. This mechanical requirement calls for stronger tissues of greater thickness. However, supplying oxygen and nutrients via diffusion, as well as removing metabolic wastes, becomes problematic for graft tissues thicker than 100 μm [7]. Adequate vascularization of the substitute myocardial tissue is key to ensuring viability and functionality. Therefore, we employed a piece of stomach, including its arterial and venous vessels and their native connection to the systemic circulation, which we translocated from the abdomen to the thoracic cavity through the diaphragm. Hence, in the present study, we examined the applicability and the physiological integration of autologous vascularized stomach tissue as a prosthetic material for full-wall replacement of dysfunctional left ventricular myocardium in a swine model.
Study Design
All experiments were carried out according to the European Convention on Animal Husbandry and the ARRIVE guidelines where applicable [8]. The study was approved by a competent authority (LAVES, Niedersächsisches Landesamt für Verbraucherschutz und Lebensmittelsicherheit, Lower Saxony, Germany) in accordance with Article 8 Paragraph 1 of the Animal Welfare Act, Civil Code 1. IS 01484 (#07/1353, and #08/1604). The Ruthe Teaching and Research Farm of the University of Veterinary Medicine Hannover, Foundation provided all animals.
A left ventricular transmural myocardial defect was covered with an autologous vascularized segment of stomach tissue in Lewe mini pigs (n = 6) with an average weight of 31 kg. No pig showed signs of clinical impairments prior to surgery, which would have been an exclusion criterion. Euthanasia and explantation of the grafts were planned 3 (group 3M), and 6 (group 6M) months after surgery. Three animals served as a control group, in which the myocardial defect was covered by a Dacron patch (Dacron group, n = 3).
First, a circular piece of the greater curvature of the stomach, 4 cm in diameter, was isolated via median laparotomy while maintaining the gastroepiploic vascular supply. The resulting defect of the stomach was immediately closed with a continuous suture (PDS 2.0; Ethicon, Norderstedt, Germany). Surgical access to the thoracic cavity was achieved via an extension of the laparotomy as a left-lateral thoracotomy in the fourth intercostal space. The cannulas for cardiopulmonary bypass (Stöckert S3; Sorin Group Germany GmbH, München, Germany) were placed into the carotid artery and, via the inferior vena cava, into the right atrium. Cardiopulmonary bypass was started following systemic heparinization (400 international units [IU]/kg body weight; Heparin-Natrium-25000; Ratiopharm, Ulm, Germany) at a flow rate of 60-80 mL/kg body weight/min, aiming for a mean blood pressure of 50-60 mm Hg at a body temperature of 28-30°C. The animals were cooled to 28°C because hypothermia was shown in preliminary test series to have a preventive effect against heart rhythm disorders.
After opening the pericardium and following induction of myocardial fibrillation, a piece of myocardium with a diameter of approximately 4 cm was resected from the anterolateral area of the left ventricle. Then, the excised piece of the stomach including the vascular pedicle was transdiaphragmatically transferred into the thoracic cavity. The lamina mucosa was mechanically removed before the patch was grafted into the left ventricular defect using the single-button technique (Polyprolene 4.0; Ethicon, Norderstedt, Germany) (Fig. 1, bottom right). A nonresorbable Dacron patch was used for surgery of the control animals instead of the stomach segment.
The heparin effect was then antagonized by protamine (400 IU/kg body weight; Medapharma, Dübendorf, Switzerland). After decannulation, hemostasis, rewarming to 37°C body temperature, and stabilization of the circulation were carried out. Then, readaptation of the ribs (Mersilene 2.0; Ethicon, Norderstedt, Germany) as well as the muscle layers of both the thoracotomy and laparotomy (Vicryl 2.0; Ethicon, Norderstedt, Germany) was performed using continuous suture. This was followed by a Donati suture (CBX1 Vicryl; Ethicon, Germany) and sealing of the wound with aluminum spray (Almapharm, Germany).

Cardiac MRI

Cardiac magnetic resonance imaging (MRI) was performed with a 1.5 Tesla MR scanner (Genesis Sigma CVI; GE Healthcare, Solingen, Germany). The MRI investigation of the animals took place under general anesthesia 3 months following surgery and immediately prior to scheduled euthanasia. Before the MRI examination, anesthesia was induced with ketamine (20 mg/kg body weight i.m.; B. Braun) and propofol (Propofol-Lipuro 1%, 4-6 mg/kg body weight, i.v.; B. Braun, Melsungen, Germany) and thereafter maintained with inhalative 2% isoflurane in oxygen.
Cardiac MRI was done using a four-element phased-array receiver coil. ECG-gated, breath-hold balanced steady-state free precession gradient echo sequences (FIESTA) in the short-axis view were used for the quantitative evaluation of left ventricular volumes and function. Additionally, late enhancement imaging was performed using a T1-weighted inversion recovery gradient echo of a series of short axes. For quantitative analysis, endocardial and epicardial contours were traced manually in all end-systolic and end-diastolic short-axis slices between the atrioventricular plane and the apex of the heart using cvi42 software version 4.1 (Circle Cardiovascular Imaging Inc., Calgary, AB, Canada).
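For reference, the ejection fractions derived from the traced contours follow the standard formula EF = (EDV - ESV) / EDV × 100, where EDV and ESV are the end-diastolic and end-systolic volumes. A minimal sketch of this calculation (the volume values are hypothetical, chosen only for illustration):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """LV ejection fraction (%) from end-diastolic (EDV) and
    end-systolic (ESV) volumes traced on the short-axis slices."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Hypothetical volumes, for illustration only:
print(round(ejection_fraction(70.0, 20.0), 1))  # -> 71.4
```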
Angiography
Angiography was carried out under general anesthesia immediately prior to euthanasia, which was performed via injection of phenobarbiturate (450 mg/kg body weight; WDT, Wertingen, Germany). A median sternolaparotomy was chosen as the surgical access path. After systemic heparinization (400 IU/kg body weight; Heparin-Natrium-25000; Ratiopharm), the graft's gastroepiploic artery was incised to allow the antegrade insertion of a cannula (Vasofix® Safety, 22G; B. Braun, Melsungen, Germany). A nonionic contrast agent (Imeron 350®; Bracco-Byk Gulden, Konstanz, Germany) was applied via the cannula. The screening was performed using a C-arm X-ray unit.
Histology
The explanted graft tissue was stained with hematoxylin-eosin and Movat pentachrome for histological characterization. For the immunohistochemical analysis, samples were fixed in Tissue-Tek (Sakura Finetek, Torrance, CA, USA) and flash frozen with liquid nitrogen (Messer Griesheim GmbH, Krefeld, Germany). To differentiate between the smooth musculature of the stomach tissue and the myocardium, double staining with different antigen specificity for troponin T and the myosin heavy chain proteins of smooth muscle cells was performed. Connexin 43 staining was used to reveal gap junctions. General cell nucleus staining was performed using 4′,6-diamidino-2-phenylindole (DAPI).
Clinical Results
Eight out of nine animals survived the surgical procedure without significant postoperative complications until planned termination of the observation period. One animal died intraoperatively because of excessive bleeding due to unmanageable leakage of the anastomosis. No animal died postoperatively due to rupture of the gastric graft. Several days after the procedure, the animals exhibited normal eating behavior and activity. Neurological abnormalities were not observed. No wound infections occurred.
Magnetic Resonance Imaging
The animals of the Dacron group reached a mean LVEF of 71.3%, whereas the left ventricular ejection fractions of the test animals ranged from 10 to 55%. The MRI examination revealed a covered perforation of the graft in the animal with the intraventricular thrombus (Fig. 2, top C, D). In other animals, there was dilatation with hypokinesia of the left ventricular myocardium at the boundary between the myocardium and the stomach tissue, but no rupture of the gastric patch (Fig. 2, top B, bottom).
The pumping function of the left ventricle improved in one animal over the course of the observation period. There was even regression of a small aneurysm in one animal of group 6M (6 months).
A late enhancement was consistently detected in the boundary zone between the myocardium and the stomach patch. This indicates the formation of connective tissue as part of the biological integration of the graft into the myocardium (Fig. 2, top).
Angiography
No stenoses, embolisms, or aneurysms of the grafts' vasculature were diagnosed. There was no evidence of insufficiently perfused regions of the grafts. The antegrade infusion of contrast agent into the gastroepiploic artery flowed into the native coronary vessels of the myocardium in each animal (Fig. 3). The movements of the stomach segment indicated beat-synchronous motility.
Macroscopic Findings
There were connective tissue adhesions between the transplanted stomach segment, the surrounding epicardium, the pericardium, and the pleura (Fig. 1, top left). After opening the explanted hearts, the pleated aspect of the contractile muscular stomach wall was visible (Fig. 1, top right and bottom left G). The boundary zone between the myocardium and stomach presented as solid white scar tissue with striated offshoots into the surrounding endocardium (Fig. 1, top left D; top right F, bottom left F).
The operative site during explantation revealed a dilatation of the transplanted stomach. The surrounding thoracic organs showed no pathological changes. In one animal (group 6M), there was a thrombus alongside an aneurysm.
Histology and Immunohistochemistry
The explanted tissues of the control group revealed a foreign body reaction to the synthetic fabric. Fiber-rich connective tissue with high capillary density formed between the Dacron patches and the myocardium (online suppl. Fig. 1; for all online suppl. material, see www. karger.com/doi/10.1159/000522478). A cell-rich neointima covered the Dacron tissue.
In the experimental groups there was integration of the various tissues. The conduction system presented as Purkinje fibers, and there were a large number of capillaries in all explants (Fig. 4).
The boundary zone between the myocardium and stomach tissue was characterized by granulation and scar tissue. Thick collagen fiber bundles were adjacent to vascularized granulation tissue with macrophages and stromal cells. Neutrophilic granulocytes were visible, but not abundant in all explants. Myofibroblasts were found in the gastric patch (Fig. 4).
The transplanted stomach tissue consisted of the tunica muscularis and tunica serosa. Spindle-shaped smooth muscle cells were found in the inner and outer ring layers. Neurons of the myenteric plexus were detected (Fig. 4).
There was a continuous single-layer of endothelial cells forming a neointima on the transplanted stomach (Fig. 4). There were no signs of necrosis, degeneration, or infection.
Using double staining against cardiac troponin and the myosin of the smooth muscle cells, both muscle types were detected in all explanted samples. Stained blood vessels were ubiquitously detectable (online suppl. Fig. 2, top). Immunohistochemical staining with connexin 43 revealed the connection between cardiomyocytes and cells of the grafts (online suppl. Fig. 2, bottom).
Discussion
We introduced a procedure for a transmural left ventricular reconstruction of dysfunctional myocardium with a vascularized graft. Overall, this two-cavity intervention in the pig model represents a considerable challenge to the surgical, anesthesiological, veterinary, and technical requirements. Nevertheless, our results suggest good biological integration of the graft into the hosts' myocardium, the possibility of functional improvement, and most notably connection of the gastric vasculature to the coronary system.
Regenerative Capacity via Physiological Remodeling
Reconstructing dysfunctional heart muscle with regenerative materials is important because of the increasing incidence and prevalence of severe heart failure [9]. Several biological substrates have been assessed as regenerative patch materials [10][11][12][13][14][15]. The major motivation for the use of viable biological tissue is its potential for physiological remodeling. The connection to the electrical cardiac conduction system via gap junctions is also essential in order to synchronize the contraction phases of both the heart and graft. In our study, we observed good integration of the stomach tissue into the left ventricular myocardium. Only one animal with the series' largest aneurysmal formation of the graft showed a thrombus in the lumen of the transplanted gastric tissue. Blood that stagnated in this pronounced aneurysm is likely the cause for the thrombus.
We were able to observe the pulse-synchronized movement of stomach tissue in the MRI and upon explantation of the heart. It remains unclear whether the smooth muscle cells of the tunica muscularis of the stomach took over a rhythmic contractile function. Nevertheless, Ota et al. [16] were also able to measure moderate electrical activity from porcine urinary bladder after it was used to cover defects in the right ventricle in pigs.
Mechanical Stability and Function
Most biological tissue grafts are unable to withstand the high pressure loads present in the left ventricle. Thus, most groups employ biological grafts only for reconstructive surgery of the right ventricle or atrium [17]. In the current study, the feasibility of covering a transmural left ventricular myocardial defect with a piece of autologous vascularized stomach was demonstrated in our swine model. Despite dilation of the graft tissue in one animal, 5 out of 6 test animals showed no rupture of the transplanted stomach tissue. Iatrogenic damage to the stomach's tunica muscularis during removal of the tunica mucosa may have caused the covered perforation in the animal with the large aneurysm. Moreover, an improved left ventricular ejection fraction over time was detected in one animal, indicating a potential regenerative capacity of the graft.
Vascularization
A common problem of the previously tested approaches to biological myocardial prostheses is the lack of vascularization [4]. Only tissue with a thickness of up to 100 μm can be adequately supplied by diffusion [7]. Only prevascularized tissue can ensure the supply of oxygen and nutrients and the removal of metabolic products while simultaneously providing sufficient stability. In our current work, the connection of vascularized gastric tissue to the arterial and venous supply of the myocardium was demonstrated.
Ruel et al. [18] transplanted an autologous segment of the stomach transdiaphragmatically onto the ischemic myocardium and observed improved perfusion in this area. However, Ruel et al. [18] fixed the stomach tissue to the myocardium only epicardially. In our study, we were able to demonstrate the functional integration of the vascular supply of the transplanted stomach tissue by a complete transmural left ventricular wall replacement.
Limitations
The contribution of the graft's smooth muscle cells and its physiological remodeling to the heart's pump function cannot be distinguished by the results of this study. In order to statistically confirm the observed positive effect, a larger number of animals would be required. Finally, the presumed long-term superiority of regenerative myocardial prostheses over conventional methods should be evaluated in controlled comparative trials.
Conclusions
The results of this study provide evidence that an improvement in heart function through full left ventricular wall reconstruction with vascularized grafts may be feasible, yet technically demanding. However, the observed dilation of some grafts indicates that stomach tissue does not initially have the full mechanical stability required to withstand the high pressure of the left ventricle in all cases. In order to use stomach tissue as a left ventricular myocardial prosthesis, additional mechanical stabilization would improve the safety of this therapeutic approach.
Nevertheless, the improvement of cardiac function, the good physiological integration, as well as the connection of the gastric vessels with the coronary system of the hosts verify the potential of stomach tissue as a regenerative myocardial prosthesis. Finally, the use of vascularized tissue as demonstrated in this study would facilitate further tissue engineering concepts for many surgical fields, which, so far, are not clinically applicable because of the lack of sufficient vascular supply.
258631155 | pes2o/s2orc | v3-fos-license | "Small Victories of Survival in a Deeply Homophobic World": Current Realities and Paths Forward for Substance Use in the LGBTQIA+ Community
According to the National Institute on Drug Abuse, members of the LGBTQIA+ community are disproportionately impacted by problematic substance use (NIDA, 2020).
In a YouTube video by Brujas World (2019), a New York-based feminist street collective and streetwear brand, a group of young people of color stand watching a soccer game, passing around a joint. Meanwhile, a New York City Police Department watch tower looms overhead. What starts as an everyday scene of friends hanging out and playing soccer suddenly morphs into a public health announcement. A powerful voice informs viewers that "deaths due to opioid-related overdoses nearly tripled in 2015" (Brujas World, 2019) and describes a system designed to keep "poor people, gay people, sick people" punished for their looks. In the video, a player grows dizzy, and their friends run over to help; the voice urges viewers to carry harm reduction supplies: "Give them to your loved ones. Help them use them." According to the National Institute on Drug Abuse (2020), LGBTQIA+ individuals use substances at higher rates than the cisgender, heterosexual population. This paper explores the prevalence of substance use in the LGBTQIA+ community, barriers to treatment, and suggested paths forward through the lens of the minority stress model, defined by Meyer (2003) as occurring when "stigma, prejudice, and discrimination create a hostile and stressful social environment that causes mental health problems" (p. 674). This model thus posits that minority stress increases the likelihood of substance use, as well as its potential associated risks.
To better serve LGBTQIA+ individuals struggling with substance use, this paper argues that treatment approaches must evolve. Suggested approaches include trauma-informed, LGBTQIA+-specific treatment models, strategies that address co-occurring Post-Traumatic Stress Disorder (PTSD) and substance use disorder (SUD), harm reduction methods, and non-police crisis intervention. Approaches with these considerations would better support the needs and well-being of LGBTQIA+ communities.
METHODS & LIMITATIONS
Research for this article includes meta-analysis and thematic analysis of various sources from databases including the Columbia University Library, using search terms such as "behavioral healthcare," "substance use treatment," "crisis intervention," "LGBTQIA+ people of color," "harm reduction," and "minority stress." The sources comprise one pilot study, six surveys, one sample study, one systematic review, two pieces of advocacy-oriented content, four creative pieces, and one educational training video. Publication dates range from 2003 to 2023, with most from 2014 forward.
With the intent of surveying the literature, this article analyzes 20 peer-reviewed studies with evidence from 13 additional sources, such as prominent LGBTQIA+ advocacy centers, healthcare facilities, harm reduction centers, news organizations, and companies. The available sources exhibit noticeable disparities in their demographic and topical foci. Among the 20 peer-reviewed articles, 14 discussed substance use, while others explored topics such as minority stress, social services, and the health and mental health issues of these populations. Fourteen articles broadly focused on the LGBTQIA+ community, three on LGBTQIA+ youth, three on the trans population, two on the LGB population, and one on the LGBTQIA+ homeless population. Concerning racial demographics, nine sources on people of color are referenced, including three on the Black population and one on the Latinx population. Two sources pertain to substance use among people of color more broadly. Finally, eight articles discuss substance use in the LGBTQIA+ population, with one focusing on youth within that category and one solely on trans substance users.
Limitations include a lack of research on substance use treatment in LGBTQIA+ communities (Glynn & van den Berg, 2017). Alarmingly, the National Survey on Drug Use and Health does not even include sexual orientation or gender identity in demographic surveys (Glynn & van den Berg, 2017). Research on specific subpopulations, such as men and trans women of color, is practically nonexistent. In addition, there is a huge gap in research on older adults in the community (Crath et al., 2021; Vareed & Mendoza, 2019).
LGBTQIA+ HISTORY, CULTURE, AND REALITIES CONCERNING SUBSTANCE USE
In a YouTube video from an event called "HaHa Harm Reduction," Del Castillo (2017) describes an experiment in which a rat was locked in a cage and provided a water bowl containing heroin. Quickly, the rat became addicted to heroin. When the scientists took the rat out of the cage, they gave it a jungle gym to climb on, lots of space to run, food, and water, and added other rats to the area. Some rats tried heroin but remained disinterested in it; none of the rats in the second cage became addicted to heroin. In the words of Del Castillo, "Is the problem the drug, or the cage?" Del Castillo (2017) elaborates that for many LGBTQIA+ individuals, substance use is not so much about the high, but instead about "the safe haven from a hostile world that would not otherwise embrace the rainbow," a statement that illustrates an experience of minority stress. When providers are aware of their patients' gender identity or sexual orientation, the patients are more likely to build rapport with their providers and disclose health information, and the providers, in turn, are better able to deliver appropriate care. For many queer and trans people, that is a luxury. While healthcare spaces have not always provided a safe space for the LGBTQIA+ community, bars and clubs have always been a central part of the history of the LGBTQIA+ community, centered around relationships, connection, community, and chosen family. While bars and clubs can be a liberating source of joy, spaces centered around drugs and alcohol can also come with risks, especially for those with preexisting challenges related to substance use (Vareed & Mendoza, 2019). An example of this is party and play (PNP), a term describing the use of party drugs, such as crystal meth and ecstasy, during sex among men who have sex with men (Mallon, 2018).
Lesbian women also face elevated risk for developing substance use disorders. On this topic, Mallon (2018) states, "The role of oppression, being part of a marginalized population, and the importance women place on relationships are integral to understanding addiction among lesbian women" (p. 71). This suggests that lesbians may use substances as a way to relate to one another. Therefore, Mallon (2018) argues treatment interventions for people who identify as lesbians should focus on relationship development and "expression of the true self, examining both external and internal homophobia, including addressing shame or a lack of self-acceptance." Beyond the stereotyping of women, community building and authentic connection are time-honored pieces of LGBTQIA+ culture.
Unfortunately, high rates of substance use in the community are layered on collective traumas rooted in LGBTQIA+ history. Much work has been done around the trans population, for example, in the context of HIV risk due to high rates of substance use within a syndemic framework (Glynn & van den Berg, 2017). The AIDS epidemic points not only to another collective trauma but also to co-occurring illnesses with the potential to be treated together. For example, a summary of 12 studies on LGB youth informs readers that the most common risk factors for substance use include experiences of victimization, stress, and housing insecurity (Goldbach et al., 2014). These risks, and the realities facing LGBTQIA+ substance users, point toward the necessity for increased research, improved access to care, and treatment for this group.
THE PREVALENCE OF SUBSTANCE USE IN LGBTQIA+ COMMUNITIES
The National Institute on Drug Abuse (2020) elaborates that it is impossible to establish long-term trends on this topic because surveys only recently began to include gender identity and sexuality. Much research on the topic suggests that exposure to discrimination over time among people in marginalized groups leads to higher rates of mental health and substance use challenges (Glynn & van den Berg, 2017). Studies have shown that discrimination and substance use are correlated (Glynn & van den Berg, 2017). Social stigma and discrimination increase the likelihood of harassment and violence. These sources of added stress expose the community to a greater risk of behavioral health vulnerabilities (NIDA, 2020). To compound matters, a disproportionate number of LGBTQIA+ young people go without housing each year in the U.S. LGBTQIA+ youth without housing have excessive rates of substance use issues and mental health challenges, higher rates of suicidal behavior and HIV risk, and are more likely to be victims of violence (Keuroghlian et al., 2014).
Similarly, substance use is comparatively high within the trans community. Among transgender individuals, there are higher rates of use for alcohol, illicit drugs, and non-medical prescription drugs compared with the cisgender population (Glynn & van den Berg, 2017). Reasons for the higher prevalence of substance use among trans people include the prevalence of intimate partner violence, low-income status, housing instability, PTSD, and participation in sex work (Keuroghlian et al., 2014). In fact, 35% of trans people who have experienced verbal harassment in school, physical or sexual assault, or have been expelled from school report using substances as a coping mechanism for these gender-related traumas (Keuroghlian et al., 2014). Furthermore, the psychological stress of disparities in healthcare access that trans people experience is another trauma that worsens mental health and increases the likelihood of substance use. This stress also leads to decreased healthcare utilization, which puts the trans population at increased risk under the minority stress model (Keuroghlian et al., 2014).
FURTHER DISPARITIES WITHIN LGBTQIA+ SUBSTANCE USE RESEARCH
Despite well-documented disparities, research on the mental health outcomes of LGBTQIA+ people of color lacks nuance and heterogeneity, with many studies grouping people of color into one singular group or looking only at Black and Hispanic populations (Allen & Leslie, 2020; Eisenburg et al., 2022). However, people of color in the LGBTQIA+ community face distinct, intersecting experiences. For example, Drazdowski et al.'s (2020) study surveyed 200 LGBTQIA+ people of color about their experiences with racism, LGBTQIA+ discrimination, and substance use. The study found that being both a person of color and LGBTQIA+ puts one at a higher likelihood of using all researched types of "illicit drugs," disaggregating data based on experiences of internalized racism, homophobia, and discrimination based on both identity groups (Drazdowski et al., 2020). Eisenburg et al.'s (2022) study displays that Latinx and Black trans youth have the highest prevalence of substance misuse within their age group. The experiences of multiple marginalizations and minority stress, including racism from within the LGBTQIA+ community, are likely to impact the prevalence of service utilization and completion (Cyrus, 2017). Therefore, a more thorough analysis of varied racial groups' substance use trends, treatment access, and treatment outcomes may help improve health outcomes for those from diverse cultures and experiences. While advocacy groups like the Trevor Project and the aforementioned researchers are working toward expanding the research and data on this topic, the absence of earlier research suggests there is still a long way to go (2022 National Survey on LGBTQ Youth Mental Health, 2022; "Substance Use and Suicide Risk Among LGBTQ Youth," 2022).
ACCESS TO SUBSTANCE USE TREATMENT IN THE LGBTQIA+ COMMUNITY
The reasons outlined above demonstrate the necessity of trauma-informed, community-based, holistic, person-in-environment centered treatment modalities for substance use in LGBTQIA+ populations.
Despite the clear need for these services, culturally competent substance use treatment remains scarce (Williams & Fish, 2020). Little research examines whether LGBTQIA+-specific modalities have improved treatment outcomes compared to general programming. One exception is a study which examined the outcomes of participants in an Austin, Texas-based recovery housing facility for men who have sex with men (Mericle et al., 2020). The study displayed that relief from minority stress supported recovery. LGBTQIA+ individuals have lower completion and abstinence rates on average in substance use treatment than their cisgender, heterosexual peers, and several factors complicate the potential for success in traditional substance use-related services. Twelve-step programs, such as Alcoholics Anonymous (AA), have higher success rates among those who identify as part of the group and believe in a higher power (Vareed & Mendoza, 2019). Since LGBTQIA+ people might be more uncomfortable with the religious aspect of 12-step programs due to the fear of certain religious groups displaying homophobia or transphobia (Vareed & Mendoza, 2019), they may be better served by secular or LGBTQIA+-affirming groups.
"AA is not the only model that responds to alcoholism.Scholars of the history of the Alcoholics Anonymous program have pointed out that the program often eclipses harm reduction approaches.Even as I dream of the abundance of those options," Jain adds, "I believe in that meeting.In the embodied warmth of the church room in Oakland, in the happy existing harm reduction services in their area were usually inaccessible, unsafe, and a space where they experienced judgment from providers (Goodyear et al., 2021).Many participants feared they would face drug sometimes chose not to use drug-checking services, which screen for the presence of risky substances, including fentanyl, because of the concern that the police would stop them (Goodyear et al., 2021).Since harassment, drug criminalization is a massive issue for the LGBTQIA+ population, especially for people of color, who are even more at risk of police harassment and violence (Goodyear et al., 2021).Professionals accessible substance-use services.
BEST TREATMENT PRACTICES

Several approaches have been shown to improve treatment utilization and outcomes in the LGBTQIA+ population, drawing on the community's historical and cultural themes and on treatment access trends. The models discussed include implementing integrated behavioral healthcare to improve access and utilization of care, implementing treatment models tailored for the trans population to address disparities, and providing treatment for co-occurring PTSD and SUD, as these disorders are prevalent in the community. Further suggestions include harm reduction approaches that account for realities within the LGBTQIA+ community and putting into action means of crisis de-escalation outside of policing.
BEHAVIORAL HEALTH INTEGRATION
According to the Integration Academy, "Integrated behavioral health care blends care in one setting for medical conditions and related behavioral health factors" ("What is Integrated Behavioral Health? [WIBH?]," n.d., para. 2). When working within an integrated care model, providers must recognize that physical and behavioral health are interrelated and that clinicians working on both sides of the healthcare sphere must work together to treat patients and help them meet their health goals ("WIBH?"). This convenience makes it easier for patients to access behavioral healthcare treatment. However, most healthcare professionals have not received training to work in that system ("WIBH?"). In the highest level of integrated care, there is complete collaboration between providers in a merged practice within the same building (Keuroghlian, n.d.). Advocating for more training is imperative in order to reduce the disproportionate risk of substance use. Fenway Health, a Boston-based LGBTQIA+-focused healthcare center, explains that Fenway's integrative behavioral healthcare improves the patient experience because its holistic approach reduces stigma around substance use and mental health while simultaneously improving access to treatment and reducing healthcare costs. In addition, Keuroghlian (n.d.) notes that this model has positively impacted outcomes.
TRAUMA-INFORMED, LGBTQIA+-SPECIFIC TREATMENT MODELS
The literature broadly suggests a person-in-environment model that is holistic and also trauma-informed is the best course of action. Due to the disproportionate rates of substance use and lack of access among trans individuals, this section will focus on treatment models for trans substance users. As a treatment model, Behavioral Health Integration for this population should take place in an environment tailored for the trans community. Providers need to be aware of the minority stress model and implement a trauma-informed framework that centers on the realities faced by people impacted by minority stress and that highlights the strengths of the trans community. Since trans individuals are more susceptible to having a background of trauma associated with violence compared to the cisgender, heterosexual population, adopting trauma-informed practices is critical in mitigating the likelihood of substance use relapse (Vareed & Mendoza, 2019). Hence, interventions should celebrate identity.
TREATMENT OF CO-OCCURRING PTSD AND SUBSTANCE USE: THE SEEKING SAFETY STUDY
Fostering relationships and community while acknowledging and mitigating the impacts of the societal "cage" of transphobia and homophobia can be essential in preventing substance misuse. Treatment that addresses both substance use disorders and PTSD is impactful in improving both diagnoses (Keuroghlian, n.d.). A 2017 study called Seeking Safety sought to address substance use through a holistic model (Empson et al., 2017). Seeking Safety is a treatment program that uses cognitive behavioral therapy for co-occurring PTSD and substance use disorder. It was tested in 12 sessions with a group of women of trans experience, who showed improvements in PTSD symptoms, alcoholism, and substance use (Empson et al., 2017). This study shows the importance of confronting substance use in the trans community holistically, in line with the concept of integrated behavioral healthcare (Empson et al., 2017).
HARM REDUCTION
Harm Reduction International defines harm reduction as "policies, programmes, and practices that aim to minimize negative health, social and legal impacts associated with drug use, drug policies and drug laws" ("What is Harm Reduction?", para. 1). Harm reduction is a rights-based approach that focuses on support without discrimination. This philosophy implies that models which do not prescribe harm-reduction strategies may involve discrimination, which helps explain why marginalized communities have embraced it. Soprano, one of the producers of the Brujas World video, does this well with his production of harm reduction kits, which include practical tools for safer sex and drug use as well as more artistic items (Kuwabara Blanchard, 2020, para. 6). According to Soprano, "so much of harm reduction practices and theories came out of sex working communities, people who are chemically dependent, sick and disabled people, and communities of care made up of gay men of color and trans women of color" (Kuwabara Blanchard, 2020, para. 5). Soprano expands by asking, "What if the kit was both a piece of utility and a piece of political propaganda?" (Kuwabara Blanchard, 2020, para. 5). Such creative approaches to harm reduction may reduce stigma and increase service utilization.
Another group which focuses on harm reduction education is Queer Appalachia (Worlley, n.d.). Their website explains that, given "the disheartening and exponentially increasing rate of opioid use" in the region, the lack of accessible resources "to recover only further exacerbates these experiences" (Worlley, n.d.). Existing systems have fallen short in providing impactful and accessible services when it comes to substance abuse treatment, especially if the intersection of race is considered (Drazdowski et al., 2022). For this reason, communities have turned to harm reduction and mutual aid practices to support their loved ones and community members in a way that does not rely on government support.
CRISIS INTERVENTION OUTSIDE OF POLICING
The criminalization of substance use is intrinsically linked to the history of racism in the U.S., with disproportionate negative impacts on people of color. Moreover, as previously discussed, there is a collective trauma associated with police violence in the LGBTQIA+ community. Hence, building methods of crisis intervention that exist outside of the policing and carceral systems is another critical next step in supporting LGBTQIA+ people who use substances (Alang et al., 2017; Atlas, 2021; Bor et al., 2018; Goodyear & Knight, 2021). For example, the Crisis Assistance Helping Out On The Streets (CAHOOTS) program in Eugene, Oregon has proven impactful among the general population, succeeding not only in de-escalating crises but also in reducing costs, with police backup required on only about 1% of calls ("Cahoots Media Guide," 2020). Implementing this model in more communities could increase access to care by drawing a direct line between communities and behavioral health providers, instead of between substance users and the carceral system.
CONCLUSION
Culturally affirming care is associated with improved access to substance use-related services and higher rates of service utilization, help-seeking behaviors, and treatment completion (Vareed & Mendoza, 2019). From literature assessment, historical and cultural factors, and statistics, this paper concludes that while more research and funding are certainly needed to support this vital issue, service models must additionally be rethought to best support LGBTQIA+ communities. Approaches that mitigate the impacts of minority stress on those in the LGBTQIA+ community who use drugs include integrated behavioral healthcare, trauma-informed treatment models, cognitive behavioral therapy focusing on co-occurring PTSD and substance use disorder, harm reduction, and crisis intervention outside of policing. What might it look like to build models of care for alcohol abuse with the LGBTQIA+ community in mind? Models that recognize the interconnectedness of social marginalization and alcohol abuse instead of pathologizing alcoholism? That commemorate the small victories of survival in a deeply homophobic world? That help people accept and even celebrate that sometimes, you have to hide parts of yourself? (para. 15) | 2023-05-12T15:08:34.597Z | 2023-05-10T00:00:00.000 | {
"year": 2023,
"sha1": "b5f445b735042b9bb14f441e2599ce25579134ee",
"oa_license": "CCBY",
"oa_url": "https://journals.library.columbia.edu/index.php/cswr/article/download/11206/5557",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "25c27d2055d96c80b3f9fc78aa9b4a4bc1aeb0d0",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
49862718 | pes2o/s2orc | v3-fos-license | Safe Margin beyond Dens Tips to Ventral Dura in Anterior Odontoid Screw Fixation: Analysis of Three-Dimensional Computed Tomography Scan of Odontoid Process
Objective Anterior odontoid screw fixation is a safe and effective method for the treatment of odontoid fractures. The recommended surgical technique involves perforation of the apical cortex of the dens by the lag screw. However, overpenetration of the apical cortex may lead to potentially serious complications such as damage to adjacent vascular and neural structures. The purpose of this study was to assess the role of three-dimensional computed tomography (CT) scans in evaluating the safe margin beyond the dens tip to the ventral dura for anterior odontoid screw fixation. Methods We retrospectively analyzed the three-dimensional CT scans of the cervical spines of 55 consecutive patients at our trauma center. The patients included 38 males and 17 females aged between 22 and 73 years (mean age±standard deviation, 45.8±14.2 years). Using sagittal images of the 3-dimensional CT scans, the safe margins beyond the dens tip to the ventral dura as well as the appropriate screw length were measured. Results The mean width of the apical dens tip was 9.6±1.1 mm. The mean lengths from the screw entry point to the apical dens tip and the posterior end of the dens tip were 39.2±2.6 mm and 36.6±2.4 mm, respectively. The safe margin beyond the apical dens tip to the ventral dura was 7.7±1.7 mm. However, the safe margin beyond the posterior end of the dens tip to the ventral dura decreased to 2.1±3.2 mm, which was statistically significant (p<0.01). There were no significant differences in safe margins beyond the dens tip to the ventral dura with respect to patient gender or age. Conclusion Extension by several millimeters beyond the dens tip is safe if the trajectory of the anterior odontoid screw is targeted at the apical dens tip. However, if the trajectory of the screw is targeted at the posterior end of the dens tip, extension beyond the dens tip may damage the immediately adjacent ventral dura mater.
INTRODUCTION
Anterior odontoid screw fixation is an ideal surgical option to stabilize type II odontoid fractures, because it provides a high union rate without limiting neck motion 1,2,4,9,10) . We have recently reported that a fracture gap of ≥2 mm resulted in a 21-fold increase in nonunion rates after anterior odontoid screw fixation 7) . Therefore, in the case of type II odontoid fractures, we believe that the fracture gap is one of the important risk factors for successful bony union 7,14) . Although cannulated lag screws or headless compression screws are frequently used to compress and tighten the fracture gap during anterior odontoid screw fixation, it may not be easy to achieve effective reduction 7,10,15,16) . To enhance the lag effect during inter-fragmentary compression, the generally recommended surgical technique is to perforate the apical dens tip with the screw and to size the implant correctly to obtain bicortical purchase 1-3,5,6) .
However, cortical purchase by the cannulated screw may be technically challenging, because the brain stem is located just beyond the apical cortex of the odontoid process, and any overpenetration of the apical cortex by the screw may damage the vertebral artery and neural tissue 2,14,17) . To our knowledge, no previous study has analyzed the distance from the apical dens tip to the adjacent neural structure.
The purpose of this study was to analyze the three-dimensional computed tomography (CT) scans to evaluate the distance from the apical dens tip to the adjacent neural structure through the trajectory of the anterior odontoid screw, and demonstrate the safe margin from the dens tip to the adjacent neural structures during anterior odontoid screw fixation.
MATERIALS AND METHODS
We retrospectively analyzed the three-dimensional CT scans of the cervical spine in 55 consecutive patients, who were scanned at our trauma center after traffic accidents. None of the patients sustained any fracture of the cervical spine. After appropriate Institutional Review Board approval (2016-07-003), we retrospectively reviewed and analyzed the patient medical charts and their sagittal reconstructions of the 3-dimensional CT scans in order to measure the width of the apical dens tip, length for anterior odontoid screw, and safe margin beyond dens tip to ventral dura.
The entry point for the anterior odontoid screw was marked at the anterior-inferior portion of the body of the axis on the sagittal CT images. The apex of the dens tip and the posterior end of the dens tip were marked as well (Fig. 1). The width of the apical dens tip was measured horizontally at the middle of the anterior arch of C1 (Fig. 2A). The mean lengths from the entry point to the apex of the dens tip and from the entry point to the posterior end of the dens tip were measured, respectively (Fig. 2B). The safe margin was defined as the distance from the dens tip to the adjacent dura mater. The safe margins beyond the apical dens tip to the ventral dura and beyond the posterior end of the dens tip to the ventral dura were measured, respectively (Fig. 3). All parameters were measured using the automated measuring tool of the PACS system (PiViewSTAR 5.0; INFINITT, Seoul, Korea). All values were measured in triplicate and averaged.
Statistical analysis
All statistical analyses were performed using Statistical Package for the Social Sciences software version 18 (SPSS Inc., Chicago, IL, USA). All data were expressed as mean±standard deviation. Significant differences in safe margin between the apex of the dens tip and the posterior end of the dens tip were determined using the independent t-test. The threshold for statistical significance was set at p<0.05. For analysis, patients were dichotomized by gender (male and female) and age (younger than 60 years versus 60 years or older).
RESULTS
There were 38 male and 17 female patients with ages ranging from 22 to 73 years (mean age±standard deviation, 45.8±14.2 years). The mean width of the apical dens tip was 9.6±1.1 mm. The mean lengths from the screw entry point to the apical dens tip and the posterior end of the dens tip were 39.2±2.6 mm and 36.6±2.4 mm, respectively. The safe margin beyond the apex of the dens tip to the ventral dura was 7.7±1.7 mm. However, the safe margin beyond the posterior end of the dens tip to the ventral dura decreased to 2.1±3.2 mm, a statistically significant difference (p<0.01). There were no statistically significant differences in safe margins beyond the dens tip to the ventral dura between patient genders or age groups (Table 1).
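The reported group difference can be sanity-checked directly from these summary statistics. The sketch below is a rough illustration, not the authors' SPSS analysis: it computes Welch's t statistic (a variant of the independent t-test that does not assume equal variances) from the reported means, standard deviations, and sample size, assuming both margins were measured in all 55 patients.

```python
import math

def welch_t(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Welch's independent-samples t statistic from summary statistics."""
    standard_error = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return (mean_a - mean_b) / standard_error

# Reported safe margins (mm): apical dens tip vs. posterior end of dens tip
t = welch_t(7.7, 1.7, 55, 2.1, 3.2, 55)
print(round(t, 1))  # about 11.5, far beyond the ~2.6 critical value for p < 0.01
```

With roughly 80 degrees of freedom under the Welch approximation, a t statistic of this magnitude is consistent with the paper's reported p<0.01.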
DISCUSSION
Although the risk factors for nonunion after anterior odontoid screw fixation are disputed, the fracture gap is generally considered one of the significant risk factors 7,8,10-13) . In our previous study, a fracture gap ≥2 mm was found to be significantly associated with fusion failure after anterior odontoid screw fixation; therefore, we also believe the gap of the odontoid fracture is one of the important risk factors for successful bony union 7,14) . Therefore, if the surgical technique can reduce the fracture gap, we can achieve higher rates of bone fusion after the operation.
From an orthopedic perspective, inter-fragmentary compression and stable fixation are essential for successful bone fusion, and the pressure at the fractured edges enhances fracture healing in a predictable manner 1,5,7,8,13) . Therefore, appropriate instrumentation and surgical techniques are needed for the reduction of fracture gap during anterior odontoid screw fixation.
In general, cannulated lag screws or headless Herbert screws are used to compress and tighten the fracture gap during anterior odontoid screw fixation 7,10,14,16) . Theoretically, after the lag screw or headless Herbert screw crosses the fracture line, the threads engage the fragment and the lag effect of the screw reduces the fracture gap. Further tightening of the screw increases the pull force on the fractured piece in a caudal direction, increasing inter-fragmentary compression, which results in enhanced fracture healing 1,4,7,8,10,11) . Penetration of the apical dens by the screw is recommended for sufficient reduction of the fracture gap via the lag effect 1,2) . We fully agree with the importance of bicortical purchase by the screw.
In our recent biomechanical study, we reported that interfragmentary compression pressures of the Herbert screw were significantly increased when the screw tip penetrated the opposite dens. Our findings support the generally recommended surgical technique, and penetration of the apical dens tip with the screw is essential to facilitate the maximal reduction of the fracture gap during anterior odontoid screw fixation 14) .
Accurate screw length is essential for anterior odontoid screw fixation. Screws that are too short do not penetrate the odontoid tip and cannot pull the fractured odontoid fragment for inter-fragmentary compression. In our study, the mean screw lengths from the screw entry point to the apical dens tip and the posterior end of the dens tip were 39.2±2.6 mm and 36.6±2.4 mm, respectively, consistent with a previous study 17) . These values represent the real screw lengths for anterior odontoid screw fixation, measured along the line from the real screw entry point (anterior-inferior point of the body of the axis) to the odontoid tip. Therefore, screws several millimeters longer than the real screw length must be used to penetrate the odontoid tip.
However, cortical purchase by the screw may be technically challenging and stressful to the surgeon, because the brain stem is located slightly beyond the apical cortex of the dens; therefore, over-penetration of the apical cortex by the screw, as well as by the K-wire or tapping of the pilot hole, may damage the adjacent neurovascular structures. In our previous study, despite all our efforts to penetrate the apical dens tip with the Herbert screw, we achieved cortical purchase by the screw in only 16 out of 37 patients (unpublished data). This result was attributed to the fear of damaging the dura immediately beyond the apical dens tip during anterior odontoid screw fixation. Therefore, it is important to determine preoperatively the proximity of the apical dens tip to the adjacent dura.
The distance from the apical dens tip to the adjacent neural structures has not been reported previously. In the present study, the apical dens tip was sufficiently wide (mean width 9.6±1.1 mm), and the screw trajectory was aimed at two different target points on the dens tip: the apical dens tip and the posterior end of the dens tip. The distance from the apical dens tip to the ventral dura was 7.7±1.7 mm, suggesting that the safe margin is relatively wide if the screw trajectory is aimed at the apical dens tip. However, if the screw trajectory is aimed at the posterior end of the dens tip, the safe margin beyond the posterior end of the dens tip to the ventral dura decreases to 2.1±3.2 mm, which suggests that this trajectory carries a relatively higher risk of dural injury. The study limitations are as follows. First, we analyzed the safe margin only in patients without odontoid fracture. In the case of odontoid fracture, the real safe margin from the apical dens tip to the ventral dura can be altered by fracture displacement and the fracture gap. However, in cases of odontoid fracture with severe displacement or a wide fracture gap, preoperative reduction by proper surgical positioning is essential for anterior odontoid screw fixation, so we considered the measurement of the safe margin in patients without fracture to be meaningful. Nevertheless, fracture displacement and the fracture gap of odontoid fractures, as well as the surgical position of the patient during the operation, can influence the measurement of the real screw length and safe margin. As mentioned before, the correct screw length is also important for penetration of the dens tip by the screw. Second, because the dural sac is filled with cerebrospinal fluid, the dura does not directly contact the brain and vessels. Thus the distance from the dens tip to the ventral dura cannot determine the real risk to the adjacent neurovascular structures.
However, our findings highlight the proximity of the apical dens tip to the adjacent dura, and the safe margin should be tailored according to the morphology of the fracture.
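The safe-margin contrast between the two trajectories can be made concrete with a small calculation, assuming the reported distances are approximately normally distributed (an assumption made here for illustration only, not stated in the study):

```python
def conservative_margin_mm(mean_mm, sd_mm, z=1.96):
    """Lower bound of the dens-tip-to-dura distance covering ~97.5% of
    patients, under an illustrative normal-distribution assumption."""
    return mean_mm - z * sd_mm

# Reported distances from the study (mean +/- SD, in mm)
apical_tip = conservative_margin_mm(7.7, 1.7)     # trajectory at apical dens tip
posterior_end = conservative_margin_mm(2.1, 3.2)  # trajectory at posterior end

# A positive lower bound leaves room for a few millimeters of screw
# penetration; a negative one means some patients may have no margin at all.
print(f"apical tip: {apical_tip:.1f} mm, posterior end: {posterior_end:.1f} mm")
```

Under this assumption the lower bound stays positive (about 4.4 mm) for the apical-tip trajectory but becomes negative for the posterior-end trajectory, consistent with the conclusion that only the apical-tip trajectory tolerates extension beyond the dens tip.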
CONCLUSION
If the trajectory of the anterior odontoid screw is targeted at the apical dens tip, it is safe to extend beyond the dens tip by several millimeters. However, if the trajectory of the screw is targeted at the posterior end of the dens tip, extension beyond the dens tip may damage the area immediately adjacent to the ventral dura mater.
CONFLICTS OF INTEREST
No potential conflict of interest relevant to this article was reported.
INFORMED CONSENT
Informed consent was obtained from all individual participants included in this study.
Epiretinal membrane surgery challenges and results in patients with premium intraocular lens
DOI: 10.4328/ACAM.20565 Received: 2021-03-02 Accepted: 2021-08-14 Published Online: 2021-08-17 Printed: 2021-09-01 Ann Clin Anal Med 2021;12(9):1021-1025
Corresponding Author: Berkay Akmaz, Department of Ophthalmology, Manisa City Hospital, Manisa, Turkey. E-mail: berkayakmaz@hotmail.com P: +90 506 917 11 37 ORCID ID: https://orcid.org/0000-0003-1852-9474
Abstract
Aim: In this study, we aimed to evaluate the challenges and results of epiretinal membrane surgery in patients with premium intraocular lenses, performed by an experienced surgeon in our clinic, and to compare them with the current literature.
Material and Methods: In this retrospective study, 75 patients who underwent vitrectomy by a single surgeon were included. All patients had previously undergone phaco + IOL implantation. Patients were divided into three groups according to the type of intraocular lens (Group 1: monofocal, Group 2: bifocal, Group 3: trifocal). Surgery time, retinal nipping, best-corrected visual acuity (BCVA), and central macular thickness (CMT) were analyzed among the three groups (pre-op, post-op 6th month, and post-op 1st year).
Results: Compared to the monofocal IOL group, surgery time and the number of retinal nippings were significantly increased in the bifocal and trifocal IOL groups (p<0.001). In addition, there was a significant positive correlation between surgery time and retinal nipping (p<0.001, r: 0.371**). When the pre-op, post-op 6th month, and post-op 1st year logMAR (logarithm of the minimum angle of resolution) visual acuity values were compared within the groups, the post-op 6th month and post-op 1st year values had improved statistically significantly compared to pre-op logMAR (p<0.001).
Discussion: Premium lenses prolong the surgery time during vitreoretinal surgery. Since premium IOLs negatively affect visual acuity, they should not be recommended to patients with retinal disease.
However, with careful preoperative planning, proactive familiarity with these premium IOLs, and proper communication with patients, retinal surgeons do not need to fear these sophisticated lenses.
Introduction
Cataracts are the leading cause of blindness in underdeveloped countries. After cataract surgery, vision is restored by removing the lens and replacing it with an intraocular lens (IOL). The majority of IOLs implanted worldwide are monofocal IOLs, which are designed to set the lens dioptric power to a single focal point; they provide satisfactory distance vision but require glasses for near vision [1]. Therefore, a wide variety of designs and optical properties have been developed to overcome this obstacle. Multifocal IOLs, developed over the past 20 years, can now supply high levels of uncorrected vision for both near visual tasks and distance. Modern multifocal IOLs provide independence from spectacles for most patients with refractive lens exchange (RLE) and cataracts, and patients have been very pleased with the new generation of lenses. However, these lenses have some disadvantages alongside their advantages; one of them becomes apparent when a patient with a premium lens later requires vitreoretinal surgery. One of the conditions requiring vitreoretinal surgery is the formation of an epiretinal membrane (ERM), which is seen mostly in elderly patients. The ERM is an avascular, fibrocellular structure formed by proliferation on the inner surface of the internal limiting membrane (ILM) and causes varying levels of visual impairment [2]. ERMs that develop in normal eyes, apart from the detection of a posterior vitreous detachment, are called idiopathic [2]. The mean age at ERM diagnosis is 65 years. The incidence of idiopathic ERM is 5.8%. Its incidence is equal in men and women, and it is bilateral in 20-30% of cases [2,3]. In the literature, it has been reported that multifocal IOLs cause imaging difficulties during vitrectomy for retinal detachment and epiretinal membrane (ERM) peeling [4].
On the other hand, normal imaging with multifocal IOLs during PPV has also been reported [5]. In patients with a premium intraocular lens, problems such as difficulty focusing on the membrane and defocus occur during ERM peeling surgery when the optic zones of the lens coincide with the optic axis of the surgeon. There are a limited number of studies in the literature evaluating visual outcomes of ERM surgery in patients with multifocal IOLs and other macular diseases [6,7]. However, there are very few human studies in the literature evaluating visual results of ERM surgery in patients with premium intraocular lenses. In this respect, our study is important in terms of contributing to the literature. In this study, we aimed to evaluate the challenges and results of epiretinal membrane surgery in patients with premium intraocular lenses, performed by an experienced surgeon in our clinic, and to compare them with the current literature.
Material and Methods
This study was conducted at Katip Celebi University Ataturk Education and Research Hospital with the permission of the Ethics Committee of the Department of Ophthalmology. The medical files of 75 patients, all of whom were pseudophakic and underwent vitreoretinal surgery at the West Eye Institute ambulatory surgery center from March 2014 to July 2018, were analyzed retrospectively. Informed consent was obtained from all participants included in the study.
Patients and data selection
The electronic medical data of patients who had previously undergone uncomplicated phaco + IOL implantation and who underwent PPV + ERM + ILM peeling surgery for idiopathic ERM at our center were reviewed retrospectively. Inclusion criteria were as follows: a comprehensive ophthalmological examination (pre-op and post-op, documented IOL features, follow-up year), recorded ERM surgery duration, complete OCT images, and post-op follow-up of at least 1 year. Included IOLs were as follows: Alcon SA (monofocal), Zeiss AT LISA (bifocal), or Alcon PanOptix (trifocal). Exclusion criteria were ocular surface defects, corneal pathology, pupillary or other anterior segment pathology (pupil dysfunction), posterior capsulotomy or posterior capsular opacity, vitreous disorders (asteroid hyalosis), secondary ERM (traumatic or diabetic ERM), optic neuropathy, and systemic diseases (DM, HT, hyperlipidemia). The patients were divided into three groups according to the types of intraocular lenses already implanted (Group 1: monofocal, Group 2: bifocal, Group 3: trifocal). Age, gender, type of IOL, surgery time, number of retinal nippings, logMAR (logarithm of the minimum angle of resolution) values (pre-op, post-op 6th month, and post-op 1st year), and CMT (central macular thickness) values (pre-op, post-op 6th month, and post-op 1st year) were recorded.
Surgery procedure
All surgeries were performed under sub-Tenon local anesthesia using the Möller-Wedel microscope. Before the surgery, the periorbital skin and eyelids were cleaned using a 5% povidone-iodine solution, and the eyelids were carefully isolated from the surgical field. Sclerotomies were carefully placed at the 1 to 2 o'clock position for the endoillumination probe and the 10 to 11 o'clock position for the vitrectomy probe.
The infusion line was carefully placed at the 8 to 9 o'clock position for right eyes and the 3 to 4 o'clock position for left eyes. After the conjunctiva was displaced about 2 mm, the sclera was penetrated 3.5 mm posterior to the limbus at an angle of 25° to 30° with the 25-G one-step kit (Alcon Laboratories, Inc, TX, USA). All vitrectomies were performed using a Constellation Vision System (Alcon), and a noncontact lens (Eibos 90 [90 D] and SPXL [132 D], Möller-Wedel, Wedel, Germany) was used for imaging of the posterior segment throughout the surgery. The SPXL lens was used during core vitrectomy and peripheral retinal control, and the 90 D lens was used for ERM peeling during macular surgery. The working distance of the SPXL lens was 4 mm from the cornea, while that of the 90 D macular lens was 7 mm. In all cases, the posterior hyaloid was removed by triamcinolone-assisted core vitrectomy after placement of the standard 25-gauge trocars. MembraneBlue-Dual (DORC International, Zuidland, the Netherlands) was used under fluid to stain the ERM and ILM. The epiretinal membrane was peeled with the pinch-and-peel technique using 25-G intraocular forceps (DORC International, the Netherlands). The ILM was stained again with the dual dye and peeled off with the same technique. Retinal nipping was defined as involuntary pinching of the neurosensory retina with the forceps during peeling of the ERM and ILM. The retinal periphery was carefully examined with scleral indentation, and any retinal break was repaired with laser retinopexy. Fluid-air exchange (30-50%) was performed and 20% SF6 gas was injected. The trocars were removed, gas leakage was checked, and the surgery was terminated. Surgery time was measured from trocar insertion to the end of the leakage check.
IOL design
The IOLs used in the patients were a monofocal IOL (Alcon SA or Alcon IQ model; Alcon, Fort Worth, TX), the bifocal Zeiss AT LISA 809, and the diffractive aspheric trifocal Alcon PanOptix IOL (AcrySof-PanOptixTM, Alcon Laboratories, Inc., TX, USA). The AcrySof SA60AT IOL is a monofocal, anterior asymmetric biconvex, one-piece IOL with a 6 mm square-edge optic. The AT-LISA-809 (Carl Zeiss) is an aspheric diffractive (bifocal biconvex) IOL. This lens is a single-piece IOL with an overall diameter of 11.0 mm and an optic diameter of 6.0 mm. The surface is divided into phase zones and main zones; the phase zones take on the function of the steps of the main zones' diffractive power. The near addition of this lens is +3.75 D over the distance power. The AcrySof® IQ PanOptix® trifocal intraocular lens (IOL) is an ultraviolet-absorbing, foldable multifocal IOL (blue light filtering). Each IOL model is a single-piece design with a central optic and two open-loop haptics. The diffractive structure occupies the central 4.5 mm of the optic and divides the incoming light to create a +3.25 D near and a +2.17 D intermediate add power at the IOL plane.
Statistical analysis
The data were pooled, and statistical analysis was performed with SPSS v25 (SPSS Inc., USA). Snellen values were used for visual acuity and then converted to the logMAR scale for analysis. The chi-square test or Kruskal-Wallis analysis of variance (with post-hoc Bonferroni test) was used for comparisons between groups. Paired t-test analysis was used to determine before-after changes in outcome variables. P-values <0.05 were considered statistically significant.
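The Snellen-to-logMAR conversion used for the analysis is a simple base-10 logarithm of the reciprocal decimal acuity; a minimal sketch (the function name is ours, not from the study):

```python
import math

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g. 20/40) to logMAR.
    logMAR = log10(1 / decimal acuity); lower values mean better vision."""
    decimal_acuity = numerator / denominator
    return math.log10(1.0 / decimal_acuity)

print(snellen_to_logmar(20, 20))            # 20/20 vision -> 0.0
print(round(snellen_to_logmar(20, 40), 2))  # 20/40 vision -> 0.3
```

On this scale an improvement in acuity appears as a decrease in logMAR, which is why the improved post-operative acuities reported below correspond to lower logMAR values.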
Results
The records of 190 patients were screened retrospectively. A total of 125 patients were excluded from the study: 32 patients had unidentifiable IOL subtypes, 5 had traumatic ERM, 5 had corneal pathology, 23 had undergone laser capsulotomy, and 60 had systemic diseases, insufficient VA and OCT data, or insufficient follow-up time.
Demographic characteristics of patients
The sociodemographic comparison of patients is shown in Table 1. There was no statistically significant difference between the groups in terms of mean age and gender (p=0.770 and p=0.299, respectively) (Table 1).
ERM surgery time and number of retinal nippings among IOL subtypes
A comparison of patients with monofocal and multifocal (bifocal, trifocal) IOLs in terms of surgery time and the number of retinal nippings is shown in Table 2. Compared to the monofocal IOL group, surgery time was statistically significantly increased in the bifocal and trifocal IOL groups (p<0.001). Likewise, the number of retinal nippings was statistically significantly increased in the bifocal and trifocal IOL groups (p<0.001) (Table 2). In addition, there was a significant positive correlation between surgery time and retinal nipping (p=0.001, r: 0.371**).
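The reported rank correlation between surgery time and retinal nipping can be reproduced with a Spearman coefficient; the sketch below is a self-contained standard-library implementation (Pearson correlation of the ranks, with average ranks for ties), illustrative rather than the SPSS routine used in the study, and the example data are made up:

```python
import math

def _ranks(xs):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return _pearson(_ranks(x), _ranks(y))

# Hypothetical data: surgery time (min) vs. retinal nipping count per case
times = [35, 42, 48, 55, 61, 70]
nips = [0, 1, 1, 2, 2, 3]
print(round(spearman(times, nips), 3))
```

A positive coefficient of this kind is what the reported r = 0.371 expresses: longer surgeries tended to involve more nipping events.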
OCT measurements among iol subtypes
A comparison of patients with monofocal and multifocal (bifocal, trifocal) IOLs in terms of logMAR and CMT is shown in Table 3. There was a statistically significant difference between the groups in terms of pre-op, post-op 6th month, and post-op 1st year logMAR values (p=0.043, p=0.031, and p=0.016, respectively). In terms of pre-op, post-op 6th month, and post-op 1st year logMAR values, there was a statistically significant increase in the bifocal and trifocal groups compared to the monofocal group (p=0.034, p=0.012, and p=0.008, respectively) (Table 3).
Discussion
In our study, we compared our epiretinal membrane surgery results in patients who had previously undergone phacoemulsification with different types of IOL implantation.
To the best of our knowledge, this is the first such study in the published literature. We showed that surgery time increased statistically significantly in the bifocal and trifocal IOL groups. The number of retinal nippings also increased statistically significantly in the bifocal and trifocal IOL groups, and there was a significant positive correlation between surgery time and retinal nipping. Furthermore, BCVA (logMAR) was better in the monofocal group than in the multifocal groups at all time points.
A pertinent question about presbyopia correction is whether these IOLs hinder imaging during retinal work. In general, the lenses that can provoke problems are multifocal, as they have diffractive or optical zones with changing power. Various studies have reported that posterior pole imaging is comparable to that with monofocal IOLs [10], while other studies have reported the opposite [11]. A survey of nine retinal surgeons reported that a few had experienced macular visualization problems with current multifocal lenses. In the case of smaller optic designs, such as crystalline optics, the situation may be more difficult due to the rapid change in optical power encountered when the view crosses the edge of the optic [12]. In our study, the total retinal nipping count was 17 in the monofocal IOL group, 25 in the bifocal IOL group, and 24 in the trifocal IOL group; compared to the monofocal group, the number of retinal nippings was statistically significantly increased in the bifocal and trifocal groups. In addition, there was a significant positive correlation between surgery time and retinal nipping. Combined phacovitrectomy has demonstrated efficacy, and discussion of lens options (e.g., monofocal, bifocal, trifocal) is an accepted standard of care for all patients undergoing cataract extraction [14]. In our study, while patients with trifocal IOLs did not have a focusing problem in imaging the peripheral retina (similar to monofocal), focusing problems were experienced during peripheral retinal control with bifocal IOLs. However, the number of retinal nippings was increased in patients with both trifocal and bifocal IOLs. Different strategies have been applied to achieve independence from eyeglasses and better visual acuity after cataract surgery, and there are many options for intraocular lenses (IOLs).
Many studies in the literature have stated that multifocal IOLs are effective at improving near vision relative to monofocal IOLs, although there is uncertainty as to the size of the effect [16,17]. However, some studies reported similar logMAR values in both groups [17]. One study reported slightly better logMAR values in the monofocal group, and one study reported substantially better logMAR values in the multifocal group [17]. In our study, logMAR values at the post-operative 6th month and 1st year decreased statistically significantly compared to pre-operative logMAR. In other words, median visual acuity improved at post-operative month 6 and post-operative year 1 in all 3 groups. Final visual acuity (at post-operative year 1) was significantly worse in patients with multifocal lenses compared to patients with monofocal lenses. The drawbacks associated with multifocal IOL designs are loss of contrast sensitivity, an increase in higher-order aberrations, and night-time glare and halos [18,19]. In a few studies in the literature, the authors found no differences in macular thickness, retinal volume, or fundoscopic photographs between monofocal and multifocal IOLs [18,19]. Aychoua et al. showed a relevant reduction in visual sensitivity in patients with multifocal IOLs [19]. Another study reported wavy horizontal artifacts on OCT line-scanning ophthalmoscopy images in patients with multifocal IOLs [18]. In a study by Lee et al., central macular thickness significantly decreased in patients with a monofocal lens after surgical removal of an idiopathic macular epiretinal membrane [20]. In our study, comparing the pre-op, post-op 6th month, and post-op 1st year CMT values within the groups, CMT values at the post-op 6th month and post-op 1st year decreased statistically significantly compared to pre-op CMT. However, there was no statistically significant difference between the groups in terms of pre-op, post-op 6th month, and post-op 1st year CMT (central macular thickness) values. The study has some limitations.
This study was carried out by a single surgeon at a single center, without a control group, and was retrospective. Second, because of the retrospective design, post-op near vision could not be assessed. However, it should be acknowledged that patients who have a premium lens and undergo retinal surgery are difficult to find. Therefore, although it is retrospective, we think that it is a strong study in terms of the number of patients, and it can serve as a preliminary pilot study for future studies.
Conclusion
Premium lenses prolong surgery time during vitreoretinal surgery. Since premium IOLs negatively affect visual acuity, they should not be recommended for patients with retinal disease. However, with careful preoperative planning, proactive familiarity with these premium IOLs, and proper communication with patients, retinal surgeons do not need to fear these sophisticated lenses.
Identification of TRAIL-inducing compounds highlights small molecule ONC201/TIC10 as a unique anti-cancer agent that activates the TRAIL pathway
We previously reported the identification of ONC201/TIC10, a novel small molecule inducer of the human TRAIL gene that improves efficacy-limiting properties of recombinant TRAIL and is in clinical trials in advanced cancers based on its promising safety and antitumor efficacy in several preclinical models. We performed a high throughput luciferase reporter screen using the NCI Diversity Set II to identify TRAIL-inducing compounds. Small molecule-mediated induction of TRAIL reporter activity was relatively modest and the majority of the hit compounds induced low levels of TRAIL upregulation. Among the candidate TRAIL-inducing compounds, TIC9 and ONC201/TIC10 induced sustained TRAIL upregulation and apoptosis in tumor cells in vitro and in vivo. However, ONC201/TIC10 potentiated tumor cell death while sparing normal cells, unlike TIC9, and lacked genotoxicity in normal fibroblasts. Investigating the effects of TRAIL-inducing compounds on cell signaling pathways revealed that TIC9 and ONC201/TIC10, which are the most potent inducers of cell death, exclusively activate Foxo3a through inactivation of Akt/ERK to upregulate TRAIL and its pro-apoptotic death receptor DR5. These studies reveal the selective activity of ONC201/TIC10 that led to its selection as a lead compound for this novel class of antitumor agents and suggest that ONC201/TIC10 is a unique inducer of the TRAIL pathway through its concomitant regulation of the TRAIL ligand and its death receptor DR5.
Introduction
TRAIL is an endogenous protein that induces fulminant tumor-specific apoptosis through binding to the death receptors DR4 or DR5 expressed on human tumor cells [1]. TRAIL has received considerable attention since the gene was first cloned because of its therapeutic potential as a drug target for human cancer, owing to its ability to distinguish tumor from normal cells. TRAIL is naturally expressed in several human tissues, and membrane-bound TRAIL is also conditionally expressed in some immune cells following cytokine stimulation [2][3][4][5][6]. Through its expression in such cells, TRAIL plays a direct role in tumor suppression during immune surveillance, though this anticancer mechanism is lost during disease progression.
The ability of TRAIL to initiate apoptosis selectively in cancer cells has led to clinical trials with novel agents that engage the TRAIL pathway, including recombinant TRAIL and TRAIL-agonist antibodies that target DR4 or DR5 [7][8][9][10][11][12][13]. TRAIL-based experimental therapies have exhibited promising preclinical activity and safety in early phase clinical trials [14]. Nevertheless, these investigational therapies did not prove sufficiently effective in clinical trials, and the clinical development of recombinant TRAIL has been halted. While the reasons for clinical failure are not entirely clear, we and others have highlighted several undesirable drug properties that may hinder the efficacy of recombinant TRAIL, such as serum half-life, stability, and/or biodistribution.
Several experimental efforts to improve the efficacy of TRAIL-targeted therapies have been reported. Recombinant TRAIL mutants that are remarkably more stable have been identified [15], as well as variants that contain leucine or isoleucine zippers to facilitate trimerization of the soluble ligand, since receptor-bound TRAIL is trimeric [16,17]. We previously reported a novel class of DR4-targeted proteins called DR4 Atrimers that are engineered to mimic the conformation of trimeric TRAIL bound to DR4 using a stable tetranectin scaffold [18]. Mesenchymal stem cells overexpressing TRAIL have been described in preclinical studies to improve the biodistribution of TRAIL and enable activity against glioma, since the available TRAIL-based therapies do not cross the blood-brain barrier [19]. In vitro characterization and structure-activity relationships of small molecules that induce DR5 clustering and activation have also been reported [20].
TRAIL is a robust and selective tumor suppressor that offers itself as an attractive natural drug target to restore anti-tumor immunity. We hypothesized that upregulation of TRAIL expression by a small molecule would provide a potent and novel anti-tumor mechanism by improving the suboptimal drug properties of recombinant TRAIL. Regulation of the TRAIL gene has been described for several transcription factors [21], most of which are tumor suppressors such as p53 [22] and Foxo3a [23]. We explicitly selected for TRAIL-inducing compounds that upregulate TRAIL gene transcription through a mechanism that does not rely on p53, owing to its frequent inactivation in late stage cancers, which causes resistance to many standard-of-care therapies [24]. To identify small molecule p53-independent inducers of the human TRAIL gene, we conducted a small molecule library screen using the NCI Diversity Set II. The screen was conducted in HCT116 cells lacking the Bax gene, which renders the cells TRAIL-resistant to allow for assay readout [25], and stably expressing a luciferase reporter of the human TRAIL gene promoter. Here we describe the preclinical studies that led to the selection of ONC201/TIC10 as the lead TRAIL-inducing compound, which we previously reported as a novel and potent antitumor agent and which has entered phase I clinical trials in advanced cancers [26].
Screening for small molecules that induce TRAIL gene promoter activity
We transiently transfected HCT116 Bax-null cells with a luciferase gene reporter construct under transcriptional control of the first 504 base pairs of the human TRAIL gene promoter and selected for clones with stable expression. The NCI Diversity Set II was tested at 20 nM, 200 nM, 500 nM, and 1 μM using this cell-based reporter system at 12, 24, 36, and 48 hours post-treatment. Overall, the small molecule library produced relatively modest changes in TRAIL gene reporter activity, with most molecules causing a decrease in reporter activity due to cytotoxic effects or repression of the reporter (Figure 1a). We selected 29 compounds showing >1.4-fold induction of reporter activity for further study. Normalizing for the change in cell viability, we found that 10 of these 29 compounds upregulated TRAIL reporter activity >2-fold under at least 2 of the tested conditions, with most of this activity at a dose of 1 μM (Figure 1b-c).
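The hit-selection logic described above (normalize reporter signal by cell viability, then require at least 2-fold induction under at least two tested conditions) can be sketched as follows; the compound names and raw numbers here are invented for illustration, not data from the screen:

```python
def viability_normalized_fold(reporter, viability, reporter_ctrl, viability_ctrl):
    """Fold induction of reporter activity after normalizing each well
    by its cell viability, relative to the vehicle control."""
    return (reporter / viability) / (reporter_ctrl / viability_ctrl)

def select_hits(screen, threshold=2.0, min_conditions=2):
    """screen maps compound -> list of (reporter, viability) tuples across
    doses/time points; the control well is stored under the key 'DMSO'."""
    r0, v0 = screen["DMSO"][0]
    hits = []
    for compound, wells in screen.items():
        if compound == "DMSO":
            continue
        folds = [viability_normalized_fold(r, v, r0, v0) for r, v in wells]
        if sum(f >= threshold for f in folds) >= min_conditions:
            hits.append(compound)
    return hits

# Hypothetical raw data: (luciferase counts, viability fraction)
screen = {
    "DMSO": [(1000, 1.0)],
    "TIC-A": [(2600, 0.9), (2300, 1.0), (1100, 1.0)],  # >=2-fold in 2 conditions
    "TIC-B": [(2500, 1.0), (900, 0.9), (800, 1.0)],    # >=2-fold in only 1
}
print(select_hits(screen))  # -> ['TIC-A']
```

Dividing the luciferase signal by viability before computing fold change keeps a cytotoxic compound that kills reporter-expressing cells from masking genuine promoter induction.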
TIC9 and ONC201/TIC10 induce TRAIL and apoptosis in vivo
Nine of these 10 compounds that induce TRAIL gene promoter reporter activity were selected for further characterization, as TIC3 was unavailable at the time of study (Figure 1d). Interestingly, TIC9 is breflate, the prodrug of the small molecule brefeldin A, a classic ER stress inducer. In general, these 9 small molecules stimulated TRAIL gene promoter reporter activity in a dose-dependent manner and at time points ≥24 hours post-treatment (Figure 2a). RT-qPCR analysis of p53-deficient HCT116 cells revealed that TIC4, TIC8, TIC9, and ONC201/TIC10 were capable of upregulating TRAIL messenger RNA levels under the tested conditions in a p53-deficient background (Figure 2b). Next we assessed the capability of these molecules to upregulate TRAIL on the surface of tumor cells. TIC9 and ONC201/TIC10 were the only compounds capable of upregulating TRAIL protein at the surface of HCT116 cells under the tested conditions (Figure 2c).
The ability of TIC9 and ONC201/TIC10 to induce TRAIL in vivo was assessed in subcutaneous xenografts of HCT116 cells following a single intraperitoneal dose. Analysis of messenger RNA harvested from tumor xenografts showed that TRAIL transcript levels were significantly elevated by both TIC9 and ONC201/TIC10 compared to vehicle treatment or TIC4 treatment, which was used as a negative control comparator (Figure 3a). Immunohistochemical analysis of treated tumor xenografts indicated that TIC9 and ONC201/TIC10 also elevate TRAIL protein levels in vivo (Figure 3b). Assessment of apoptosis in treated tumor xenografts revealed that TIC9 induces cell death as soon as the first day following treatment, in contrast with ONC201/TIC10, which induces high levels of apoptosis at day 3 (Figure 3c-d). Interestingly, TIC4 also induces significant levels of apoptosis in a time-dependent manner, which suggests the molecule may have TRAIL-independent pro-apoptotic effects on tumors in vivo. Based on the ability of TIC9 and ONC201/TIC10 to induce TRAIL and apoptosis in tumor xenografts, we performed an in vivo study to assess the antitumor activity of these two small molecules by bioluminescent imaging of luciferase-infected HCT116 subcutaneous xenografts. A single dose of ONC201/TIC10 potently inhibited the tumor signal (Figure 3e). TIC9 also inhibited the bioluminescent signal of the tumor compared to vehicle-treated xenografts but was inferior to ONC201/TIC10 under the tested conditions.
Selection of ONC201/TIC10 as a lead TRAIL-inducing compound
A time-course analysis of cell death in HCT116 cells indicated that TIC9 and ONC201/TIC10 were the only compounds capable of inducing cell death. In accordance with our other studies, ONC201/TIC10 induced a delayed and modest amount of cell death that was apparent at 72 hours post-treatment with a 1 μM dose (Figure 4a-b). Interestingly, TIC4 did not induce tumor cell death under these conditions despite the observation that the molecule induced apoptosis in vivo. Together, these observations suggest that TIC4 may have TRAIL-independent apoptotic activity that depends on physiological factors not present in vitro or requires higher doses. Assessing slightly higher doses of TRAIL-inducing compounds in the low micromolar range revealed that both TIC9 and ONC201/TIC10 are capable of inducing high levels of Sub-G1 DNA content in p53-deficient human tumor cells (Figure 4c). Assaying the effects of the same cytotoxic dose on normal human fibroblasts revealed that TIC9 also induced significant levels of cell death in these cells (Figure 4d). However, ONC201/TIC10 did not induce any appreciable cell death in normal cells under the same conditions that were cytotoxic to tumor cells, suggesting a favorable therapeutic window.
Parallel cell viability assays with normal and tumor cells confirmed that ONC201/TIC10 has a wide therapeutic window. ONC201/TIC10 eliminated tumor cells in vitro with a dose-response relationship that was steep, saturable, and much more potent against tumor cells than normal cells (Figure 4e). Despite the lack of apoptosis induction in normal cells, ONC201/TIC10 appears to modestly inhibit the proliferation of normal cells at higher micromolar doses, although a GI50 was not reached. We evaluated the potential effects of ONC201/TIC10 on normal cell morphology and genotoxicity by microscopy. Immunofluorescence experiments revealed that ONC201/TIC10 does not induce changes in the morphology or gamma-H2AX levels of normal cells, unlike the DNA-damaging chemotherapy doxorubicin, despite long-term incubation with doses that are cytotoxic to tumor cells (Figure 5). A similar lack of genotoxicity or alteration of normal cell morphology was observed at lower doses of ONC201 as well (data not shown). Together these studies rationalize the selection of ONC201/TIC10 as a lead TRAIL-inducing compound that upregulates TRAIL gene transcription and protein levels, induces tumor-specific cell death, and is not cytotoxic or genotoxic to normal cells (Figure 6).
Cytotoxic TRAIL-inducing compounds exclusively activate Foxo3a
Our previous studies with ONC201/TIC10 demonstrate that the small molecule induces TRAIL and TRAIL-mediated cell death through dephosphorylation and activation of Foxo3a, which directly regulates TRAIL gene transcription at its gene promoter. Western blot analysis revealed that several of the TRAIL-inducing compounds reduced levels of phospho-Akt (Figure 7). Among the top TRAIL-inducing compounds, only TIC9 and ONC201/TIC10 inhibited phospho-ERK levels and caused the dephosphorylation of Foxo3a that is associated with its nuclear translocation and activation of target genes. Furthermore, TIC9 and ONC201/TIC10 were the only molecules that upregulated DR5, which is also a Foxo3a target gene and may contribute to the sensitivity of tumor cells to ONC201/TIC10-induced TRAIL. The observation that the exclusively cytotoxic TRAIL-inducing compounds also exclusively activate Foxo3a suggests that Foxo3a is a uniquely pro-apoptotic regulator of TRAIL-mediated apoptosis among the TRAIL gene regulators. Concomitant induction of TRAIL and DR5 by Foxo3a, and perhaps by other transcriptional mechanisms, may explain the ability of ONC201/TIC10 to induce significant levels of apoptosis with only modest induction of the ligand.
Discussion
The magnitude and kinetics of TRAIL induction observed in the screen for TRAIL-inducing compounds indicate that regulation of the TRAIL gene may be tightly controlled, perhaps because of its potent apoptotic potential. Chemical derivatives may be explored to improve TRAIL-inducing compounds in terms of potency and magnitude of TRAIL induction, as well as to potentially unveil structure-activity relationships. The observation that TIC9 and ONC201/TIC10 uniquely affect Akt- and ERK-mediated phosphorylation of Foxo3a is particularly interesting, given their exclusive ability to induce cancer cell death among the TRAIL-inducing compounds. While the two molecules are very different in structure and likely differ in other effects on cell signaling, this common effect suggests that Foxo3a may be an attractive transcriptional mechanism for inducing the TRAIL gene as an anticancer therapeutic strategy.
While our first report of ONC201/TIC10 was in preparation, another report was published describing a novel class of small molecules capable of engaging the TRAIL death receptor pathway [20]. In an effort aimed at finding small-molecule Smac mimetics, Wang et al. identified a small molecule, bioymifi, that induces the clustering of DR5 and activation of its downstream apoptotic signaling. This therapeutic approach holds promise as a new class of TRAIL-based agents, though its spectrum of activity and in vivo activity will need to be evaluated in future studies.
TRAIL-inducing compounds are a novel class of TRAIL-based therapy that engage tumors and the host system to upregulate TRAIL and potentially sensitize tumor cells to its pro-apoptotic activity through dual death receptor and ligand induction, as seen with TIC9 and ONC201/TIC10. Several chemotherapies have been reported to induce DR5 as a mechanism of tumor cell sensitization to TRAIL-mediated apoptosis [27]. Though p53 can also regulate the DR5 gene [28], both ONC201/TIC10 [26] and brefeldin A [29], of which TIC9 is a prodrug, possess p53-independent anti-cancer activity, suggesting that DR5 induction occurs through other transcription factors. p53-independent induction of DR5 in tumor cells has been confirmed for ONC201 [26]. While both the TRAIL and DR5 gene promoters contain FOXO binding sites, future studies will explore the molecular mechanism of DR5 induction, which may involve Foxo3a and/or other transcription factors. CHOP also has a binding site on the DR5 gene promoter that should be explored, given recent reports of ONC201-induced activation of the integrated stress response [30].
These studies indicate that TIC9 was much more toxic to normal cells than ONC201/TIC10, which could be the result of a more pronounced DR5 induction by TIC9 that may carry over into normal cells, unlike prior reports of ONC201/TIC10-induced DR5 in normal fibroblasts [26]. Clinical trials with ONC201/TIC10 have recently commenced in advanced cancers to evaluate its safety as a monoagent. The safety features of ONC201/TIC10, which were key selection criteria in this study, offer a range of clinical opportunities where genotoxic or otherwise toxic therapies are intractable. Furthermore, combination therapy may be facilitated by the absence of overlapping toxicities and by the broad synergy with approved anti-cancer compounds that was recently reported for ONC201/TIC10 [31]. Our studies cumulatively suggest that ONC201/TIC10 is a safe and effective antitumor agent with a distinct mechanism of action, highlighting the therapeutic potential of this novel class of anticancer drugs. This notion is further supported by its chemical structure, safety, and efficacy characteristics, which are differentiated from those of other therapies used to treat cancer [32,33].
Cell culture and reagents
Cell lines were obtained from ATCC and cultured in ATCC-recommended media in a humidified incubator at 5% CO 2 and 37°C. TICs were obtained from the NCI DTP, stored at −80°C, resuspended in DMSO, and maintained at −20°C for storage. The following compounds were ordered from the NCI DTP for follow up study:
Cell death assays
For Sub-G1 DNA content analysis, cells were trypsinized at the indicated time points and fixed in 80% ethanol at 4°C for a minimum of 30 minutes. Fixed cells were then stained with propidium iodide in the presence of RNase and analyzed on an Epics Elite flow cytometer (Beckman Coulter). Cell viability was assessed using Cell TiterGlo (Promega) in 96-well plates using the manufacturer's protocol.
Western blot analysis
Cells were treated in log-phase growth, harvested by cell scraping, centrifuged, and lysed on ice for 2 hours with cell-lysis buffer. The supernatant was collected following centrifugation, and protein concentration was determined using the Bio-Rad protein assay (Bio-Rad Laboratories). Samples were electrophoresed under reducing conditions on NuPAGE 4-12% Bis-Tris gels (Invitrogen), transferred to PVDF, and blocked in 10% non-fat milk in TBST for 1 hour. Membranes were then incubated with primary antibodies obtained from Cell Signaling at 1:1000 in 2% non-fat milk in TBST overnight at 4°C. Membranes were washed in TBST, incubated with the appropriate HRP-conjugated secondary antibody (Thermo-Scientific) for 1 hour, washed in TBST, and visualized using ECL-Plus (Amersham) and X-Ray film (Thermo-Scientific).
In vivo studies
All animal experiments were conducted in accordance with the Institutional Animal Care and Use Committee. Athymic female nude mice (Charles River Laboratories) were inoculated with 5×10^6 cancer cells in each rear flank as a 200 μL suspension of 1:1 Matrigel (BD):PBS. Treatment was administered by intraperitoneal injection at a total volume of 200 μL in DMSO. For tissue analysis, tissue was harvested from euthanized mice and fixed in 4% paraformaldehyde in PBS for 48 hours. Tissue was paraffin-embedded and sectioned by the Histology Core Facility at Penn State Hershey Medical Center. H&E staining (Daiko) and TUNEL staining (Millipore) were carried out according to the manufacturers' protocols. TUNEL assessment was carried out by manual counting of positive cells in ten random fields of view. Bioluminescent imaging and immunohistochemistry for TRAIL expression were performed as previously described [26].
Microscopy
Cells were grown in chamber slides under sterile conditions as indicated. At end point, cells were fixed using BD Cytofix/Cytoperm according to the manufacturer's protocol. Following fixation, cells were incubated with an anti-gamma-H2AX antibody (Calbiochem DR1017) at 1:200 for 2 hours, rinsed, incubated with an Alexa Fluor 488 secondary antibody at 1:250 for 30 minutes, stained with Hoechst 33342 at 1 μg/mL for 5 minutes, rinsed, and imaged.
Dynamics of Opinions with Bounded Confidence in Social Cliques: Emergence of Fluctuations
In this paper, we study the evolution of opinions over social networks with bounded confidence in social cliques. Nodes' initial opinions are independently and identically distributed; at each time step, nodes review the average opinions of a randomly selected local clique. The clique averages may represent local group pressures on peers. Nodes then update their opinions under bounded confidence: only when the difference between an agent's individual opinion and the corresponding local clique pressure is below a threshold is the agent's opinion updated, according to the DeGroot rule, as a weighted average of the two values. As a result, this opinion dynamics is a generalization of the classical Deffuant-Weisbuch model, in which only pairwise interactions take place. First of all, we prove conditions under which all node opinions converge to finite limits. We show that in the limit, the event that all nodes achieve a consensus and the event that all nodes achieve pairwise distinct limits, i.e., social disagreement, are both nontrivial events. Next, we show that opinion fluctuations may take place in the sense that at least one agent in the network fails to hold a converging opinion trajectory. In fact, we prove that this fluctuation event happens with a strictly positive probability, and we also constructively present an initial-value event under which the fluctuation event arises with probability one. These results add to the understanding of the role of bounded confidence in social opinion dynamics, and the possibility of fluctuation reveals that bringing cliques into Deffuant-Weisbuch models fundamentally changes the behavior of such opinion dynamical processes.
Introduction
Today in our society, social interactions among peers increasingly take place over online social networks. Such interactions are much broader than classical social interactions among family members, friends, co-workers, etc.
Peers from various places of the world gather via online platforms such as Facebook interest groups and Twitter/Reddit discussion threads to exchange opinions about various social, economic, or political issues [1,2]. The study of the underlying dynamics of these opinion flows is of growing importance [3][4][5]. In this regard, the classical DeGroot model sheds light on the mechanism behind trustful social interactions and on how a connected and trustful social structure leads the social members to a consensus, or agreement [6].
In the standard DeGroot model, peers hold opinions described as real-valued dynamical states, communicate with neighbors in a fixed graph representing the social network structure, and update the states iteratively at discrete time slots by averaging the neighbor states being communicated [6]. It was proven that as long as the underlying social graph is connected, all peer states converge to a common value known as the consensus state. Generalizations of this DeGroot model to continuous-time dynamics and time-varying network structures have been extensively studied in the literature, e.g., [7][8][9][10][11][12][13]. Since the DeGroot rule imposes non-expansiveness of the convex hull of the node opinions over time, such convergence to consensus has been proven for a number of deterministically switching networks, e.g., [7,10,12]. The varying network structure can also be modeled as a random graph process, over which DeGroot types of opinion dynamics have been shown to continue to lead to consensus in the mean-square or almost sure sense, e.g., [14][15][16]. For both deterministic and random switching networks, a minimum degree of connectivity is required even though the social graph may never be connected at any given time.
The strong consensus-preserving property of the DeGroot model under connectivity makes it extremely useful in explaining collaborative interpersonal relations and the resulting social learning [3,17]. However, consensus is rarely observed in real-world human groups even when the underlying social network is well connected.
Beyond consensus, social opinion formation may be clustering, in the sense that agent opinions converge to distinct finite limits, or fluctuating, in the sense that agent opinions experience lower and higher values for all time instead of converging. In the literature, there have been quite a few proposals that start from social phenomena such as antagonism/mistrust, stubbornness, biases, etc., and go on to establish asymptotic opinion formations beyond consensus. In [18][19][20][21], signed networks were used to model social networks with both trustful and mistrustful links, and clustering into bipartite groups was established for the DeGroot model with negative links under structurally balanced graphs. In [22,23], a type of randomized DeGroot model was studied in the presence of stubborn agents who never revise their opinions, and it was shown that agent opinions undergo fluctuations between the stubborn opinions. Fluctuations may also be observed for opinion dynamics over signed networks [19][20][21], where the interplay between positive and negative links may yield such opinion formations. In [24,25], individual biases were modeled as nonlinear weights on the self-opinion and local group opinion in the iterations, based on which clustering to extreme opinions was also revealed.
Along this line of research, there has been an important development on bounded confidence in social interactions. Bounded confidence attempts to capture the social tendency that peers are more inclined to believe only others whose opinions are within a vicinity of their own, despite being exposed to opinions in diverse ranges. There are mainly two types of bounded confidence models. In the Deffuant-Weisbuch model [26], peers meet randomly in pairs and exchange their opinions, but only revise their opinions by the DeGroot rule when their opinion difference is below a threshold. In the Hegselmann-Krause model [27][28][29], each agent takes as its new belief the average of the opinions of peers whose opinions differ from its own by less than a threshold. The bounded confidence Deffuant-Weisbuch and Hegselmann-Krause models preserve the non-expansive property of the network states, and thus convergence of individual states is expected [26][27][28][29].
In this paper, we propose and study opinion dynamics over a social network with bounded confidence in social cliques. Social cliques are local complete subgraphs of a social network. At each time step, nodes compute the average opinions of a randomly selected clique with a given cardinality, as a representation of peer pressure in local social networks. Then nodes update their opinions by averaging their current opinion and the clique peer pressure, when the difference between the two is below a prescribed bound. The initial node opinions are randomly assigned, and this clique bounded confidence model is a generalization of the classical Deffuant-Weisbuch model, where only pairwise interactions are allowed. First of all, we present and prove conditions under which all node states converge to finite limits, and show that consensus and disagreement clustering are both nontrivial events. Next, we prove that fluctuations of the node opinions may take place, in the sense that at least one node state in the network fails to converge to a limit value. In particular, we show that this fluctuation event happens with a strictly positive probability, and we also constructively present an event on the network initial opinions under which fluctuation arises with probability one.
These results add to the understanding of bounded confidence models in social opinion dynamics, and the possibility of fluctuation reveals a new type of social opinion formation that arguably better matches our real-world experience.
The remainder of the paper is organized as follows. In Section 2, we present the social network model for our study and introduce our problems of interest. Section 3 presents our main results. Finally, Section 4 concludes the paper with a few remarks on potential future directions. All proofs of our statements are given in the Appendix.
Problem Definition
In this section, we propose a social network model for bounded confidence in social cliques, where peers in a social network randomly interact with each other in cliques, i.e., local complete subgraphs [40], and then define our problems of interest.
The Social Network Model
Consider a social network of n nodes (peers) indexed by the set V = {1, 2, . . ., n}. Time is slotted at t = 0, 1, 2, . . . . At each time t, each node i ∈ V randomly selects m (1 ≤ m ≤ n) nodes from the network node set V as its neighbors, independently of the other nodes' selections. This results in a random set of neighbors, termed a social clique and denoted N_i(t), for i ∈ V and t = 0, 1, . . . . Let N = {V_1, . . ., V_z} be the set containing all subsets of V with m elements, where z = C(n, m) is the number of m-combinations of V. For the random neighbor set N_i(t), we impose the following assumption.
Each node i holds an opinion x_i(t) ∈ R at time t. After interacting with the neighbors in the set N_i(t), each node i observes the following clique opinion, computed as the average of the peers' opinions in the group: x̄_i(t) = (1/m) Σ_{j ∈ N_i(t)} x_j(t).
Then the nodes update their opinions for time t + 1 according to x_i(t + 1) = x_i(t) + δ (x̄_i(t) − x_i(t)) if |x̄_i(t) − x_i(t)| ≤ η, and x_i(t + 1) = x_i(t) otherwise, for all i ∈ V. Here 0 < δ < 1 is the mixing parameter and η > 0 is the confidence level, both of which are assumed to be constants. For the initial node opinions x_1(0), . . ., x_n(0), we impose the following assumption.
A2. The x_i(0), i ∈ V, are independent and identically distributed, uniformly on [0, 1].
Assumptions A1 and A2 hold throughout the paper as our standing assumptions, without further mention.
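As a concrete illustration, the dynamics (1) can be sketched in a few lines of Python. This is a minimal simulation under stated assumptions: each clique N_i(t) is taken to be an m-element subset of V drawn uniformly at random and independently per node (possibly containing i itself), and the values of δ and η are illustrative.

```python
import random

def simulate(n=20, m=4, delta=0.5, eta=0.3, steps=200, seed=0):
    """Sketch of the clique bounded-confidence dynamics (1).

    Assumptions not fixed by the text: each N_i(t) is sampled uniformly
    from the m-element subsets of V, and delta = 0.5 is illustrative.
    """
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]            # A2: i.i.d. uniform on [0, 1]
    for _ in range(steps):
        x_new = list(x)
        for i in range(n):
            clique = rng.sample(range(n), m)        # random clique N_i(t)
            avg = sum(x[j] for j in clique) / m     # clique opinion (peer pressure)
            if abs(avg - x[i]) <= eta:              # bounded confidence check
                x_new[i] = x[i] + delta * (avg - x[i])  # DeGroot-style mixing
        x = x_new
    return x

opinions = simulate()
print(min(opinions), max(opinions))   # all opinions remain inside [0, 1]
```

Since every update is a convex combination of values in [0, 1], the opinions never leave the unit interval.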
Related Work
The proposed social network model with clique bounded confidence is a generalization of the classical Deffuant-Weisbuch type of social interactions [26]. In Deffuant-Weisbuch models [26], peers meet randomly in pairs only, whereas here N_i(t) is a clique with m nodes. The boundedness of social confidence in Deffuant-Weisbuch models is inherited in our model, where the clique opinions describe peer pressure in a local social group. When m is reduced to two, our model recovers the Deffuant-Weisbuch model with a homogeneous confidence bound [26]. Another closely related bounded confidence model is the Hegselmann-Krause model [27][28][29], where at each round nodes average their states over a deterministic neighborhood formed by the nodes whose states lie within a given bound. The confidence bound thus leads to a state-dependent communication graph, in contrast to static or time-dependent communication graphs [6].
The random clique selection process is also a generalization of gossip processes, where node interactions are held between pairs [39,41]. The advantage of utilizing cliques in a gossip process to accelerate information dissemination or computation has been noted in [42][43][44].
Problems of Interest
We are interested in the asymptotic behaviors of the node opinions from a probabilistic point of view. We use P to denote the probability measure over the total randomness generated by both the neighbor selection process and the nodes' initial values. The following example illustrates that the proposed bounded confidence model in social cliques may undergo drastically different behaviors compared to the typical Deffuant-Weisbuch and Hegselmann-Krause models, in the sense that node states may fail to converge for a certain range of parameters.
Example 1. Consider a network of n = 20 nodes. Let m = 4 be the size of the cliques for social interactions.
Clearly, when η decreased from 0.3 to 0.2, the network opinions underwent a phase transition from convergence to a global consensus to random fluctuations.
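The transition in Example 1 can be probed numerically with a simple diagnostic: the total per-step opinion movement over a trailing window, which shrinks toward zero under convergence and stays bounded away from zero under fluctuation. The sketch below is hedged: the mixing parameter δ and the clique-sampling details are assumptions, since Example 1 does not specify them, and the outcome of any single run is random.

```python
import random

def tail_motion(eta, n=20, m=4, delta=0.5, steps=500, tail=50, seed=1):
    """Sum of |x_i(t+1) - x_i(t)| over the last `tail` steps of a run of (1)."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    motion = 0.0
    for t in range(steps):
        x_new = list(x)
        for i in range(n):
            clique = rng.sample(range(n), m)
            avg = sum(x[j] for j in clique) / m
            if abs(avg - x[i]) <= eta:
                x_new[i] = x[i] + delta * (avg - x[i])
        if t >= steps - tail:
            motion += sum(abs(a - b) for a, b in zip(x_new, x))
        x = x_new
    return motion

# Compare the two confidence levels from Example 1; no particular outcome is
# asserted here, since individual runs are random.
print(tail_motion(eta=0.3), tail_motion(eta=0.2))
```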
In view of Example 1, we are interested in the following questions on the state evolution of the social dynamics model (1): Q1. Are there conditions on the network parameters (m, n, δ, η) so that all node states converge to finite limits with probability one?
Q2.What are the probabilities of the limiting values agreeing or disagreeing when convergence is guaranteed?
Q3. Can we establish ranges on the network parameters under which opinion fluctuations emerge from the dynamical process almost surely?
Answers to these questions will add to the understanding of social opinion dynamics with bounded confidence.

In particular, almost sure fluctuations of node opinions have only been observed or proved in the literature for opinion dynamics over a type of signed social networks [18][19][20]. The proposed model therefore might shed light on the study of social interaction mechanisms leading to non-convergent opinion formations, as convergence and consensus are rarely observed for public opinions in the real world.
Main Results
In this section, we present the results on the asymptotic behavior of our bounded confidence opinion dynamics model (1).
Almost Sure Convergence Conditions
First of all, we present the following result on the conditions under which the network node opinions all converge to finite limits.
Theorem 1 Suppose m = n. Then the opinion dynamics (1) leads the node states to convergence almost surely. To be precise, there exist random variables B_1, . . ., B_n such that lim_{t→∞} x_i(t) = B_i almost surely for each i ∈ V.

Theorem 1 shows that if the sampling of cliques is always across the entire network, all node states converge to finite limits. This result is consistent with the studies on the Deffuant-Weisbuch and Hegselmann-Krause models. Further, we have the following result showing that the probabilities of having consensus or pairwise distinct limiting states are both nontrivial.
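When m = n, the sampled clique is always the entire node set, so every node observes the same clique opinion, the global average, and the trajectory of (1) is fully determined once X(0) is drawn. A quick check of this determinism (the parameter values and initial profile below are illustrative):

```python
def step(x, delta, eta):
    """One iteration of (1) with m = n: the clique average is the global mean."""
    mean = sum(x) / len(x)
    return [xi + delta * (mean - xi) if abs(mean - xi) <= eta else xi
            for xi in x]

def run(x0, delta=0.5, eta=0.2, steps=300):
    x = list(x0)
    for _ in range(steps):
        x = step(x, delta, eta)
    return x

x0 = [0.05, 0.1, 0.5, 0.55, 0.95]   # an arbitrary initial profile (illustrative)
a, b = run(x0), run(x0)
print(a == b)                        # True: no randomness remains after t = 0
```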
Definition 1
The events E_consensus := {B_1 = B_2 = · · · = B_n} and E_disagreement := {B_i ≠ B_j for all i ≠ j} are termed the consensus event and the disagreement event, respectively.

Theorem 2 Suppose m = n and n ≥ 4. Let η < 1/(n + 1). Then both E_consensus and E_disagreement are nontrivial events, i.e., each takes place with a strictly positive probability.
Theorem 1 and Theorem 2 are certainly quite restrictive since they only apply to the case with n = m.
This condition n = m allows us to thoroughly develop an approach that fully decomposes the event space of x_i(t), i ∈ V, according to the initial values; it is only under this decomposition that the kinds of results in the two theorems become possible. Next, we introduce the following definition on the ordered statistics of the node opinions.
Definition 2 (i) The ordered statistics of the opinion states x_i(t), i ∈ V, are defined as x_{(1)}(t) ≤ x_{(2)}(t) ≤ · · · ≤ x_{(n)}(t). (ii) The ordered average opinions among the cliques in N are defined as x_{[1]}(t) ≤ x_{[2]}(t) ≤ · · · ≤ x_{[z]}(t), where x_{[k]}(t) represents the k'th smallest average node state among the C(n, m) cliques in N.
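For small networks, the ordered clique averages of Definition 2 can be enumerated directly over all C(n, m) cliques; the concrete values of n, m, and the opinion vector below are illustrative assumptions.

```python
from itertools import combinations

def ordered_clique_averages(x, m):
    """All C(n, m) clique averages of the opinion vector x, sorted ascending."""
    return sorted(sum(x[j] for j in clique) / m
                  for clique in combinations(range(len(x)), m))

x = [0.1, 0.2, 0.6, 0.9]            # illustrative opinions for n = 4
avgs = ordered_clique_averages(x, m=2)
print(len(avgs))                     # C(4, 2) = 6 cliques
print(avgs[0], avgs[-1])             # smallest and largest clique averages
```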
The next theorem further shows that the node states may even preserve their order throughout the entire time horizon for a certain range of initial values. Denote ∆_k: Theorem 3 Assume m = n and 0 ≤ ∆_k ≤ η for some k ∈ V. Further, let the following hold: Then, along the opinion dynamics (1), the order of the node states x_i(t) is preserved for all t = 0, 1, . . . and for all i ∈ V. In this case, the node states converge almost surely to limits that retain this order. We remark that the condition m = n in Theorems 1, 2, and 3 implies, in the social network context, that the clique opinion is formed over the entire network; in other words, peers are under the pressure of the society's average opinion at every iteration step. We believe similar results would continue to hold for general m, as suggested by Example 1 with m = 4 and n = 20, and the proofs can be established by extending the same line of analysis used for m = n. However, a full treatment would be much more involved, as the proofs rely on explicit constructions of certain subtle probabilistic events.
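For m = n, the order-preservation property behind Theorem 3 can be sanity-checked numerically: each update applies the same monotone affine map x ↦ x + δ(mean − x) to every node inside the confidence band and leaves the others fixed, so no node overtakes another within a step. The δ and η values and the initial profile below are illustrative.

```python
import random

def step(x, delta=0.5, eta=0.2):
    """One iteration of (1) with m = n: every node sees the global mean."""
    mean = sum(x) / len(x)
    return [xi + delta * (mean - xi) if abs(mean - xi) <= eta else xi
            for xi in x]

rng = random.Random(2)
x = sorted(rng.random() for _ in range(8))   # a sorted initial profile
for _ in range(100):
    y = step(x)
    # weak order preservation: no node overtakes one that started above it
    assert all(y[i] <= y[i + 1] + 1e-12 for i in range(len(y) - 1))
    x = y
print("order preserved over 100 steps")
```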
Opinion Fluctuations
We introduce the following definition on the fluctuation events.
Definition 3
The fluctuation event for the opinion dynamics (1) is defined as E_fluctuation := {there exists i ∈ V such that lim_{t→∞} x_i(t) does not exist}. We present the following theorem, which establishes a condition under which fluctuation takes place with a strictly positive probability along (1).
Then, along the opinion dynamics (1), E_fluctuation is a nontrivial event.
From the proof of Theorem 4, a lower bound on P(E_fluctuation) can be established explicitly. It is of further interest to identify explicit sets of initial values under which E_fluctuation can be proven to happen. To this end, we constructively define the following event on the initial node states: We present the following result.
Further Discussions
The proofs of the results stated in this section are given in the Appendix. Most of the proofs are established by constructive arguments, in which we carry out probabilistic analysis on a series of special events for the initial values. Such events are built from the ordered statistics of the node states and of the clique average states. The effects of the bounded confidence on the convergence or fluctuation events are then estimated with upper and/or lower bounds, which eventually lead to the presented results. Compared with the classical Deffuant-Weisbuch and Hegselmann-Krause models and their variations, the clique bounded confidence opinion dynamics (1) brings in an interplay between the size of the network n and the size of the cliques m. This interplay uncovers new phenomena: both consensus and disagreement can happen with nontrivial probabilities (Theorem 1 and Theorem 2), and fluctuations also take place with a nontrivial probability, which may be determined entirely from the initial states (Theorem 4 and Theorem 5). To the best of our knowledge, results of these kinds are established here for the first time in the literature on bounded confidence models. At the same time, the coupling between n and m brings fundamental difficulties to the analysis, which largely limited our study to a few special ranges of the parameters (n, m, η, δ) and of the node initial states.
Conclusions
We have proposed a generalized Deffuant-Weisbuch model in which the evolution of opinions over a social network is governed by bounded confidence in social cliques. With initial opinions independently and identically distributed, at each time step peers review the average opinions of a randomly selected local clique with a prescribed cardinality. Nodes then update their opinions by averaging their current opinion and the randomly realized clique average, but only when the clique average falls within the bounded confidence interval. We proved a series of results on the asymptotic behaviors of the social opinions at a system level, focusing on three events: consensus, disagreement, and fluctuation. Remarkably, all three events happen with nontrivial probabilities under suitable network conditions, in sharp contrast to the universal clustering behavior in bounded confidence social network models. Future work includes extending the results to general network structures and validating the established opinion formations against real-world social network data.
Appendix A. Proof of Theorem 1
Before proceeding to the explicit proof of Theorem 1, we present some fundamental preliminaries on the initial node states.
Denote the network node states X(t) = (x_1(t), x_2(t), . . ., x_n(t)) for all t ≥ 0. With x_{[k]}(t) defined as the k'th smallest clique-average state among the C(n, m) cliques in N, we introduce three disjoint sets for the initial state X(0) as below. (i) It is worth noting that each I_i, i = 1, 2, 3, is defined as a union of several disjoint sets, and that for any given initial network state there is a unique i ∈ {1, 2, 3} such that the corresponding initial state X(0) ∈ I_i, after appropriately ordering the network nodes.
In view of the above analysis, the proof reduces to showing that all node states converge to finite limits when X(0) ∈ I_i, for each i = 1, 2, 3. Note that when m = n, the updates of x_i(t), i ∈ V, become deterministic once the initial values are randomly assigned, and x_{[k]}(t) = (1/n) Σ_{i=1}^{n} x_i(t) for all k ∈ V and t ≥ 0.
A.1 Proof for X(0) ∈ I_1 Note that I_1 is a union of disjoint subsets A_k, k = 1, 2, . . ., n − 1. Thus, in this subsection we approach the proof by studying the convergence of the update rule (1) under X(0) ∈ A_k. First of all, we consider X(0) ∈ A_1, and proceed to show that the node state x_1(t) converges a.s. and all other node states x_i(t), i = 2, . . ., n, remain at their initial values for all t ≥ 0. Without loss of generality, we assume that 0 When t = 1, it immediately follows from (1) that x_i(1) = x_i(0) for all i ≠ 1, and Besides, we can also establish where a) is obtained by using (2) and b) is obtained by using 1/(n − 1) > δ/n. Besides, with (2), it follows that When t = 2, similarly we can obtain from (1) that x_i(2) = x_i(1) for all i ≠ 1, and Besides, we can also establish where a) comes from (2) and (3).
Along this way, we can recursively apply the previous arguments to the cases t = 3, 4, . . . . For any t ≥ 3, suppose x_i(t) = x_i(0) for all i ≠ 1 and where This, together with (1), leads to and Furthermore, by some simple calculations, it can be concluded that |x_{[1]}(t + 1) − x_1(t + 1)| ≤ η and In summary of the previous analysis, we can conclude that x_j(t) = x_j(0) for all j ≠ 1 and t ≥ 0, and In the previous analysis, we have shown that all node states asymptotically converge to some finite values under the initial conditions X(0) ∈ A_1. Similarly, we can apply the previous arguments to the other initial conditions X(0) ∈ A_k, k = 2, . . ., n − 1, again leading to asymptotic convergence of all node states. For the sake of simplicity, the corresponding details are omitted. Therefore, the statements in Theorem 1 can be concluded for the initial states X(0) ∈ I_1.
A.2 Proof for X(0) ∈ I_2 Note that the set I_2 comprises a number of disjoint subsets B_kl. We now proceed to prove the theorem for each subset B_kl.
At time t* + 1, we have which implies that for t = t* there holds Moreover, when t > t*, it can be verified that x_i(t) = x_i(0) for i = 3, 4, . . ., n, and for i = 1, 2 Therefore, all node states x_j(t), j ∈ V, are convergent a.s.
In the previous analysis, we have shown that all node states converge to some finite values almost surely under the initial conditions X(0) ∈ B kl with (k, l) = (1, 1).Similarly, we can establish the same conclusions under other initial conditions X(0) ∈ B kl for other pairs of (k, l), which is omitted herein for the sake of simplicity.Therefore, the statements in Theorem 1 are shown to be true for all initial states X(0) ∈ I 2 .
A.3 Proof for X(0) ∈ I 3
For the initial value set I 3 := C 1 ∪ C 2 , we can draw the following immediate conclusions.
(ii). When X(0) ∈ C 2 , the node state updating rule (1) becomes a standard DeGroot model over a complete interaction graph. Thus, all node states x i (t) converge to the average of the initial states.
The desired theorem holds.
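As a quick numerical illustration of statement (ii), the following sketch (with illustrative numbers and a hypothetical step size δ, not values from the paper) iterates equal-weight DeGroot averaging over a complete graph and shows all node states converging to the average of the initial states:

```python
import numpy as np

def degroot_complete(x0, delta=0.3, steps=300):
    """Each node moves a fraction delta toward the current global average,
    i.e., DeGroot averaging over a complete interaction graph."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        # the mean is invariant under this update, deviations shrink by (1 - delta)
        x = (1.0 - delta) * x + delta * x.mean()
    return x

x0 = [0.1, 0.4, 0.55, 0.9]
x_final = degroot_complete(x0)
# every node state ends up at the average of the initial opinions
```

Since the average is preserved at every step while deviations from it contract geometrically, the limit is exactly the initial average.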
B Proof of Theorem 2
Denote the state range at time t as D [i,j] (t) for all t ≥ 0 and i, j ∈ V. We then introduce three disjoint sets for the initial state X(0) as below. With these sets in place, the proof can be divided into two steps, corresponding to the subsequent two subsections, respectively.
B.1 Non-triviality of E consensus
First of all, we note that all opinions reach consensus if D [1,n] (0) < η. Thus, the limit state set for X(0) ∈ I * 1 is a subset of E consensus . Besides, P{I * 1 } > 0, yielding that P{E consensus } > 0. Next we proceed to prove that P{E consensus } < 1 by showing that the node states under X(0) ∈ I * 2 do not reach consensus. Given any X(0) ∈ I * 2 , it can be found that This then implies > η, i = 2, 3, . . ., n where we have used η^(n−1) < (1/(n + 1))^(n−1) < 1/n to obtain a). Thus, all node states x i (t) remain unchanged for all t. Therefore, we have P{I * 2 } > 0 and P{E consensus } < 1.
B.2 Non-triviality of E disagreement
It is observed that With this in mind, we now proceed to prove that the limit set of where a) is obtained by using (n/2)(1/n) ≤ 1/2 and b) is obtained by using η ≤ 1/5 ≤ 1/(n + 1) and min i∈V |x i (0) − x [1] (0)| > η. It then immediately follows that X(t) = X(0) for X(0) ∈ I * 3 . Thus, the limit set of I * 3 is a subset of E disagreement , i.e., P{I * 3 } > 0 and P{E disagreement } > 0. This proves the non-triviality of E disagreement . The proof is thus completed.
C Proof of Theorem 3
C.1 Order Preservation
In this subsection, we aim to prove that along the opinion dynamics (1), the order of the node states x i (t) is preserved for all t = 0, 1, . . .and for all i ∈ V. We fix any k and denote x i (0) > 0 for k < j ≤ n.Thus, the order of the node states x i (t) is preserved at t = 1.
For any t ≥ 1, we suppose that x i (t) ≤ x k (t) ≤ x j (t) for i < k < j and the order is preserved.It can be seen from the proof of Theorem 1 that |x k (t) − x[1] (t)| ≤ η.Then, according to (5), node states x k (t) and the average x[1] (t) strictly increase, which yields > 0 for k < j ≤ n.Thus, the opinion order is preserved at time t + 1.
C.2 Convergence Limits
Recalling the analysis in Appendix A.1 for X(0) ∈ I 1 , we can obtain This, together with (1), implies Furthermore, by induction we can obtain The first and third of the above equations immediately render that lim The proof is thus completed.
D Proof of Theorem 4
Before proceeding to the explicit proofs, we first introduce some instrumental terminology to facilitate the subsequent analysis. We fix the agent indexes by letting x i (0) = x [i] (0), and denote by S i (0) = (k (i) 1 (0), k (i) 2 (0), k (i) 3 (0)) the selection tube of the average opinion x̄ i (0), i ∈ V = {1, 2, . . ., C m n }, where k (i) k (0) opinions are selected from G k , k = 1, 2, 3, for the average opinion x̄ i (0) at time 0. For simplicity, we denote x̄ i (0) ∈ S i (0) = (k 1 , k 2 , k 3 ). Define an equivalence relation S for the average opinions {x̄ i (0)}: if S i (0) = S j (0), i, j ∈ V, then x̄ i (0) S x̄ j (0). Then the quotient set of the average agent index set V under the equivalence relation S is defined as where K s = (min{s, n − s + 1, m + 1} + min{s, n − s + 1, m})/2, and It can be easily verified that
D.1 Measures of Quotient Sets
The following lemma is given to measure the quotient sets of average values {x [i] (0), i ∈ V}.
Lemma 1 Given any initial values and α m > 0, all average opinion values {x [i] (0), i ∈ V} can be separated into K s cliques, and for any l ∈ {1, 2, . . ., K s }, there hold
Proof. For any {x [i] (0)}, i ∈ V, we denote Then we can obtain that where the lower bound is obtained by using x 1 (0) ≤ x i (0) for i ∈ G 1 and x s+1 (0) ≤ x j (0) for j ∈ G 3 , and the upper bound is obtained by using x i (0) ≤ x s−1 (0) for i ∈ G 1 and x j (0) ≤ x n (0) for j ∈ G 3 .
To prove the bounds of R l and C l in (8) and (9), we study the difference between x [i+1] (0) and x [i] (0) in the following three cases.
Taking the parameter C l into account, we can obtain that and where the equality a) is deduced by using the fact that there exists only one pair (i 1 , j 1 ) for the averages x [i] (0) and x [i+1] (0), the inequality b) is obtained by using for i 1 , j 1 belonging to the same set, and the inequality c) is obtained by using |x j (0) 2 (0) = 0 for S i (0), then there hold where we have used max i,j∈G to obtain the inequality a). On the other hand, if i, j ∈ H l and k Similarly, the lower bound of R l can be concluded by where the equality a) holds by setting max i,j∈H Combining the above three cases, the lemma is concluded.
D.2 Opinion Fluctuations
In this subsection, we proceed to prove that the opinion order is preserved and node s fluctuates a.s. under certain conditions, based on the measure of the quotient sets.
then the order is unchanged and node s fluctuates a.s.
Proof.The proof of this lemma consists of three steps.
Step 1.At the first step, we aim to show that agent s has a positive probability to change its opinion values at any time t, and the opinion order is preserved.
Without loss of generality, we consider the case with x s (0) ∈ [α j , α j ] for some j ∈ {1, 2, . . ., K s }.At t = 0, by Lemma 1, if η > αs m , we then can obtain This implies that agent s will change its value if it selects the average opinion with the index in any H j , j ∈ {1, 2, . . ., C m n }.Thus, by Assumption 1, agent s has a positive probability to change its opinion values at t = 0.
We now show that the opinion order is preserved at t = 0. It is noted that where the inequality a) is deduced by (11) and b) by (m Therefore, the opinion order is preserved at t = 0. Next we proceed to show that agent s has a positive probability to change its opinion values, and the opinion order is preserved at t = 1. If x s (1) > x s (0), then x s (1) ≤ x s (0) + δ min{R l , η}. Similarly, all the average >η where a) is obtained by using A similar conclusion can be obtained if x s (1) < x s (0). Thus, agent s has a positive probability to change its value at time 1.
Taking the opinion order at t = 1 into consideration, we observe that where a) is deduced by using (11) and min{R l , η}/(m − 1) > δη/m. This, together with Lemma 1, concludes that the opinion order is preserved at time 1.
Similarly, with a proof similar to that of Theorem 1, we can show that the opinion order is preserved at any time. Thus, agent s has a positive probability to change its opinion values at any time. Specifically, our proof in (3) shows that agent s approaches the upper bound of {x [i] (t)} if the selected average opinion is larger than x s (t), i ∈ H j , t ∈ N, while agent s still has a positive probability to change its value in the opposite direction.
Step 2. At the second step, we proceed to show that the opinions of all other agents remain unchanged for t > 0, even if agent s moves over its maximum range.
The extremal way for agent s to influence any other opinion is through the adjustment range of the average opinions, so we only need to check whether a changed average opinion falls in the confidence range of any other opinion. By Lemma 1, and with a proof similar to that of Theorem 1, it can be seen that for any t > 0, the maximum movable range of x s (0) is m/(m − 1) R l , and then the maximum movable range of x̄ i (0) for any i ∈ V is 1/(m − 1) R l . By (16), it is observed that agent s is not influenced by average opinions out of H j . Thus, the movable range of any average opinion is not larger than 1/(m − 1) max l {R l }. With the same method as in deriving the inequality (15), we get that all other opinions remain unchanged at any time.
Step 3. At the final step, we will show that the upper limit of agent s is larger than the lower limit of agent s almost surely.By Lemma 1, we have This indicates that agent s can be influenced by at least two average opinions within the index set H j .
Without loss of generality, we set H j = {1, 2, . . ., i j }.In addition, agent opinion By Assumption 1, with a positive probability, the state of agent s will decrease if it selects average opinion x [1] (0) and increases if it selects average opinion x[2] (0).
Towards this end, in the following we consider two cases where k i (0) = 0 and k i (0) = 1, respectively.
(i) If k i (0) = 0 for i ∈ H j , then all average opinions x [i] (0) are unchanged for i ∈ H j . Thus, the opinion dynamics (1) reduce to a piecewise update rule.
This yields that lim sup t→∞ x s (t) > lim inf t→∞ x s (t). (ii) If k i (0) = 1 for i ∈ H j , then all average opinions x [i] (t) will change if x s (t) selects the average opinion x [1] (t) for any t ≥ 0. Thus, the opinion dynamics (1) can be rewritten accordingly. This, together with the fact that x [1] (0) ≤ x s (0) < x [2] (0), yields that lim sup t→∞ x s (t) > lim inf t→∞ x s (t) holds.
Summarizing the previous analysis, the proof of Lemma 2 is complete.
D.3 Fluctuation Events
Instrumental to the subsequent analysis are the following two technical lemmas in probability theory.
Lemma 3 ([45, Theorem 1.6.9], change of variables formula) Suppose the Jacobian of the transformation is positive, and the probability densities of X and Y exist; then we have where y = (y 1 , y 2 , . . ., y K ).
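As a minimal numerical check of the change-of-variables formula for a linear map (a hypothetical one-dimensional example for illustration, not the map L s used later in the proof): if Y = aX + b with X uniform on [0, 1], then f Y (y) = f X ((y − b)/a)/|a|, and the transformed density still integrates to one.

```python
import numpy as np

a, b = 2.0, 1.0  # linear map Y = a*X + b (illustrative values)

def f_X(x):
    """Density of X ~ Uniform(0, 1)."""
    return ((x >= 0) & (x <= 1)).astype(float)

def f_Y(y):
    """Change of variables for a monotone linear map: f_Y(y) = f_X((y-b)/a)/|a|."""
    return f_X((y - b) / a) / abs(a)

# verify that f_Y integrates to 1 over a grid covering its support [b, a+b]
y = np.linspace(-1.0, 5.0, 600001)
dy = y[1] - y[0]
total_mass = f_Y(y).sum() * dy  # Riemann sum, should be ~1
```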
With these lemmas in mind, we are now ready to prove Theorem 4. With Lemma 1 and Lemma 2, the proof of Theorem 4 reduces to finding initial conditions that satisfy the conditions of Lemma 2, which consists of the following three steps.
Step 1. At the first step, we provide an initial condition such that x s (0) ∈ [α j (0), α j (0)] for certain j ∈ {1, 2, . . ., K s }. We denote Given any s ∈ {n − m + 1, . . ., m − 1} and an opinion selection tube (k − 1, 1, m − k), we can obtain that We then consider the opinion selection tube (k, 0, m − k), and can obtain that With (18) and (19), the following relation between adjacent average opinion cliques follows: , if the selection tube of H i j is (k, 0, m − k) the selection tube of It follows that agent s fluctuates almost surely if which satisfies the initial conditions in Lemma 2, and thus completes the first step.
Step 2. At this step, we show that agent s fluctuates almost surely.The following two cases are studied, respectively.
(i) If s ∈ H i 1 , then with a method similar to that of Theorem 1, we need to ensure that all opinions less than s cannot be influenced by any average opinions. In fact, if η < Similarly, if Note that for both average opinion ranges in (18) and (19), if either (22) holds for any k = 1, 2, . . ., s or (23) holds for any k = 1, 2, . . ., s − 1, then we can get that opinion s will fluctuate a.s.
Step 3. Finally, we derive the lower probability bound of the initial condition x s (0).By the previous analysis, we only need to consider the parameter restrictions for the initial states under Assumptions A1 and A2.
By the inequalities (20) and (24), lim sup x i (t), lim inf x i (t), i ∈ V|∃k ∈ {1, 2, . . ., w}, where Then by recalling Lemma 4, we can obtain the density function of X s as Denote Further by Lemma 3, we can obtain that f Ys (y) = (1/|L s |) f Xs (L −1 s y) = f Xs (L −1 s y) where y = (y 1 , y 2 , y 3 , y 4 , y 5 ), 0 ≤ y i < 1, ∑ 5 i=1 y i ≤ 1. Thus, the density function (25) can be transferred into Therefore, P{E fluctuation } ≥ ∫ B 1 ∪B 2 f Ys (y)dy where and We can verify from Lemma 4 that We thus have Particularly, it is noted that the above inequality requires the confidence bound η to satisfy with m ≥ 4. The proof is completed.
E Proof of Theorem 5
Fix the agent indexes by letting x i (0) = x [i] (0) for i ∈ V and let s ∈ {n − m + 1, . . ., m − 1}. We denote the selection tube S i (0) = (k (i) 1 (0), k (i) 2 (0), k (i) 3 (0)) with the same definition as in Appendix D for i ∈ V.
With these preliminaries, we now proceed to prove this theorem, consisting of the following steps.
Step 1. At this step, we aim to prove that min |x we have by (11) Then, by calculating min x [k+1] (0) − max x [k] (0) and using m ≤ 2n/3, we obtain min With this in mind, we further consider the following two cases.
(i) If m is even, then there must exist an index [j] such that N [j] (0) includes m 2 selections from the index set G 1 (0) and m 2 selections from G 3 (0).
(ii) If m is odd, then we can get that there exists an index [k] such that N [k] (0) includes m−1 2 selections from the index set G 1 (0), m−1 2 selections from G 3 (0) and one from G 2 (0).
In light of both cases above, it can be concluded that there always exists at least one index such that, because all opinion values in the same index group are the same. By the initial value setting, we have Besides, it follows that which implies that x [i] (0) remains unchanged for i ≠ K.
Step 2. By (26), the opinion order is preserved at time 0, and x s (1) = x s (0) for s ≠ K. Besides, there holds max{x Similarly, at t = 1, it can be deduced that where x [j] (0) = x [k] (0). Therefore, x s (1) remains unchanged for s ≠ K. Similarly, we can get that x i (2) remains unchanged.
Step 3. At this step, we proceed to show that x i (t), i ≠ K, remains unchanged for any t ∈ {0, 1, 2, . . .}.
Recursively, we assume that x i (t − 1) remains unchanged at time t − 1 for i ≠ K and |x , one can see that x i (t) remains unchanged at time t.
Step 4. Finally, we prove that lim sup t→∞ x K (t) > lim inf t→∞ x K (t). By the definition of S i (0), i ∈ V, there exists an i 0 ∈ V such that 0 ≤ x K (0) − x [i 0 ] (0) < η and another j 0 ∈ V such that 0 ≤ x [j 0 ] (0) − x K (0) < η. In fact, we can set , if m is even, and By the order preservation of {x [j] (t)} for j ≠ K at Step 3, we can obtain that max j∈V {|x [j] (t) − x [j] (0)|} ≤ η/(m − 1). At time t = 1, we will show that x K (1) will increase or decrease with a positive probability. In fact, there always exists k 0 ∈ V such that Thus, if m is odd, x K (1) takes one of three values: one value if the selected average is x [i 0 ] (0); (1 − δ)x K (0) + δ(1/2 + 1/(2m)) if the selected average is x [j 0 ] (0); and another value if the selected average is x [k 0 ] (0). We then analyze certain limits in the following six cases: (i) If m is an odd number and the selected tube is always S i 0 (t), t ≥ 0, then (ii) If m is an even number and the selected tube is always S i 0 (t), t ≥ 0, then (iii) If m is an odd number and the selected tube is always S j 0 (t), t ≥ 0, then x K (t) → 1/2 + 1/(2m) as t → ∞; (iv) If m is an even number and the selected tube is always S j 0 (t), t ≥ 0, then as t → ∞; (v) If m is an odd number and the selected tube is always S k 0 (t), t ≥ 0, then (vi) If m is an even number and the selected tube is always S k 0 (t), t ≥ 0, then Note that if the selected average value of x K (t) is x [i 0 ] (t), t = 0, 1, . . ., T , then It is clear that |x K (t) − x ). With a similar method, for any mixed selection of {x [s] (t), s ≥ k 0 }, we can get that max{x [j 0 ] (t) − x [k 0 ] (t)} < (1/(2m))(1 + 1/m) < η for the extremal condition that x [j 0 ] (t) is the selected average of agent K.
Also, x K (t) ∈ [x [k 0 ] (t), x [j 0 ] (t)] ⊂ (1/2, (1/2)(1 + 1/(m − 1))). Based on the above analysis, since the lower limit is not larger than any of the given limits and the upper limit is not smaller than any of the given ones, we get that lim sup t→∞ x K (t) > lim inf t→∞ x K (t). The proof is thus completed.
E disagreement := {B 1 , B 2 , . . ., B n are pairwise distinct} are termed, respectively, the consensus event and the disagreement clustering event along the opinion dynamics (1).
Now we are ready to present the proof of Theorem 4, consisting of three steps. We first give the parameter conditions for opinion fluctuation and the bounds of the parameter ranges in Subsection D.1. Then in Subsection D.2 we show that opinion fluctuation happens under certain parameter conditions. Finally, we prove that opinion fluctuation happens with a positive probability in Subsection D.3.
"year": 2021,
"sha1": "617cf5965076bbd9953d177fdaf8e7ebbf40d219",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "617cf5965076bbd9953d177fdaf8e7ebbf40d219",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Interaction of Hyaluronan Acid with Some Proteins in Aqueous Solution as Studied by NMR
According to current literature data, hyaluronic acid (HA) present in the extracellular matrix can interact with proteins and thereby affect several important functions of the cell membrane. The purpose of this work was to reveal the features of the interaction of HA with proteins using the PFG NMR method on two systems: aqueous solutions of HA with bovine serum albumin (BSA) and aqueous solutions of HA with hen egg-white lysozyme (HEWL). It was found that the presence of BSA in the HA aqueous solution initiates a certain additional mechanism; as a result, the population of HA molecules in the gel structure increases to almost 100%. At the same time, for an aqueous solution of HA/HEWL, even in the range of low (0.01–0.2%) HEWL contents, strong signs of degradation (depolymerization) of some HA macromolecules were observed, such that they lost the ability to form a gel. Moreover, lysozyme molecules form a strong complex with degraded HA molecules and lose their enzymatic function. Thus, the presence of HA molecules in the intercellular matrix, as well as in the state associated with the surface of the cell membrane, can, in addition to its known roles, perform one more important function: protecting the cell membrane from the destructive action of lysozymes. The obtained results are important for understanding the mechanism and features of the interaction of extracellular matrix glycosaminoglycans with cell membrane proteins.
Introduction
The plasma membrane of mammalian cells is a highly dynamic structure, whose biomechanical properties play a vital role in the regulation of many functions of the living cell, such as adhesion, migration, signal transmission and others [1]. It is believed that one of the most dynamic processes within these membranes is the formation of fine structures, which, in turn, are involved in intercellular adhesion [2]. There are works that experimentally show that it is these fine structures that provide the pathway for intracellular and intercellular communication [3][4][5].
Many molecular components, including various proteins, may be involved in the formation of intercellular (intermembrane) connections, which, in turn, may affect the characteristics of the cell membranes themselves. Among the various components of the extracellular matrix, polysaccharides (glycosaminoglycans) play an important role, as they have the greatest variability and represent the most dynamic structures in tissues. Many enzymes are known to specifically "adapt" proteoglycan molecules during pathophysiological processes [6]. For example, the tumor necrosis factor alpha-stimulated protein TSG-6 can covalently bind to HA, which in turn promotes the transfer of the inter-alpha-trypsin inhibitor chain to the COOH bonds of HA. As a result, this transfer leads to the formation of a complex called serum hyaluronan-associated protein (SHAP), which is involved in many pathologies [7]. It follows from the works [7,8] that protein complexes with HA have the ability to change not only the local properties of the membrane itself but also, acting as an external cytoskeleton, to modify and control the shape of the cell.
All NMR measurements were performed at 298 K on a 400 MHz Bruker Avance-III™ spectrometer equipped with a gradient system that allowed for a maximum gradient, g, of 28 T/m (i.e., 2800 G/cm). Temperature was calibrated using a set of test samples with known diffusion coefficients. Self-diffusion coefficients (hereinafter referred to simply as diffusion coefficients) were measured using the stimulated-echo pulse sequence (PGSTE) [31]. 1 H experiments were performed using 48 different values of g, a gradient pulse duration δ of 1 ms, times between the leading edges of the gradient pulses ∆ of 50 and 300 ms, a time interval τ between the first and the second radiofrequency pulses of 6 ms, and a recycle delay of 15,000 ms.
The measurement of self-diffusion coefficients of molecules by NMR is based on the registration of the loss of phase coherence of the molecules' spins due to spatial displacements of the molecules in the magnetic field gradient [32]. If the translational mobility of molecules is not limited, the distribution of their spatial displacements is described by a Gaussian function: P s (r, r′, t) = (4πD s t) −3/2 exp(−(r − r′) 2 /(4D s t)), (1) where P s (r, r′, t) is the conditional probability density, or "propagator", of spin detection at the radius vector r′ at time t, if at the initial time point the spin was at the radius vector r; D s is the self-diffusion coefficient (SDC) of the molecules.
The primary information in the PFG NMR method was obtained from the analysis of the diffusion decay A(τ 1 , τ 2 , g, t), i.e., the dependence of the spin echo signal amplitude on the magnetic field gradient parameters and time t. For the stimulated echo sequence, the decay of the signal amplitude is determined by the expression: A(τ 1 , τ 2 , g, t) = (A 0 /2) exp(−2τ 1 /T 2 − τ 2 /T 1 ) exp(−γ 2 δ 2 g 2 D s (∆ − δ/3)), (2) where A 0 is the initial amplitude of the echo signal, τ 1 and τ 2 are the time intervals between the first and second, and second and third RF pulses, respectively, T 1 and T 2 are the nuclear magnetic relaxation times, γ is the gyromagnetic ratio of protons, δ and g are the duration and amplitude of the magnetic field gradient pulses, ∆ is the time interval between two successive gradient pulses, and the expression (∆ − δ/3) is the diffusion time t d . A direct determination of the self-diffusion coefficient can be made from the slope of the envelope of the echo amplitudes (the diffusion decay), which has the form of a straight line in the coordinates lg(A(g 2 )/A(0)) vs. γ 2 g 2 δ 2 t d . In the case of a non-exponential form of the diffusion decay, the decay of the echo signal amplitude can be described analytically by an expression of the form: A(g 2 )/A(0) = Σ i p i exp(−γ 2 g 2 δ 2 D si t d ), (3) where p i is the "weight" coefficient of the i-th exponent, characterized by the effective self-diffusion coefficient D si . The NMR method with a pulsed magnetic field gradient makes it possible to measure the SDC of molecules in the range of 10 −8 ÷ 10 −15 m 2 /s.
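The slope analysis described above can be sketched numerically: for a two-component decay of the form (3), the terminal slope of ln(A/A(0)) versus γ²g²δ²t d recovers the smallest SDC, and the intercept of that line recovers its weight. All parameter values below are illustrative, not the paper's data:

```python
import numpy as np

gamma = 2.675e8   # proton gyromagnetic ratio, rad s^-1 T^-1
delta = 1e-3      # gradient pulse duration, s
t_d = 0.05        # diffusion time, s
g = np.linspace(0.0, 5.0, 60)            # gradient amplitudes, T/m

# two-component decay, Eq. (3): a fast ("free") and a slow fraction
p = np.array([0.3, 0.7])                 # weights
D = np.array([1e-10, 1e-12])             # SDCs, m^2/s
k = (gamma * delta * g) ** 2 * t_d       # Stejskal-Tanner variable, s m^-2
A = (p[:, None] * np.exp(-np.outer(D, k))).sum(axis=0)

# on the tail of the decay the fast component has died out, so the slope
# of ln(A) vs k approaches -D_slow and the intercept approaches ln(p_slow)
tail = k > 0.6 * k.max()
slope, intercept = np.polyfit(k[tail], np.log(A[tail]), 1)
D_slow_est = -slope            # close to 1e-12 (small residual bias remains)
p_slow_est = np.exp(intercept) # close to 0.7
```

The same tail-slope procedure is what the paper refers to as taking "the tangent of the envelope" of the diffusion decay.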
Interaction of Bovine Serum Albumin with Hyaluronic Acid Molecules
To investigate the translational mobility of each molecular component of the "BSA + HA + Water" system by PFG NMR in the spectrally resolved mode, and to isolate the characteristic signals of the protein and polysaccharide molecules and obtain the corresponding spin echo signals, proton NMR spectra of the studied system and of aqueous solutions of the initial components, BSA and HA, were recorded ( Figure 1).
In the proton NMR spectrum of the BSA solution, the signals characteristic of the protein lie in a fairly wide range of chemical shifts: according to [33], the signals in the range of chemical shifts characteristic of aliphatic groups, 3.1-2.8 ppm, are due to the presence of Cys34 (C β H 2 groups); signals in the region of 2.08-1.98 ppm are attributed to the protons of amino acids such as glutamine Gln33 (signal of the C γ H 2 ) and proline Pro35 (signal of the C γ H 2 ); signals in the chemical shift ranges of 8.2-7.5 and 7.1-6.9 ppm should be attributed to the signals from the C ε H and C γ H 2 groups of histidines, respectively. The region at 7.3-6.6 ppm corresponds to the signals of aromatic rings of tyrosine residues [34].
The recorded diffusion decays of the spin echo signal for HA molecules in an aqueous solution at a polysaccharide concentration of 0.75% (wt.) have a complex form, while for HA molecules characterized by a minimum SDC of about ∼ 10 −14 m 2 /s, there are clearly signs of restricted diffusion. First, some of the HA molecules are characterized by D smin values which depend on the diffusion time t d . Second, this dependence corresponds to the mode of completely restricted diffusion, D smin ∝ t d −1 . This follows from the coincidence of the finite slopes of the diffusion decays shown in Figure 2B in the coordinates lg(A(g 2 )/A(0)) vs. γ 2 g 2 δ 2 t d , as well as from the dependence D smin (t d ) itself ( Figure 2C).
As a result of a comparative analysis of the spectra presented in Figure 1, it can be concluded that the spectrum of an aqueous solution of BSA and HA contains, as expected, signals of both the protein and hyaluronate. In the proton spectrum of an aqueous solution of HA, the signal in the region of 1.9 ppm is, according to [35], characteristic of the protons of the methyl (-CH 3 ) N-acetyl group of hyaluronate. Signals located in the region between 3.8 and 3.0 ppm correspond to signals from protons of the disaccharide units of HA. Thus, when studying the "BSA + HA + Water" system, we could obtain data on the translational mobility of both the protein and HA molecules.
It can be supposed that such a change in the population of the gel component is associated with the manifestation of the lability of the gel formed by HA molecules. Such behavior, namely, limitation of diffusion of macromolecules in meshes formed by natural polymers, along with changes in the population of molecules involved in the formation of supramolecular structures, has already been observed [10,37].
In systems such as gelatin [36], this phenomenon is due to the formation of a supramolecular gel structure. Thus, the dependence of the SDC on diffusion time shown in Figure 2C allows us to conclude that HA molecules in an aqueous solution at a concentration of 0.75% (wt.) form a supramolecular structure, a three-dimensional gel network. This state of the HA molecules means that the root-mean-square (rms) displacement remains constant, since ⟨r 2 ⟩ ∼ t d 0 , as follows from the Einstein relation: ⟨r 2 ⟩ = 6D s t d . (4) Estimation of the restriction size, or the size of the gel grid formed by the HA molecules, by Formula (4) gives the value ⟨r 2 ⟩ 1/2 = 0.314 ± 0.016 µm. The second important result is that the fraction of HA molecules characterized by the sign of completely restricted diffusion (Expression (4)) depends on the diffusion time. This is demonstrated by the comparison of the curves shown in Figure 2B. Figure 2D directly shows the dependence of the population p min on t d .
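The restriction-size estimate can be illustrated numerically. This is a sketch assuming the three-dimensional Einstein relation ⟨r²⟩ = 6·D s ·t d ; the numbers below are chosen to reproduce the quoted 0.314 µm figure and are not measured data:

```python
import numpy as np

r2 = (0.314e-6) ** 2                    # assumed constant <r^2>, m^2
t_d = np.array([0.05, 0.1, 0.2, 0.3])   # diffusion times, s

# fully restricted diffusion: the apparent SDC falls off as 1/t_d
D_app = r2 / (6.0 * t_d)                # m^2/s

# recovering the restriction (gel mesh) size from each (D_app, t_d) pair
size = np.sqrt(6.0 * D_app * t_d)       # rms displacement, m
# every entry equals 0.314 um, and D_app * t_d is constant
```

The signature of full restriction is precisely that the product D_app·t d (and hence the recovered size) is independent of the diffusion time.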
The dependence shown in Figure 2D can be approximated by the function: p min (t d ) = p min (0) exp(−t d /τ). (5) The dotted line shown in Figure 2D corresponds to Expression (5) at the values τ = 415 ± 31 ms and p min (0) = 0.9 ± 0.03. Thus, from the results of the study of self-diffusion of HA molecules in an aqueous solution with a HA concentration of 0.75%, it follows that 90% of the HA molecules form a gel network, and 10% are in a free state. The observed dependence p min (t d ) is itself a consequence of the molecular exchange of HA molecules between the free state and the state in the gel net. Within the given reasoning, the obtained value τ can be interpreted as the lifetime of HA molecules in the gel state. The obtained characteristics of the translational mobility of HA molecules in aqueous solution can serve as a certain reference for the study of more complex molecular systems containing additional protein components. Figure 3 shows the diffusion decay of the spin echo signal for "BSA + HA + Water" in the spectrally resolved mode.
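The exponential approximation of the population decay, consistent with the two fitted parameters quoted above (p min (t d ) = p min (0)·exp(−t d /τ)), can be sketched as a log-linear least-squares fit. The data points below are synthetic, generated from the quoted τ and p min (0) rather than taken from the experiment:

```python
import numpy as np

tau_true, p0_true = 0.415, 0.9      # s and dimensionless, the quoted fit values
t_d = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # diffusion times, s
p_min = p0_true * np.exp(-t_d / tau_true)              # synthetic populations

# linearize: ln p_min = ln p0 - t_d / tau, then ordinary least squares
slope, intercept = np.polyfit(t_d, np.log(p_min), 1)
tau_est = -1.0 / slope              # lifetime of HA in the gel state, s
p0_est = np.exp(intercept)          # gel population extrapolated to t_d -> 0
```

With noise-free data the fit returns the generating parameters exactly; with real, noisy p min (t d ) values the same two-line fit gives the estimates together with their least-squares uncertainties.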
As a result, we can judge unambiguously enough about the translational mobility of each of the components of the "BSA + HA + Water" system. The translational mobility of BSA molecules in the solution with HA is characterized by a single self-diffusion coefficient equal to 3.7 × 10 −11 m 2 /s, which is quite close to the value for freely diffusing protein molecules in a "BSA + Water" solution with the same concentration (4%) of protein, equal to 4.28 × 10 −11 m 2 /s. At the same time, the lack of dependence of the shape of the diffusion decay of the BSA molecules on the diffusion time, shown in Figure 3B, is further evidence of the unrestricted diffusion of BSA molecules in the aqueous HA solution.
In [38], where the translational mobility of BSA in solution with HAs was also studied, a 1.5-times decrease in the BSA SDC was registered compared to the SDC of the protein in water solution at the same concentration. The authors of this work suggest that this effect is a consequence of the formation of complexes between BSA molecules and HAs. In our opinion, such a hypothesis has the right to exist, although, as in [38], it is not possible to register a direct sign of complex formation, specifically, the establishment, at least for some albumin molecules, of the SDC values coinciding with the SDC values of HA molecules. Nevertheless, the observed decrease in the SDC values of BSA molecules in the solution with HA as compared to the aqueous solution of BSA cannot be explained only by the influence of the restrictions of the HA polymer chains due to a too low (0.75%) concentration of HA. Hence, it is reasonable, as well as in [38], to assume the formation of BSA-HA complexes, which, however, are characterized by short lifetimes.
Note that in [38], it was not established in what state the hyaluronate molecules are in the "BSA + HA + Water" system. To determine this state, we needed to obtain and analyze the diffusion decays of the spin echo signal for the HA molecules by integrating the signals located in the region of chemical shifts from 1.8 to 3.8 ppm (the spectrum shown in Figure 1).
Thus, the obtained dependence of the diffusion decay of the spin echo signal in a water solution of HA and BSA (Figure 4A) shows a decrease in the slope of the diffusion decay with increasing diffusion time, which indicates a decrease in the minimum SDC with increasing t_d for a part of the HA molecules. Figure 4B shows the diffusion decays related only to HA molecules. From this figure, it is well seen that all diffusion decays in the presented coordinates coincide within the experimental error. Figure 5 below shows the dependence of the SDC of HA molecules in water BSA solution on the diffusion time t_d. The experimentally obtained dependence of the minimum SDC (D_s) on the diffusion time t_d for a sample of a water solution of 0.75% (wt.) hyaluronic acid in the presence of 4% BSA indicates that the self-diffusion coefficient D_smin is inversely proportional to the diffusion time; consequently, HA molecules in water solution with BSA are in a completely restricted state.
The dependence of the self-diffusion coefficient on the diffusion time shown in Figure 5 allows us to conclude that the HA molecules in the presence of BSA in aqueous solution form a supramolecular structure similar to that (gel) in aqueous solutions of HAs. However, the restriction size (gel mesh size) formed by the HA molecules in the presence of BSA, calculated by Formula (5), turned out to be 0.362 ± 0.019 µm, about 14% larger than the same value for the sample of aqueous HA solution. Therefore, in an aqueous solution of HAs with BSA, the HA molecules form a gel structure that is less "rigid" in its characteristics than the gel formed in an ordinary aqueous solution of HAs.
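A minimal sketch of how a restriction size follows from D_s ∝ 1/t_d, assuming the standard completely restricted limit in which the mean-square displacement saturates, ⟨r²⟩ ≈ 6·D_s·t_d = d² (the paper's Formula (5) may use a different prefactor):

```python
import math

def mesh_size(d_s, t_d):
    """Restriction size from the completely restricted diffusion limit,
    where the mean-square displacement saturates: <r^2> ~ 6*D_s*t_d = d^2.
    (Assumed form; the paper's Formula (5) may differ in prefactor.)"""
    return math.sqrt(6.0 * d_s * t_d)

# Illustrative check: if D_s is inversely proportional to t_d, the
# inferred restriction size is the same at every diffusion time.
t_ds = [0.1, 0.2, 0.4]                        # diffusion times, s
d_target = 0.362e-6                           # m, size reported for HA + BSA
sizes = [mesh_size(d_target**2 / (6 * t), t) for t in t_ds]
print([round(s * 1e6, 3) for s in sizes])     # → [0.362, 0.362, 0.362]
```

This is exactly the signature described in the text: a constant cage size manifests as D_smin falling off as 1/t_d.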
However, another experimental fact is more interesting: in the HA system with BSA, we could not find HA molecules with signs of free diffusion. In addition, the independence of the form of the diffusion decays (Figure 4B) for HA molecules in the "HA + BSA" system from the diffusion time demonstrates, in contrast to the data shown in Figure 2B, the absence of any signs of molecular exchange. This can be interpreted as the absence of the very "phase" with which such exchange could take place. In other words, it can be argued, within experimental accuracy, that the presence of the BSA protein in the system initiated some additional mechanism that caused all HA molecules to form a gel structure. If any free HA molecules remain in the system, their share is negligibly small.
No direct evidence of BSA-HA complex formation could be found, but the observed decrease in the SDC of BSA molecules in the presence of a rather small (0.75%) amount of HA can be formally interpreted as a consequence of BSA-HA complex formation with a relatively short lifetime. At the same time, the presence of BSA molecules quite noticeably affected the characteristics of the gel structure formed by HA molecules. This result confirms that there is a certain interaction mechanism between BSA molecules and HAs, to which the translational mobility characteristics of the high-molecular-weight component (HA) are quite sensitive.
Interaction of Hen Egg-White Lysozyme (HEWL) with Hyaluronic Acid Molecules
Compared to BSA, HEWL is a more "active" protein, as it exhibits antimicrobial activity as a lytic enzyme [39]. It is a protein with a molecular weight of 14.3 kDa and an isoelectric point of 11.35, making it cationic (total charge +7) at neutral pH [40].
A typical proton spectrum of the HEWL solution is shown in Figure 6, in which, similarly to the spectrum of the globular BSA protein solution, one can observe signals in the region of chemical shifts from 7 to 9 ppm. The translational mobility of HEWL in an aqueous solution at a protein concentration of 4% is characterized by an SDC of 8.6 × 10⁻¹¹ m²/s. Similar to the BSA solution, the diffusion decay of the spin echo signal of the HEWL solution is monoexponential (Figure 6B), without signs of any association of protein molecules.
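For context, a single SDC such as the one quoted for HEWL is conventionally obtained from the Stejskal-Tanner relation A(b) = A(0)·exp(−b·D), with b = (γgδ)²(Δ − δ/3) for a pulsed-field-gradient pair. A minimal sketch (the pulse parameters below are illustrative assumptions, not the experimental settings):

```python
import math

GAMMA_H = 2.675e8  # 1H gyromagnetic ratio, rad s^-1 T^-1

def b_value(g, delta, Delta):
    """Stejskal-Tanner b-factor for a pulsed-field-gradient pair."""
    return (GAMMA_H * g * delta) ** 2 * (Delta - delta / 3.0)

# Synthetic monoexponential decay for a single SDC (as observed for HEWL):
D_true = 8.6e-11                     # m^2/s, value reported for 4% HEWL
delta, Delta = 1e-3, 100e-3          # gradient pulse length and spacing, s
gradients = [0.1, 0.3, 0.5]          # gradient amplitudes, T/m
echo = [math.exp(-b_value(g, delta, Delta) * D_true) for g in gradients]

# A monoexponential decay yields the same D between any pair of points:
b = [b_value(g, delta, Delta) for g in gradients]
D_est = math.log(echo[0] / echo[2]) / (b[2] - b[0])
print(f"{D_est:.2e}")                # → 8.60e-11
```

A multicomponent system would instead show curvature in ln A versus b, which is precisely the diagnostic used below for the HA-containing samples.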
Since HEWL can specifically interact with polysaccharides [24,41], it makes sense to start the study of aqueous solutions of HA-HEWL with minimal protein concentrations. Figure 7 shows the proton NMR spectra of aqueous solutions of HA with different concentrations of HEWL. In the presented NMR spectra, as expected, with an increase in the concentration of HEWL, characteristic signals of NH groups of the protein appear. Figure 8 shows the diffusion decays of the spin echo signal of HA solutions with different concentrations of HEWL, obtained by integrating signals located in the chemical shift region from 1.6 to 3.0 ppm, which does not contain an intense signal from OH groups.
All diffusion decays of the spin echo signal for aqueous solutions of HEWL and HA presented in Figure 8 refer mainly to HA molecules, since for the indicated integration region the signal from HA is dominant compared to the signal from HEWL molecules. As can be seen from Figure 8A, the addition of even a small amount of HEWL produces global changes in the shape of the diffusion decay of the spin echo signal. Figure 8B shows diffusion decays for various diffusion times t_d at 0.2% lysozyme concentration.
From these data, one can see that the part of the diffusion decay characterized by minimum self-diffusion coefficient values depends on the diffusion time, and the character of the dependence is similar to that previously found for HA molecules (see Figures 2A and 4A). In this regard, for the indicated part of the diffusion decay showing signs of restricted diffusion, it makes sense to associate it with the presence of the gel structure caused primarily by HA molecules.
Returning to the discussion of the diffusion decay in Figure 8A, we note that the fraction of the signal with signs of restricted diffusion decreases quite clearly with increasing lysozyme content. The dependence of the fraction of molecules with restricted diffusion signs on the lysozyme protein content is shown in Figure 9. As can be seen from Figure 9, even very low lysozyme protein contents lead to a noticeable decrease in the fraction of HA molecules retaining the ability to form the gel structure. Already at a lysozyme concentration of only 0.2%, the population of such molecules decreases from 90% to about 50%. At the same time, the population of molecules showing no signs of gel formation increases correspondingly. Apparently, this is a manifestation of the enzymatic properties of the lysozyme. According to [23,24], HEWL is able to cleave the glycosidic bonds of polysaccharides due to the presence of aspartic and glutamic acids in its amino acid sequence.
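The restricted fraction read off Figure 9 can be illustrated with a toy two-component decay: the slowly decaying (gel) tail, extrapolated back to b = 0 once the fast free component has vanished, returns the gel population. All parameter values below are illustrative assumptions, not the measured data:

```python
import math

def decay(b, p_gel, d_gel=1e-13, d_free=4e-11):
    """Toy two-component diffusion decay: a slowly diffusing (gel) fraction
    plus a freely diffusing fraction. All parameters are illustrative."""
    return p_gel * math.exp(-b * d_gel) + (1 - p_gel) * math.exp(-b * d_free)

# Estimate the restricted fraction by extrapolating the slow tail
# (taken where the fast component has already decayed away) back to b = 0.
estimates = []
for p_gel_true in (0.9, 0.5):          # ~0% vs ~0.2% lysozyme cases
    b_tail = 5e11                      # s/m^2: here exp(-b*d_free) ~ e^-20
    tail = decay(b_tail, p_gel_true)
    estimates.append(round(tail * math.exp(b_tail * 1e-13), 2))
print(estimates)                       # → [0.9, 0.5]
```

The same tail-extrapolation logic underlies the drop of the gel population from 90% to about 50% reported in the text.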
Attempts to obtain a mixture of HAs and HEWL with more than 0.2% HEWL (for instance, 0.5%) in aqueous solution produced a translucent, muddy precipitate, which did not disappear with either mechanical/thermal treatment of the sample or attempts to change the acidity (pH) of the aqueous solution. Apparently, this is due precisely to the fact that HEWL is able, under certain conditions, to form coacervates via liquid-liquid phase separation (LLPS) [42]. For example, coacervation of a polyelectrolyte/protein complex is the separation of a solution into two phases due to nonspecific electrostatic interactions [43,44].
At present, coacervates are of considerable interest because they form spontaneously from aqueous mixtures and provide stable compartmentalization without the need for a membrane. In addition, the authors of [45] have illustrated the mechanism of cytoplasm organization arising from clusters of weakly "sticky" molecules, including other assemblies of ribonucleoproteins (e.g., P-bodies, Cajal bodies, or stress granules), by the example of a germline P-granule localization study [46,47]. The authors of [45] also suggested that such phase structuring may represent the initial mechanism of functional self-assembly of relatively undeveloped molecular ensembles at the early stages of life evolution.
Overall, unlike aqueous solutions of BSA-HA and native solutions of hyaluronate, HA molecules in aqueous solutions of the lysozyme undergo severe degradation. Thus, even with a lysozyme content of 0.2%, the proportion of HA molecules retaining the ability to form a gel structure decreases almost twofold. Coacervate formation with increasing protein content cannot be interpreted otherwise than as a consequence of the formation of a complex of HAs with the lysozymes. A similar formation of complexes between proteins and HAs was established earlier for the silk fibroin/HA system [48], as well as for a mixture of HAs and IgG from bovine serum (Bovine IgG) [49]. The authors explain the formation of phase-separated coacervates as the result of weak multivalent interactions between biomacromolecules; however, despite this understanding of the influence of molecular interactions on the formation and properties of protein-polyelectrolyte coacervates, much remains unexplored. In this context, let us consider our experimental data on the translational mobility of lysozyme molecules in solutions with HAs in more detail. Figure 10 shows diffusion decays for lysozyme molecules in an aqueous solution with a protein concentration of 0.2%, as well as in a mixture with HA at the same protein concentration. For comparison, the same figure shows the diffusion decay for that part of the HA molecules that have degraded and lost signs of restricted diffusion.
As can be seen from Figure 10, the diffusion decay (curve 1) for lysozyme molecules in aqueous solution at a protein concentration of 0.2% is described by an exponential function with a single SDC value, which was found to be 6.9 × 10⁻¹¹ m²/s. At the same time, in solution with HA at the same (0.2%) concentration of lysozyme, the diffusion decay for protein molecules (curve 2) has a more complex form, and its initial slope is described by a significantly lower average SDC value (1.4 × 10⁻¹¹ m²/s). Thus, in contrast to the BSA protein, for which only a relatively small decrease in the SDC value resulted from interaction with HA, in this case we see a significant (almost five-fold) decrease in the translational mobility of lysozyme molecules in the presence of HA. Even more interesting is the result of comparing the diffusion decay of lysozyme molecules (curve 2) with the diffusion decay (curve 3) for that part of the HA molecules which, as mentioned above, were degraded by the lysozymes and lost their ability to form a gel structure. The indicated diffusion decays coincide within the experimental error. Such coincidence of the translational mobility characteristics of the lysozyme molecules and the degraded part of the HA molecules unambiguously testifies to the formation of a sufficiently strong HEWL-HA complex. In other words, this result suggests that during the interaction of lysozymes with HAs, as a result of the cleavage of the glycosidic bonds in the HA molecule by the active amino acids of lysozyme [50,51], the lysozyme molecule does not remain free but is attached in some way to one of the parts of the hydrolyzed HA molecule. In contrast to the common model [51,52], this result agrees with the results of [53], in which a study of aqueous dextran solutions showed that after glycosidic bond hydrolysis by the lysozymes, covalently bound protein-dextran complexes are found.
In conclusion, we note that the data presented in Figure 9 do not depend on the exposure time of the sample. This fact, combined with the established formation of a complex between lysozyme molecules and degraded HA molecules, indicates that lysozyme molecules lose their enzymatic activity after interaction with HA. This conclusion allows us to hypothesize that HA molecules have an additional function: neutralizing an enzyme such as this lysozyme. Moreover, since HA molecules, as mentioned above, are associated with the outer surface of the cell, they can form the first line of cell defense against the penetration of lysozyme molecules into the membrane. In particular, this conclusion is supported by earlier works [54,55], in which some polysaccharides were found to noticeably decrease the effects of the lysozymes on the membrane of Gram-negative bacteria.
Conclusions
The characteristics of translational mobility obtained by NMR with PFG demonstrated the peculiarities of the interaction of HAs with bovine serum albumin (BSA) and hen egg-white lysozyme (HEWL). The translational mobility characteristics of HAs demonstrate marked effects of the presence of BSA protein in the system. First, the presence of BSA initiated some additional mechanism, as a result of which 100% of HA molecules formed the gel structure. In addition, the recorded decrease in the SDC value of BSA molecules resulting from the interaction with HAs can be interpreted as a consequence of the formation of short-lived BSA-HA complexes.
On the contrary, in the HEWL-HA system, more significant effects of protein interaction with hyaluronate are observed. Thus, the lysozyme acts as an enzyme, hydrolyzing the glycosidic bond of the polysaccharide. As a result, some of the HA molecules are degraded (cleaved into fragments of lower molecular weight) such that they lose the ability to form a gel structure. This effect is noticeable even at very low protein concentrations. As the protein concentration increases, the proportion of degraded HA molecules increases, but, importantly, the other part of the HA molecules retains its characteristics, including the ability to form a gel structure. The most important result, in our opinion, is the finding that the lysozyme molecules, in the process of hydrolyzing the polysaccharide, do not remain free but form a strong complex with parts of the degraded HA molecules and thereby acquire the same characteristics of translational mobility.
With further increases in the concentration of HEWL in HA aqueous solution, the effect of HA/HEWL coacervate formation appears, due to which phase separation occurs in the system. Thus, the presence of the lysozymes in HA aqueous solution demonstrates not only the ability of HEWL to cleave the HA polymer chain, but also the ability to form intermolecular complexes with HA parts.
An important result of the interaction of lysozyme molecules with HA is the neutralization of the enzymatic activity of the lysozymes, which is probably due to the formation of the lysozyme-HA complex. Thus, HA molecules demonstrate the possibility of performing a protective function against the penetration of the lysozymes into the cell membrane.
In general, the results obtained are important enough for a deeper understanding of the mechanisms and functions of the membrane system as a whole.
"year": 2023,
"sha1": "3cd5f34ce5eda8c664f2a70d28dc2acbeaf18658",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0375/13/4/436/pdf?version=1681549427",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3607657e00896817fddb2c9c90bdfe31c5cca554",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": []
} |
Lethal disseminated intravascular coagulation induced by primary and metastatic neuroendocrine prostate cancer
Neuroendocrine prostate cancer has a poor prognosis. Although disseminated intravascular coagulation associated with malignancy can be lethal, it very rarely occurs among patients with primary neuroendocrine prostate cancer.
Introduction
Several malignancies can cause DIC, which can be lethal in cases with advanced malignancies [4]. DIC can also occur in patients with advanced PCa, depending on its progression. However, DIC rarely occurs in patients with primary NEPC. We herein report a case involving an 80-year-old man who developed DIC due to primary and metastatic NEPC.
Case presentation
An 80-year-old man presented to the respiratory medicine department at our hospital with bloody sputum. Initial examination revealed hypertension, hypercholesterolemia, and hyperuricemia, but the patient and his family did not have any history of malignancies. Chest CT found no cause for his bloody sputum, and no otorhinolaryngological bleeding was detected by an otorhinolaryngologist. Blood examination results revealed pancytopenia, suggesting malignancy (Table 1). The patient's serum PSA level 2 years prior to presentation was 2.168 ng/mL but increased to 15.0 ng/mL during a local medical examination 1 month prior to presentation and then again to 44.274 ng/mL upon presentation to our internal medicine department, which prompted referral to our urology department. His serum NSE and soluble interleukin-2 receptor levels were 176 ng/mL and 694 U/mL, respectively. Further examinations revealed normal levels of serum carcinoembryonic antigen, squamous cell carcinoma antigen, carbohydrate antigen 19-9, and pro-gastrin-releasing peptide. Laboratory data suggested the presence of DIC based on the diagnostic criteria established by the Japanese Society on Thrombosis and Hemostasis (2017 edition) (DIC score = 6 [cutoff value, ≥6]; Table 1) [4]. A rectal examination detected a deep, hard, and irregular mass in the prostate, and pelvic CT revealed an irregular mass at the base of the prostate and multiple metastatic lesions in the lymph nodes, bone, and lungs (Fig. 1a,b). A bone scan found no significant tracer accumulation (Fig. 1c).
The patient was advised admission for core needle biopsies of the prostate and the left iliac bone tumor. The biopsies were performed without any severe adverse events using 12,800 units of thrombomodulin alfa per day, administered before each biopsy for a total of two doses. A total of 10 core samples, including four cores from the irregular mass, were obtained from the prostate. Biopsies of the irregular prostatic mass and the metastatic mass at the left iliac bone revealed similar small cell NEC, whereas biopsy of the mid-prostate revealed typical adenocarcinoma (Gleason score 3 + 4) (Fig. 2). Immunostaining characteristics determined from the biopsies suggested left iliac bone metastasis from a primary NEPC (Fig. 3). The patient was ultimately diagnosed with DIC due to primary and metastatic NEPC. Unfortunately, invasive endoscopic examinations, such as gastroscopy, colonoscopy, and bronchoscopy, could not be performed owing to his physical condition. A comprehensive explanation regarding the disease, its prognosis, and treatment options (ADT, platinum-etoposide chemotherapy, and supportive care) was provided to the patient and his family. However, the patient opted for supportive care only, without ADT, stating that he had lived long enough and had suffered from shortness of breath. Accordingly, pain relief treatment using morphine was initiated, and the patient passed away 3 weeks after the biopsies. The family did not consent to an autopsy. At the time of death, the patient's serum PSA and NSE levels were 148.7 and 255 ng/mL, respectively.

[Figure 2 caption: Similar small cell carcinomas were detected in the prostate and left iliac bone tumors (a, b, e, and f). The arrow shows carcinoma cells present in the blood vessels (e). Given the negative immunostaining findings for PSA and the positive findings for NSE, an immunostaining marker for NEC, these carcinoma cells were diagnosed as NEC (c, d, g, and h). The tumor at the mid-prostate was found to be a typical adenocarcinoma, with a Gleason score of 3 + 4 (i), PSA positivity (j), and NSE negativity (k).]
Discussion
Although NEC can arise in every organ, it is most frequently observed in the lungs, followed by the small intestine, rectum, pancreas, stomach, appendix, and colon. NEPC is a very rare disease often diagnosed in its advanced stages, given that routine prostate examinations frequently overlook this disease. Furthermore, reports have shown that NEPC progresses more rapidly than does typical adenocarcinoma of the prostate [1]. Moreover, serum PSA levels do not reflect the status of NEPC, which is androgen-independent, considering that PSA is a product of prostatic androgen metabolism [1]. Although NSE is a marker of NEC, its sensitivity for detecting early-stage NEC remains insufficient. Therefore, no effective approach exists for detecting early-stage NEPC. Our patient underwent regional medical PSA examinations, which revealed no remarkable findings until 2 years ago. His serum PSA levels then increased rapidly within a few weeks. We hypothesize either that the NEC destructively infiltrated the prostate, triggering an increase in PSA levels through prostate cell destruction, or that the adenocarcinoma had spread throughout the patient's body, worsening his condition. Unfortunately, we could not confirm either hypothesis given that an autopsy could not be performed.
DIC can be classified into three subtypes according to its mechanism: "suppressed fibrinolysis," "balanced fibrinolysis," and "enhanced fibrinolysis" [4,5]. DIC with suppressed fibrinolysis affects several organs and has been mainly associated with sepsis, whereas DIC with enhanced fibrinolysis causes several bleeding symptoms and has been mainly associated with leukemia, vascular diseases, and PCa. Other solid cancers can cause DIC with balanced fibrinolysis. The decrease in our patient's platelet counts and fibrinogen levels, together with the evaluation of his FDPs and prothrombin time, indicated DIC with enhanced fibrinolysis. Furthermore, the alveolar hemorrhage, which caused hemoptysis, was determined to be a symptom of DIC.
Treating the cause of malignancy-associated DIC is imperative considering the correlation between DIC prognosis and direct treatment of the cause. Localized malignancies can be treated via resection or radiation; however, metastatic malignancies are challenging to treat. Moreover, only a few cases of DIC with untreated metastatic PCa have been reported. 6 ADT has been shown to improve the prognosis in these patients given its remarkable effectiveness as a systemic therapy for typical PCas, such as adenocarcinoma, which is androgen-dependent. Therefore, DIC associated with ADT-naïve PCa may not necessarily be lethal, at least in the short term. Following the patient's rejection of ADT after a thorough discussion, we could not strongly recommend ADT considering that immunostaining for PSA suggested that the NEC cells obtained from the bone metastasis had almost no sensitivity to ADT.
The prognosis of NEPC remains poor, with a median progression-free survival and overall survival of 2-8 and 8-19 months, respectively. 7 Platinum- and etoposide-based chemotherapy is used as first-line treatment. 8,9 Fujimoto et al. summarized the available second-line agents, which included amrubicin, irinotecan, docetaxel, everolimus, and olaparib, with several ongoing clinical trials being conducted on NEPC. 7 Given the lack of an established second-line treatment, physicians should consider personalized treatment approaches for each patient with NEPC. However, chemotherapy places a considerable strain on patients whose physical condition is already poor due to disease progression. Therefore, no standard treatment has currently been established for patients with DIC caused by primary and metastatic NEPC, which can be lethal.
Conclusion
The treatment of DIC caused by primary and metastatic NEPC remains challenging given the current lack of satisfactory treatments for metastatic NEPC. Hence, a well-tolerated treatment regimen for patients with metastatic NEPC in poor physical condition is urgently needed.
Fig. 1
Fig. 1 CT and bone scan. (a) Axial abdominal and pelvic CT images obtained upon hospitalization. An irregular mass was identified at the base of the prostate (striated arrow). Para-aortic lymph node metastases and bone metastases were visualized (white arrows). (b) Axial chest CT images obtained upon hospitalization revealed lung metastasis (white arrow). Inflammatory changes could be visualized at the lung periphery. (c) A whole-body bone scan demonstrated no hot spots suggestive of metastasis; the hot spots on the ribs suggested old fractures.
Fig. 2
Fig. 2 Pathological findings I. Pathological findings of the core needle biopsy samples obtained from the tumors (a-d) at the base of the prostate, (e-h) left iliac bone, and middle of the prostate (i-k). Samples were stained with hematoxylin and eosin (a, b, e, f, and i) and immunostained for PSA (c, g, and j) and NSE (d, h, and k). Similar small cell carcinomas were detected in the prostate and left iliac bone tumors (a, b, e, and f). The arrow shows the carcinoma cells present in the blood vessels (e). Given the negative immunostaining findings for PSA and positive findings for NSE, an immunostaining marker for NEC, these carcinoma cells were diagnosed as NEC (c, d, g, and h). The tumor at the mid-prostate was found to be a typical adenocarcinoma, with a Gleason score of 3 + 4 (i), PSA positivity (j), and NSE negativity (k).
Fig. 3
Fig. 3 Pathological findings II. Pathological findings of the core needle biopsy samples obtained from the tumors (a-d) at the base of the prostate and (e-h) at the left iliac bone. The samples were immunostained for synaptophysin (a, e), chromogranin A (b, f), CAM5.2 (c, g), and Ki67 (d, h). Immunostaining for markers of NEC, such as synaptophysin, chromogranin A, and CAM5.2, came back positive (a-c and e-g). These characteristics were similar for the prostate and bone masses. These findings suggest that the bone tumor was a metastatic lesion, with the primary NEC originating from the prostate. The immunostaining positivity rate for Ki67 was >50%, suggesting highly proliferative NEC cells (d, h).
Table 1
Laboratory test findings and DIC score based on the diagnostic criteria established by the Japanese Society on Thrombosis and Hemostasis (2017 edition). | 2024-03-03T19:45:00.677Z | 2024-02-25T00:00:00.000 | {
"year": 2024,
"sha1": "0280e64c2854835837ea7b44097902030c32b9b0",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/iju5.12712",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "ce25753d9c8a8e850e5cdd1a383155719e99c1eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266727785 | pes2o/s2orc | v3-fos-license | Evaluation of the relationship between occupational-specific task performance and measures of physical fitness, cardiovascular and musculoskeletal health in firefighters
Introduction Firefighters are required to perform physically strenuous tasks such as hose drags, victim rescues, forcible entries and stair climbs to complete their public safety mission. Occupational-specific tasks are often used to evaluate the ability of firefighters to adequately and safely perform their duties. Depending on the region, occupational-specific tasks include six to eight individual tasks, which emphasize distinct aspects of physical fitness, while also requiring different levels of cardiovascular (CVH) and musculoskeletal health (MSH). Therefore, the aim of this study was to evaluate the relationship between specific occupational task performance and measures of physical fitness, cardiovascular and musculoskeletal health. Methods Using a cross-sectional design, 282 full-time male and female firefighters were recruited. A researcher-generated questionnaire and physical measures were used to collect data on sociodemographic characteristics, CVH, MSH and weekly physical activity habits. Physical measures were used to collect data on physical fitness and occupational-specific task performance. Results Absolute cardiorespiratory fitness (abV̇O2max), grip strength, leg strength, push-ups, sit-ups and lean body mass (all p < 0.001) had an inverse association with completion times on all occupational-specific tasks. Age was positively associated with completion times on all tasks (all p < 0.05). Higher heart rate variability (HRV) was associated with better performance on all tasks (all p < 0.05). Body fat percentage (BF%) and diastolic blood pressure were positively associated with completion time on the step-up task (p < 0.05). Lower back musculoskeletal injury (LoBMSI), musculoskeletal discomfort (MSD), and lower limb MSD were associated with decreased odds of passing the step-up. Upper body MSIs (UBMSI), LoBMSIs and lower back MSD were associated with decreased odds of passing the rescue drag.
Conclusion Firefighters that were taller, leaner, stronger and fitter with a more favourable CVH profile, higher HRV and less musculoskeletal discomfort performed best on all occupational-specific tasks. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-023-17487-6.
Introduction
Firefighting is a strenuous and challenging occupation where firefighters are required to be prepared, at all times, to respond to fire and rescue emergencies. Some of these emergencies, especially those on the fire ground, require high levels of physical exertion, which often entail coping with environmental stressors such as high temperatures, physical hazards and dangerous chemicals and fumes [1][2][3]. The harsh environments often require firefighters to be encapsulated in personal protective equipment (PPE), placing an additional burden on an already strained cardiovascular and musculoskeletal system [3][4][5]. The strenuous work conditions of firefighting necessitate that firefighters maintain peak physical conditioning to manage these various and, often, unpredictable high-demand environments and situations [5][6][7].
Although firefighting elicits near-maximal physiological responses, placing significant strain on the cardiovascular system, studies have found that firefighters often have multiple cardiovascular disease (CVD) risk factors and poor overall cardiovascular health (CVH) [8][9][10]. The cardiovascular risk profile of firefighters progressively worsens as they age [11,12]. In addition, despite many firefighters possessing the ability to perform the necessary work-related tasks required in firefighting, many firefighters are reported to not meet the minimum physical fitness levels required for the profession [3,[13][14][15][16], placing an additional burden on an already strained cardiovascular and musculoskeletal system [1-3, 5, 6]. Low levels of CVH and physical fitness are prominent precursors contributing to the high incidence of cardiac events and over-exertion related incidents, which account for 40 to 50% of all on-duty fatalities among firefighters [1,2,4]. To cope with the physiological and psychological stressors of the job, firefighters need good cardiovascular and musculoskeletal health (MSH) and an acceptable level of physical fitness [3,5,6].
Previous research has indicated that age and obesity were associated with significantly reduced occupational performance of firefighters, particularly for duties requiring heavy lifting and dragging [3,5,17,18]. Activities that include a large static component may provoke an exaggerated blood pressure response, especially if the tasks require overhead movements, which may be especially prominent in firefighters suffering from blood pressure irregularities [19][20][21]. Firefighters are encouraged by fire departments to remain physically active to ensure they maintain an adequate level of physical fitness. Previous studies have indicated that cardiorespiratory fitness may be the most important factor contributing to adequate occupational performance [22,23]. In addition, a higher level of muscle strength and endurance has been shown to improve occupational performance, particularly for tasks involving heavy lifting, dragging, pulling and breaching [3,5,6,18]. An added benefit of firefighters remaining physically active is the preservation of MSH, which constitutes a major concern in the profession [24,25]. Deterioration of MSH, which is common in firefighters, may reduce occupational performance due to guarding of the painful area [26,27] or reduced force production as a protective mechanism. Firefighting requires firefighters to adopt awkward movement patterns to perform their duties, while carrying asymmetrical loads [27][28][29]. It has been suggested that previous musculoskeletal injuries (MSIs) or current MSD may impact firefighters' effectiveness in performing specific body movements [26]. Thus, firefighters are required to maintain high levels of work functioning in all occupational-specific tasks [27,30,31].
To assess firefighters' occupational performance, fire departments use simulation protocols designed to replicate the duties that firefighters are required to perform [5,6,32,33]. Each occupational-specific task reflects a core or critical task that firefighters are required to perform, such as the forcible entry, hose drag, ladder raise and victim rescue [3,5,6]. The performance of each task is timed to ensure firefighters are able to complete their duties with sufficient rigour and intensity. In addition, to pass the occupational-specific tasks, firefighters are required to complete each task within a given time limit. Several studies have assessed the relationship between physical fitness [3,5,6,18], specific CVH [3,18,34] and MSH [27] parameters and occupational performance in firefighters. However, there remains a need to evaluate the relationship between performance on each of the individual occupational-specific tasks and measures of physical fitness, CVH and MSH, warranting further investigation. Determining the factors influencing specific firefighter task performance in this population may highlight the tasks firefighters are most likely to fail and assist in the establishment of intervention strategies to help firefighters improve their performance. Therefore, the aim of this study was to evaluate the performance of occupational-specific tasks in association with firefighters' physical fitness, CVH and MSH.
Study design and population
A cross-sectional study design was employed to collect information on occupational performance, using occupational-specific tasks (based on the physical ability test, PAT), physical fitness (cardiorespiratory fitness, muscular strength and endurance, flexibility, and body composition), CVH (CVD risk factors, CVH metrics, heart rate variability) and MSH (MSIs and MSD) in firefighters. In total, 309 full-time male and female firefighters from the City of Cape Town Fire and Rescue Service (CoCT-FRS), ranging in age from 20 to 65 years, took part in the study. From the original 309 firefighters, 283 agreed to participate in the occupational-specific tasks on the day of testing. Amongst the 282 that performed the occupational-specific tasks, 268 completed all occupational-specific tasks that were part of the PAT. However, 18.7% failed to complete the occupational-specific task battery in the required time or failed to complete all tasks. In addition, three firefighters failed to complete the first task (step-up). All volunteers for this study provided written informed consent before proceeding. Data collection took place from June to August of 2022. The University of the Western Cape's Biomedical Research Ethics Committee gave its approval (ethical clearance number: BM21/10/9). The Chief Fire Officer and the Department of Policy and Strategy also approved the research.
Sampling and participant recruitment
Data collection took place during annual physical fitness assessments at a standardized fire station located in the City of Cape Town (CCT) metropolitan area to ensure consistency in the terrain, environmental conditions and testing surface. To ensure the consistency and reliability of the testing results, all physical measures and the occupational-specific tasks were collected and recorded by trained researchers who were familiarised with all the testing instruments and research procedures [35]. Every third firefighter from the 96 platoons (32 fire stations) was selected using random systematic sampling. The 96 firefighter platoons each had 8 to 12 members. All firefighters that were between the ages of 20-65 years were eligible to participate in the study. Firefighters who were on administrative duty, sick leave, worked part-time or seasonally, or did not participate in the PAT on the day of testing were excluded from the study.
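The sampling scheme above (every third firefighter per platoon roster) can be sketched as follows. This is a minimal illustration, not the study's actual procedure: the roster names and the choice of a random start offset within the first three positions are assumptions for demonstration.

```python
# Sketch of random systematic sampling: pick a random start within the
# first `step` positions of each roster, then take every `step`-th member.
import random

def systematic_sample(roster, step=3, rng=None):
    """Return every `step`-th member of `roster` after a random start."""
    rng = rng or random.Random()
    start = rng.randrange(step)      # random start offset in [0, step)
    return roster[start::step]

platoon = [f"FF{i:02d}" for i in range(1, 13)]   # hypothetical 12-member platoon
sample = systematic_sample(platoon, step=3, rng=random.Random(42))
print(sample)    # four firefighters, evenly spaced through the roster
```

With a 12-member roster and step 3, exactly four members are selected regardless of the random start, which keeps the sampling fraction constant across platoons of equal size.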
Occupational-specific tasks
The occupational-specific tasks were used to assess operational performance and were conducted according to the testing protocol of the CoCT-FRS wellness manual. The CoCT-FRS worked with professionals in the field to establish the occupational-specific tasks as part of the fitness and wellness programme. The occupational-specific tasks were intended to replicate the numerous tasks that firefighters are required to carry out, while also attempting to replicate the physical strains to which firefighters are frequently exposed. Firefighters were required to complete the entire simulation protocol in under 9 min (540 s), which included the allowed 20 s of recovery between tasks. Firefighters wore their full PPE and breathing apparatus set in order to pass. The simulation included six tasks, which were used to simulate the various stressors firefighters are placed under. These tasks encompassed the step-up, charged hose drag and pull, forcible entry, equipment carry, ladder raise and extension and the rescue drag. Individual occupational-specific tasks each had their own completion times that needed to be met in order to pass the testing battery. Failure to complete a task resulted in firefighters being graded as "not yet competent". The step-up required firefighters to perform 30 step-ups on a standardized platform of 200 mm and had a time limit of 90 s. The charged hose drag and pull required firefighters to drag a tyre 27 m, drop to one knee or into a seated position, and pull a tyre another 15 m, with a time limit of 180 s. The firefighters then moved to the forcible entry task, where they were required to pick up a 6-kg sledgehammer and drive a tyre 600 mm in under 60 s. For the equipment carry, firefighters were tasked to remove two 25 kg foam drums from a 1.2-m platform, carry the foam drums 25 m and walk back another 25 m, placing the drums back on the platform, all of which needed to be completed in under 90 s. For the ladder raise and extension
firefighters were tasked to walk a seven-to-eight-metre ladder toward a building, place the ladder against the building and immediately walk toward a hauling line and hoist a 35 kg drum until it reached the pulley and then lower the drum, within the time limit of 90 s. Firefighters then lowered the ladder and walked the ladder back to the starting position. The rescue drag required firefighters to grasp an 80 kg tyre and drag the tyre 11 m, perform a 180-degree turn and continue for another 11 m toward the finish line in under 60 s. A full description of the occupational-specific tasks can be found in Ras et al. [35].
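The pass criteria described above can be encoded directly. The per-task time limits and the 540 s battery limit are taken from the protocol as described in the text; the assumption that five 20 s rests (one between each pair of consecutive tasks) count toward the total, and the recorded times in the example, are illustrative only.

```python
# Per-task time limits (seconds), as stated in the protocol description.
TIME_LIMITS = {
    "step-up": 90,
    "charged hose drag and pull": 180,
    "forcible entry": 60,
    "equipment carry": 90,
    "ladder raise and extension": 90,
    "rescue drag": 60,
}
TOTAL_LIMIT = 540            # whole battery, including allowed recoveries
REST_BETWEEN_TASKS = 20      # seconds of recovery between tasks

def grade(times):
    """Return 'competent' only if every task and the battery total pass."""
    each_ok = all(times[t] <= TIME_LIMITS[t] for t in TIME_LIMITS)
    total = sum(times.values()) + REST_BETWEEN_TASKS * (len(times) - 1)
    return "competent" if each_ok and total <= TOTAL_LIMIT else "not yet competent"

demo = {"step-up": 60, "charged hose drag and pull": 120,
        "forcible entry": 35, "equipment carry": 70,
        "ladder raise and extension": 75, "rescue drag": 40}
print(grade(demo))   # all tasks within limits; total 400 s + 100 s rest = 500 s
```

Note that a firefighter can fail either way: by exceeding any single task limit or by exceeding the 540 s total even with every individual task passed.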
Physical fitness measures
Physical fitness was measured using the American College of Sports Medicine (ACSM) guidelines [36]. Cardiorespiratory capacity was calculated using a validated non-exercise equation [35,37] to estimate oxygen consumption (V̇O2). The push-ups and sit-ups tests were used to assess muscular endurance, handgrip and leg strength tests were used to assess upper and lower body muscle strength, and the sit-and-reach test was used to assess flexibility. Body mass and lean body mass (LBM) were used as measures of body composition and assessed using a bioelectrical impedance analysis (BIA) analyser (Tanita© BC-1000 Plus BIA scale). For a full description of the methods used to assess physical fitness, consult Ras et al. [38].
Classification of physical fitness parameters
For relative cardiorespiratory fitness, 42 mL·kg⁻¹·min⁻¹ [39] was used to indicate the minimum cardiorespiratory fitness needed for firefighting. Cardiorespiratory fitness was expressed as both absolute and relative cardiorespiratory fitness, and odds ratios were calculated on both separately. Due to the absence of standardized minimum requirements for absolute cardiorespiratory fitness, muscular strength, endurance and flexibility, the 50th percentile was used to indicate good levels of physical fitness. Absolute cardiorespiratory fitness was considered the maximum oxygen consumed in one minute and relative cardiorespiratory fitness was considered the oxygen consumed relative to lean body mass [40][41][42]. An absolute cardiorespiratory fitness level of 3.40 L·min⁻¹ was considered "good". For grip and leg strength, firefighters that had a grip strength above 89.9 kg and leg strength above 116.5 kg were considered "good". For push-ups and sit-ups, firefighters that performed 30 or more push-ups and sit-ups were considered "good". For flexibility, a sit-and-reach above 43 cm was considered "good". Firefighters falling below the 50th percentile were classified as having a "low" level of muscular strength, endurance and flexibility.
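The "good"/"low" dichotomy above can be sketched as a simple threshold classifier. The cut-off values are the 50th-percentile figures reported in the text; treating all comparisons uniformly as "at or above the cut-off" is a simplifying assumption (the text mixes "above" and "30 or more"), and the example firefighter's measurements are hypothetical.

```python
# 50th-percentile cut-offs from the text: value at/above => "good".
CUTOFFS = {
    "abs_vo2max_L_min": 3.40,    # absolute cardiorespiratory fitness
    "grip_strength_kg": 89.9,
    "leg_strength_kg": 116.5,
    "push_ups": 30,
    "sit_ups": 30,
    "sit_and_reach_cm": 43,
}

def classify(measures):
    """Label each fitness metric 'good' or 'low' against its cut-off."""
    return {k: ("good" if measures[k] >= CUTOFFS[k] else "low")
            for k in CUTOFFS}

example = {"abs_vo2max_L_min": 3.6, "grip_strength_kg": 85.0,
           "leg_strength_kg": 120.0, "push_ups": 34,
           "sit_ups": 28, "sit_and_reach_cm": 45}
print(classify(example))
```

Because the cut-offs are sample percentiles rather than external standards, the same classifier applied to a different cohort would need its thresholds re-derived from that cohort's distribution.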
Cardiovascular health measures
Cardiovascular health (CVH) was investigated using several approaches. These approaches included three main subcomponents, specifically traditional CVD risk factors, CVH metrics and heart rate variability (HRV). Using standardized techniques [36], height was measured with a stadiometer, waist and hip circumference were assessed using a tape measure, and body fat percentage (BF%) was calculated using a BIA scale. The traditional CVD risk factors included age, obesity, physical inactivity, dyslipidaemia, diabetes, hypertension and cigarette smoking. Cardiovascular health metrics were used to classify firefighters' cardiovascular health index (CVHI). The CVH metrics included smoking status, blood pressure, non-fasting blood glucose (NFBG), total cholesterol (TC), an ideal/good body mass index (BMI), level of physical activity, and diet. In addition, CVHI was classified as "poor" if firefighters had zero to two metrics classified as ideal, "intermediate" if firefighters had three to four metrics classified as ideal and "good" if firefighters had five to seven metrics rated as ideal. The 2008 Framingham risk model, developed by D'Agostino et al. [43], was used to assess the cardiovascular disease risk of firefighters. In addition, to determine the cardiovascular disease risk among firefighters, the American College of Cardiology (ACC) 10-year atherosclerotic cardiovascular disease (ASCVD) and ASCVD lifetime risks were calculated [44,45]. For HRV, a Polar™ (Polar Electro Oy, Kempele, Finland) H10 heart rate monitor was used at rest, while firefighters were in a seated position, and analyzed using the Kubios© software, version 3.4.3. Moreover, the following HRV measures were collected: standard deviation of all normal-to-normal intervals (SDNN); root-mean-square of successive differences (RMSSD); low frequency (LF); high frequency (HF); and the low-to-high frequency ratio (LF/HF) [46,47]. For more information on the methods used to assess CVH, as well as the classifications of CVD risk factors and CVH metrics, please refer to Ras et al. [48].
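The two time-domain HRV indices reported in this study (SDNN and RMSSD) have standard definitions and can be computed from a series of normal-to-normal (RR) intervals. This is a minimal sketch, not the Kubios implementation: the RR series below is synthetic, and SDNN is computed here as the population standard deviation over the full recording.

```python
import math
import statistics

def sdnn(rr):
    """Standard deviation of all NN intervals (ms), population form."""
    return statistics.pstdev(rr)

def rmssd(rr):
    """Root mean square of successive NN-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 830, 818, 795, 801, 824]   # synthetic RR series, ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # → 13.2 19.2
```

SDNN reflects overall variability across the whole recording, whereas RMSSD weights beat-to-beat changes and is the usual short-term, vagally mediated index, which is consistent with both being reported separately in the tables.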
Classification of musculoskeletal health
Musculoskeletal health was subdivided into musculoskeletal injury (MSI) and musculoskeletal discomfort (MSD) status, which was further separated into those that sustained an injury while on duty and those that did not, and those experiencing MSD and those without. Musculoskeletal injury and discomfort were measured subjectively via two validated questionnaires, namely the Cornell Musculoskeletal Discomfort Questionnaire [49] and the Nordic Musculoskeletal Questionnaire. Subcategories for those that reported MSIs and MSD were categorized based on the location of the MSI or the MSD experienced, specifically upper body MSI (UBMSI), lower body MSI (LBMSI), lower back MSI (LoBMSI), upper body MSD (UBMSD), lower body MSD (LBMSD) and lower back MSD (LoBMSD).
Statistical analysis
The data were analysed using SPSS® software, version 28 (Chicago, Illinois, USA). Descriptive statistical analyses, such as the median and 25th and 75th percentiles, were performed. Thereafter, group comparisons used the Mann-Whitney U and Kruskal-Wallis H tests. Univariable and multivariable linear regressions were performed to determine the independent variables associated with the occupational-specific tasks, i.e., step-up, charged hose drag and pull, forcible entry, equipment carry, ladder raise and extension and rescue drag, which were considered the outcomes (dependent variables) in firefighters. Completion time for each task was recorded to the nearest second. Univariable and multivariable logistic regressions were performed to determine the independent variables associated with the occupational-specific task pass rates. Pass rates were calculated from predetermined cut-off values. Exploratory physical fitness variables included abV̇O2max, relV̇O2max, grip strength, leg strength, push-ups, sit-ups, and LBM. Exploratory CVH variables included age, BMI, BF%, WC, SBP, DBP, TC, NFBG, weekly MET minutes and Framingham risk score. Exploratory variables for MSH included MSI, upper body musculoskeletal injury (UBMSI), lower body musculoskeletal injury (LBMSI), lower back musculoskeletal injury (LoBMSI), MSD, lower back musculoskeletal discomfort (LoBMSD), upper body musculoskeletal discomfort (UBMSD) and lower body musculoskeletal discomfort (LBMSD). Multivariable model 2 was adjusted for age, sex, height and weekly metabolic equivalent (MET) minutes. For variables which remained significant, additional multivariable models were run where covariates included physical fitness, CVH and MSH. In addition, to reduce the number of independent variables and the likelihood of multicollinearity, principal components analysis (PCA) was run on physical fitness and CVH variables to discern the variables explaining the most variability in physical fitness and CVH. The Direct Oblimin rotation was preferred due
to the data being correlated. The PCA output for both physical fitness and CVH explained > 60% of the variance in each and was used in the multivariable regression models [50]. To control for collinearity, the VIF and Durbin-Watson statistics were used. A VIF < 5 was used to indicate that no substantial collinearity was present, and a Durbin-Watson statistic between 1.5 and 2.5 indicated that no autocorrelation was present. Linear least absolute shrinkage and selection operator (LASSO) regression was also used to build a prediction model for each physical fitness and CVH parameter to reduce the number of predictors (n = 19). To ensure cross-validation of the model and evaluate its predictive ability, a five-fold cross-validation method was used. For reporting, the more parsimonious model within 1 standard error of the optimal model was preferred. Only indicators (physical fitness and CVH) with non-zero coefficients were reported. For data that were not normally distributed, the data were fractionally ranked and then normalized using the inverse distribution function (IDF.NORMAL) transformation [51]. A p-value of < 0.05 was used to indicate statistical significance.
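The rank-based inverse-normal step described above (fractional ranks mapped through the normal inverse CDF, mirroring SPSS's RANK followed by IDF.NORMAL) can be sketched as follows. The specific fractional-rank formula (r − 0.5)/n is an assumption here; SPSS offers several rank-proportion estimators, and the sample values are illustrative.

```python
from statistics import NormalDist

def inverse_normal_transform(x):
    """Map values to normal scores via fractional ranks and the inverse CDF."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ranks = [0.0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = NormalDist()
    # (r - 0.5)/n keeps the proportions strictly inside (0, 1)
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]

data = [3.2, 8.1, 5.5, 2.9, 7.4]
z = inverse_normal_transform(data)
print([round(v, 2) for v in z])   # → [-0.52, 1.28, 0.0, -1.28, 0.52]
```

The transformation preserves the ordering of the raw values while forcing an approximately normal marginal distribution, which is what makes the subsequent parametric regression models defensible for skewed fitness variables.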
Results
In Table 1 we present data on all six occupational-specific tasks based on participant characteristics. Times to complete all occupational-specific tasks were significantly different between male and female firefighters (p < 0.001), with males performing better than females. Based on age group, performance times for the individual occupational-specific tasks were significantly different between age categories (p < 0.001). Firefighters with good grip strength (p < 0.01), leg strength (p < 0.001), push-ups (p < 0.001) and sit-ups (p < 0.001) had significantly shorter completion times on all individual occupational-specific tasks. Older firefighters had significantly longer completion times on all occupational-specific tasks (p < 0.01), except the forcible entry. Firefighters that were obese, had central obesity, or were physically inactive had significantly longer completion times for all the occupational-specific tasks (p < 0.01). Firefighters that reported UBMSIs had longer completion times on the step-up and ladder raise and extension tasks (p < 0.05). Firefighters that reported LoBMSIs had longer completion times on the step-up, charged hose drag and pull and the ladder raise and extension (p < 0.05), and firefighters with LoBMSD had longer completion times on the ladder raise and extension (p < 0.05).
In Table 2 we indicate the association between demographic characteristics, physical fitness, cardiovascular health and occupational-specific task performance. Multivariable analyses indicated that an increase in abV̇O2max was associated with shorter completion times for the step-up, charged hose drag and pull, forcible entry, equipment carry, ladder raise and extension and the rescue drag. An increase in grip and leg strength was associated with shorter completion times for the charged hose drag and pull, forcible entry, and equipment carry. In addition, grip strength was associated with shorter ladder raise and extension and rescue drag completion times. An increase in push-ups and sit-ups capacity was associated with shorter completion times for the step-up, charged hose drag and pull, forcible entry, equipment carry and rescue drag. An increase in LBM was associated with shorter completion times in the charged hose drag and pull, forcible entry, equipment carry and rescue drag tasks.
For CVH, in the multivariable analyses, an increase in age was associated with an increase in the completion times of the step-up, charged hose drag and pull, ladder raise and extension, equipment carry, forcible entry and the rescue drag. An increase in height was associated with a decrease in completion times for the step-up, charged hose drag and pull, ladder raise and extension, equipment carry, forcible entry and the rescue drag. An increase in BMI and BF% was associated with an increase in the step-up completion time only. An increase in SBP was associated with a shorter completion time in the charged hose drag and pull only. An increase in weekly MET minutes was associated with shorter completion times in the charged hose drag and pull, forcible entry, equipment carry and rescue drag, respectively. An increase in HRV, SDNN and RMSSD was associated with shorter completion times for all occupational-specific tasks (all p < 0.01). After adjustment for age, sex, height and weekly MET minutes, HRV and SDNN remained significantly associated with shorter completion times for all occupational-specific tasks.
In Table 3 we further delineate the interrelationships between physical fitness, cardiovascular health and occupational-specific task performance. For physical fitness, after adjustment for CVH and MSH, abV̇O2max, grip strength, leg strength, sit-ups and LBM remained significantly associated with all tasks (all p < 0.01).
Push-ups capacity remained significantly associated with all tasks, except the step-up (all p < 0.001). Based on CVH, after adjustment for physical fitness and MSH, an increase in age was associated with slower completion times in the charged hose drag and pull, equipment carry and the rescue drag tasks. An increase in BMI was associated with slower completion times in the charged hose drag and pull (p < 0.01) and the ladder raise and extension (p < 0.01). An increase in DBP was associated with slower completion times in the step-up (p < 0.05) and equipment carry (p < 0.05). Framingham risk score was associated with slower completion times in the charged hose drag and pull (p < 0.001), equipment carry (p < 0.01) and rescue drag task (p < 0.01). In Model 3, an increase in SDNN and RMSSD was associated with faster completion times in the step-up (p < 0.05), and for the equipment carry, increases in HRV, SDNN and RMSSD were associated with faster completion times (all p < 0.05).
In Table 4, multivariable analysis was conducted to determine the association between physical fitness, cardiovascular health, musculoskeletal health and occupational-specific task performance, controlling for all covariates. Based on physical fitness, multivariable analysis in Model 1 showed that an increase in abV̇O2max remained significantly associated with faster completion times in the step-up, charged hose drag and pull, forcible entry, equipment carry, ladder raise and extension and rescue drag, and relV̇O2max remained significantly associated with the step-up task. An increase in grip strength was associated with faster completion times in the charged hose drag and pull, forcible entry, equipment carry, ladder raise and extension and the rescue drag task. Leg strength was associated with faster completion times in all tasks. Increased push-ups capacity was associated with faster completion times for all tasks (all p < 0.01), except the step-up. An increase in sit-ups capacity was associated with a decrease in completion times in the step-up, charged hose drag and pull, forcible entry and rescue drag tasks. Lean body mass was associated with a decrease in completion times in all tasks, except the step-up task. Based on CVH, in Model 2, an increase in BMI was associated with a decrease in completion times of the step-up and charged hose drag and pull. An increase in BF% was associated with faster completion times for the forcible entry, equipment carry and rescue drag tasks. In Model 3, an increase in SDNN and RMSSD was associated with a decrease in completion time of the step-up, and an increase in HRV and RMSSD remained associated with faster completion times in the equipment carry task.
In Table 5 we describe the associations between physical fitness, CVH and pass rates, using the predetermined cut-off times for each of the individual tasks. Firefighters who had a good abVȮ2max had increased odds of passing the step-up (OR = 4.0), equipment carry (OR = 2.9), ladder raise and extension (OR = 2.8) and the rescue drag (OR = 1.9), respectively. Firefighters with good leg strength had increased odds of passing the forcible entry (OR = 11.6), equipment carry (OR = 1.9) and ladder raise and extension (OR = 1.9), respectively. Firefighters with good push-ups capacity had increased odds of passing the equipment carry (OR = 3.1), ladder raise and extension (OR = 3.1) and rescue drag (OR = 3.1). Firefighters with good sit-ups capacity had increased odds of passing the step-up (OR = 3.6), equipment carry (OR = 2.2), ladder raise and extension (OR = 4.3) and rescue drag (OR = 2.4), respectively. For CVH, in the multivariable analyses, obese firefighters had decreased odds of passing the step-up task; those with a high BF% had decreased odds of passing the step-up (OR = 0.3), ladder raise and extension (OR = 0.4) and rescue drag (OR = 0.4), respectively. Physically inactive firefighters had decreased odds of passing the step-up (OR = 0.1), ladder raise and extension (OR = 0.5) and the rescue drag (OR = 0.3), respectively. Firefighters with an intermediate CVHI had increased odds of passing the equipment carry (OR = 2.1), ladder raise and extension (OR = 1.6) and the rescue drag (OR = 2.9), respectively, compared to firefighters with a poor CVHI. For MSH, upper body injuries (OR = 0.5) and low back injuries (OR = 0.3) decreased the odds of passing the rescue drag task. Firefighters that reported MSD and lower limb discomfort had decreased odds of passing the step-up (OR = 0.4 and 0.2, respectively). Low back discomfort decreased the odds of firefighters passing the rescue drag (OR = 0.4).
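An odds ratio of the kind reported above can be computed from a simple 2×2 table; the counts below are hypothetical and chosen only so that the unadjusted odds ratio equals 4.0, similar in magnitude to the step-up result, with a Woolf-type confidence interval on the log scale.

```python
import numpy as np

# Hypothetical 2x2 counts: rows = good vs poor fitness, cols = pass vs fail
table = np.array([[80, 20],    # good fitness: 80 pass, 20 fail
                  [50, 50]])   # poor fitness: 50 pass, 50 fail

odds_good = table[0, 0] / table[0, 1]          # 4.0
odds_poor = table[1, 0] / table[1, 1]          # 1.0
OR = odds_good / odds_poor                     # unadjusted odds ratio

se_log_or = np.sqrt((1.0 / table).sum())       # Woolf SE of log(OR)
ci = np.exp(np.log(OR) + np.array([-1.96, 1.96]) * se_log_or)
print(round(OR, 1), ci.round(2))               # 4.0 [2.14 7.49]
```

The multivariable odds ratios in Tables 5 and 6 would additionally adjust for covariates via logistic regression; this sketch shows only the unadjusted quantity.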
In Table 6 we further describe the associations between physical fitness, CVH and task pass rates, using the predetermined cut-off times for each of the individual tasks. The multivariable analysis included additional variables of CVH and physical fitness. For physical fitness, firefighters that had a good abVȮ2max had increased odds of passing the step-up (OR = 4.3) and ladder raise and extension (OR = 2.5) tasks. Firefighters with good grip strength had increased odds of passing the forcible entry (OR = 2.4) and ladder raise and extension (OR = 2.5). Leg strength was associated with increased odds (OR = 2.2) of passing the ladder raise and extension task. Good push-ups capacity was associated with increased odds of passing the charged hose drag and pull (OR = 2.9) and the forcible entry (OR = 2.9) tasks. For CVH, obese firefighters had decreased odds of passing the step-up (OR = 0.13), charged hose drag and pull (OR = 0.12), equipment carry (OR = 0.4), and rescue drag (OR = 0.3). Firefighters with an intermediate CVHI had increased odds of passing the equipment carry (OR = 2.9), ladder raise and extension (OR = 1.9)
In Table 7, the LASSO results for key indicators of physical fitness and CVH associated with occupational-specific task performance in firefighters are delineated. The LASSO regression indicated that abVȮ2max, grip strength, sit-ups, LBM, BF% and DBP were significant indicators of step-up completion times, explaining 26.6% of the variance. For the charged hose drag and pull, abVȮ2max, grip strength, leg strength, push-ups, sit-ups, LBM, age, BMI and weekly MET minutes were significant indicators and explained 55.6% of the variance in the task. For the forcible entry, abVȮ2max, grip strength, leg strength, sit-ups, LBM and weekly MET minutes remained significant indicators of completion time, explaining 26.2% of the variance. AbVȮ2max, grip strength, leg strength, push-ups, sit-ups, sit-and-reach, LBM, age, BMI, BF%, HDL-C and weekly MET minutes were significant indicators of performance on the equipment carry and explained 45.3% of the variance in the task. For the ladder raise and extension, abVȮ2max, grip strength, leg strength, sit-ups, LBM, BF% and weekly MET minutes were significant indicators of task completion times and explained 42.1% of the variance. For the rescue drag, abVȮ2max, grip strength, leg strength, push-ups, sit-ups, LBM, age and weekly MET minutes explained 47.2% of the variance in task performance.
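LASSO variable selection of the kind used for Table 7 can be sketched with a small cyclic coordinate-descent solver on simulated data; the predictors, true coefficients and penalty below are invented for the illustration, and this is a teaching sketch rather than the exact solver used in the study's analysis.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO fit by cyclic coordinate descent.

    Minimises (1/2)||y - Xb||^2 + n*lam*||b||_1 for roughly
    standardised columns of X.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_norm2[j]
    return beta

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))          # simulated standardised predictors
beta_true = np.zeros(p)
beta_true[:3] = [-2.0, -1.0, 0.5]    # only three predictors truly matter
y = X @ beta_true + rng.normal(0.0, 1.0, n)

beta_hat = lasso_cd(X, y, lam=0.1)
print(np.flatnonzero(np.abs(beta_hat) > 1e-6))  # indices LASSO retains
```

The L1 penalty shrinks irrelevant coefficients to exactly zero, which is why the tables report only the subset of fitness and CVH parameters retained for each task.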
Discussion
The results of the study indicated that firefighters with higher levels of absolute cardiorespiratory fitness, muscle strength and endurance and a favourable body composition performed all occupational-specific tasks significantly faster and were more likely to pass each task. This is consistent with previous studies where higher levels of physical fitness were related to better occupational-specific task performance in firefighters [3,5,6,52]. In addition, the results indicated that firefighters aged 45 years and older who had a BMI over 30 kg•m−2, higher blood pressure, a worse lipid profile and low HRV were the poorest performers on all the individual occupational-specific tasks. These results corroborate previous research where older and obese firefighters had poorer performance on most occupational tasks [3,5,6]. Moreover, higher blood pressure and a worse lipid profile have been shown to be associated with lower levels of physical fitness [53][54][55], providing a potential explanation for the poorer performance on the individual tasks in this group. In the present study, firefighters that reported sustaining an MSI performed the rescue drag task significantly slower, and those that reported more MSD performed the step-up, charged hose drag and pull and the rescue drag tasks significantly slower. This is consistent with previous studies where MSH was related to more physical and work functioning restrictions [26,27,30].
In the current study, an increase in absolute cardiorespiratory fitness was associated with faster completion times for all occupational-specific tasks and was a key indicator of performance in all occupational-specific tasks, which remained significant after adjustment for CVH and MSH. However, relative cardiorespiratory fitness was related to faster completion times only for the step-up task. Schonfeld et al. [56] reported that relVȮ2max was inversely related to a stair climb (r = -0.627), chopping task (r = -0.324) and the victim rescue (r = -0.447) tasks in firefighters. Similarly, Chizewski et al. [3] found that estimated relVȮ2max was inversely related to the self-contained breathing apparatus (SCBA) crawl (r = -0.530), victim rescue (r = -0.342), hose advance (r = -0.266) and the equipment carry (r = -0.361) tasks. Studies have suggested that occupational tasks that require more time to complete, and that are also more strenuous, require higher levels of cardiorespiratory fitness to perform adequately [3,5,6,52]. Moreover, we found that after adjustment for age, sex, height and weekly MET minutes, CVH and MSH, absolute cardiorespiratory fitness remained significantly related to all tasks. Furthermore, absolute cardiorespiratory fitness, rather than relative cardiorespiratory fitness, contributed more significantly toward overall occupational-specific task performance. A study by Perroni et al.
[57] also found that absolute cardiorespiratory fitness was more strongly correlated with performance of the Queens College Step Field test than relative cardiorespiratory fitness (r = 0.76 vs r = 0.54) when the test was performed in full PPE. The authors noted that using absolute oxygen may be a useful tool when evaluating cardiovascular strain in firefighters while firefighters are in PPE [57]. It is possible that absolute cardiorespiratory fitness may be a valuable measure while firefighters are wearing full PPE, as higher levels of relative oxygen consumption may not necessarily relate to better performance if firefighters lack the necessary muscle mass and strength needed to overcome the additional weight [57,58]. Although being leaner may be more favourable in many cases, a higher overall LBM, reflecting greater muscular mass/strength and a greater ability to utilize oxygen (absolute oxygen utilisation) [59], may explain more favourable performances on each of the occupational-specific tasks. This would suggest that firefighters with a higher LBM, regardless of body weight, and a higher absolute VȮ2max would perform significantly better, likely due to greater oxygen uptake and additional muscular strength to overcome the weight of their PPE [3,33,39,[60][61][62]. This is supported by the results of the present study, where we found that firefighters with a higher LBM had significantly shorter completion times on all occupational-specific tasks. This was further corroborated by Williford et al. [5], Davis et al. [34] and Henderson et al. [58], who reported that higher LBM was negatively associated with individual task completion times. It is likely that firefighters with a higher LBM are taller and heavier, with more muscle mass, all of which have been shown to be related to better performance on all tasks [5,17,39]. Von Heimburg et al.
[62] found that peak VȮ2 could accurately predict occupational performance, and more so when expressed as absolute rather than relative cardiorespiratory fitness. Peak (absolute) VȮ2 may thus be important for faster occupational performance, while for slower, less fit firefighters, accumulated VȮ2 or the ability to sustain a minimum VȮ2 may be crucial in completing their occupational-specific tasks.
We found that higher muscular strength and muscular endurance were associated with shorter completion times for all individual occupational-specific tasks. In addition, these associations remained significant when adjusted for CVH and MSH in the multivariable models. Michaelides et al. [61] reported that push-ups stamina and muscular strength were related to better performance on individual tasks. Williford et al. [5] corroborated these findings, reporting that grip strength was negatively related to the forcible entry task (r = -0.53), equipment hoist (r = -0.55), hose advance (r = -0.41), victim rescue (r = -0.59) and stair climb tasks (r = -0.39). This was further supported by Skinner et al. [18], who reported that higher strength levels in the bench press (r = -0.471) and higher endurance capacity in the push-ups (r = -0.385) were negatively related to the hose drag task. Von Heimburg et al. [62] noted that a minimum standard of muscular strength and endurance is required to perform the occupational tasks acceptably, and muscular strength exceeding this point had progressively less impact on the performance of each task. Moreover, overweight and obese firefighters with higher strength levels did not perform better than firefighters who weighed less but had sufficient strength to overcome the task [62], a finding also reported by Phillips et al. [17]. In the present study, we found that higher sit-and-reach scores were associated with shorter completion times on the equipment carry task. A systematic review [52] reported a significant effect of flexibility on the stair climb task in firefighters. However, results for the relationship between flexibility and task performance are inconsistent in the literature [3,18,52,60].
In the current study we found that as age (and hence years of experience) increased, the completion times for each of the occupational-specific performance tasks increased. However, when adjusted for physical fitness and MSH, these associations were no longer significant. Previous studies have found similar results, indicating that aging was negatively related to occupational-specific task performance in firefighters [3,5,18]. This may be due to the natural age-related decrease in cardiorespiratory fitness, muscular strength and endurance, negatively affecting occupational performance in firefighters [63][64][65][66]. Researchers have argued that older and more experienced firefighters have learned superior techniques that could, at least partially, counteract the age-related decrease in cardiorespiratory fitness [39].
We found that increases in BF% and BMI were associated with significantly slower completion times for all occupational-specific tasks in firefighters, which remained significant after adjustment for physical fitness and MSH. Previous studies reported similar results, where an increase in BF% was related to slower completion times for each task [3,5], particularly the stair climb task, where firefighters are required to traverse stairs carrying their bodyweight in addition to a high-rise pack [56,67]. An increase in body fat represents non-functional mass that increases the effort firefighters must exert to successfully complete each task, which, subsequently, increases the time taken to complete each task [52,60,61]. It is also plausible that obese firefighters ambulate more slowly and less efficiently [68], extending the time to complete tasks that require continual movement, such as the hose drag, equipment carry or victim drag, while also requiring additional time moving from task to task. In addition, it is likely that obese firefighters fatigue more quickly, consequently reducing their overall occupational performance [5,67,69]. The findings of the present study indicated that higher blood pressure was associated with an increase in the step-up, charged hose drag and pull, and equipment carry completion times. Similarly, Davis et al. [34] reported that diastolic blood pressure was positively related to occupational task performance (r = 0.233) in firefighters. In the present study, the step-up, charged hose drag and pull and the equipment carry tasks involved strong isometric and isotonic contractions, which lead to an exaggerated blood pressure response [20,70].
We found that increases in HRV, SDNN and RMSSD were associated with faster completion times for all occupational-specific tasks, and LF range was associated with better performance on all tasks except the forcible entry. After adjustment for physical fitness, CVH and MSH, SDNN and RMSSD remained significantly associated with certain occupational-specific tasks. A study by Lesniak et al. [71] reported that SDNN was negatively related to the hose drag (r = -0.745), ladder raise (r = -0.738) and rescue (r = -0.738) tasks, and the LF/HF ratio was negatively related to the forcible entry task (r = -0.718). Previous studies have also found that higher HRV in firefighters was related to higher physical performance [72,73], sleepiness and higher levels of fatigue [74], and cardiovascular health [75]. Theoretically, firefighters with higher HRV indices would be fitter and healthier, consequently performing better on all of the occupational-specific tasks. The LF range has been reported to be associated with physical fitness levels, stress state and baroreceptor functioning in individuals [76]. This suggests that firefighters in lower stress states are fitter and may perform their duties more efficiently than those in a more stressed state, which has been a proposed theory explaining performance decrements in firefighters [76][77][78]. This becomes particularly evident as firefighters age and become more stressed as a result of being in the profession for a longer period [79,80].
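SDNN and RMSSD are standard time-domain HRV statistics computed from the series of normal-to-normal (NN) inter-beat intervals; a minimal sketch using hypothetical NN intervals follows (note that SDNN is sometimes defined with the population rather than the sample standard deviation):

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of NN intervals (here the sample SD, ddof=1)."""
    return np.std(rr_ms, ddof=1)

def rmssd(rr_ms):
    """Root mean square of successive NN-interval differences."""
    diffs = np.diff(rr_ms)
    return np.sqrt(np.mean(diffs ** 2))

# Hypothetical NN intervals in milliseconds (not study data)
rr = np.array([800.0, 810.0, 790.0, 805.0, 815.0, 795.0])
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # 9.4 15.7
```

SDNN reflects overall variability, while RMSSD emphasises beat-to-beat (vagally mediated) variability, which is why the two indices can associate with task performance somewhat differently.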
We found that taller and heavier firefighters performed significantly better than their lighter and shorter counterparts. This is consistent with a study conducted by Phillips et al. [17], which reported that heavier and, consequently, taller firefighters performed favourably on all simulation tasks except the ladder climb test. Similarly, Williford et al. [5] reported that height and weight were significantly related to all occupational performance task completion times. Taller firefighters would inherently have a higher LBM and, consequently, a higher overall muscle mass and VȮ2max [17,18,81]. Von Heimburg et al. [39] separated participants into fast and slow performers and found that those who performed a rescue operation fastest were taller (by 9 cm) and heavier (by 10 kg) than those who performed the task more slowly.
Firefighters that reported MSIs had slower completion times for the step-up and rescue drag tasks, and those with MSD, particularly in the lower back region, had slower completion times for the step-up, charged hose drag and pull and rescue drag tasks, which remained significant after the addition of physical fitness and CVH as covariates. McDermid et al. [27] reported that MSD was not significantly related to the completion times of the stair climb or hose drag tasks. However, firefighters with severe discomfort took 10 s longer to perform the stair climb compared to those without discomfort. Similarly, Nazari et al. [82] reported that spine pain was related to firefighters reporting the most physical and work limitations. In addition, the current study showed that firefighters who experienced more overall MSD, and those specifically experiencing MSD in the shoulder, upper back, wrist and hand regions, took significantly longer to complete the forcible entry task. Since the forcible entry task requires firefighters to swing a sledgehammer with maximal force [3,5], it is unsurprising that firefighters with MSD in the shoulder, upper back and wrist and hand regions would have the most physical limitations, leading to worse performance. Azmi and Masuri [30] reported that MSD in the upper back, lower back, left wrist and left thigh contributed to 50% of the limitation to functional status in firefighters. Limitations caused by previous injury or current discomfort may contribute toward firefighters guarding the injured or discomforted area [26,27]. Moreover, pain or previous injury may contribute toward reduced force production, leading to worse performance on each task, particularly those requiring weight bearing and placing strain on the lower limbs and low back, such as the step-up, charged hose drag and pull and the rescue drag, as seen in the present study [83].
The results of the LASSO analysis indicated that firefighters with higher cardiorespiratory fitness and muscle endurance capacity, who were stronger and more physically active and had a lower BF% and higher LBM, had the shortest completion times on the step-up, charged hose drag and pull, forcible entry, equipment carry, ladder raise and extension and the rescue drag tasks. Previous studies are consistent with these findings and have shown that stronger, fitter and leaner firefighters performed the stair climb, hose drag and pull, forcible entry, equipment carry, ladder raise and rescue drag tasks significantly quicker than weaker, overweight/obese and less fit firefighters [3,6,18,34,61].
Strengths and limitations
This was the first study to investigate the association between physical fitness, cardiovascular and musculoskeletal health in relation to occupational-specific task performance through a physical ability test performed by firefighters in the CoCTFRS, adding novel findings, particularly in a South African context. The measures for physical fitness, cardiovascular health, and occupational-specific task performance were objectively measured by trained researchers using standardized and validated instruments [35]. There are, however, several limitations to the present study. The first limitation is the cross-sectional study design, which precludes the inference of causal relationships. A second limitation is that female firefighters were underrepresented, limiting the generalizability of the findings to the female firefighter population. Third, cardiorespiratory fitness was measured using a non-exercise estimation rather than laboratory or field testing. Lastly, the multiple comparisons on the relatively small sample size may have increased the possibility of spurious findings.
Conclusion
The present study showed that multiple parameters of physical fitness, cardiovascular health, and musculoskeletal health were related to better occupational-specific task performance in firefighters. Fitter, more active, stronger, and leaner firefighters who had a more favourable cardiovascular health profile and no musculoskeletal health concerns were the best performers on each occupational-specific task. Moreover, firefighters with higher HRV showed faster performance in all occupational-specific tasks, providing novel findings on the relationship between cardiovascular autonomic functioning and work performance in firefighters. The use of HRV may provide a useful, and relatively cost-effective, criterion in assessing the physical fitness, cardiovascular health, and occupational performance of firefighters. Municipal fire departments may use the study's findings to emphasize the necessity of physical fitness and cardiovascular health standards to improve firefighters' occupational performance, as well as to protect the cardiovascular and musculoskeletal health of firefighters and increase the longevity of their careers. Fire departments can enhance the services they offer, lower the risk of civilian casualties, and prevent damage to vital infrastructure by instituting regular physical exercise programs and enforcing a basic fitness standard for all firefighters.
Table 3
Multivariable linear associations between physical fitness, cardiovascular and musculoskeletal health and occupational-specific task performance in firefighters. kg•m−2 Kilogram per meter squared, cm Centimetre, % Percentage, mm Hg Millimetres of mercury, mmol•L−1 Millimole per litre, MET Metabolic equivalents, ms Millisecond, Hz Hertz, BMI Body mass index, WC Waist circumference, SBP Systolic blood pressure, DBP Diastolic blood pressure, NFBG Non-fasting blood glucose, TC Total cholesterol, LDL-C Low-density lipoprotein, HDL-C High-density lipoprotein, SDNN Standard deviation of all normal-to-normal, RMSSD Root-mean-square of successive differences, LF Low-frequency, HF High frequency, LF/HF Low and high frequency ratio, rpm Repetitions per minute
a Multivariable linear regression adjusted for covariates: cardiovascular health and musculoskeletal health
b Multivariable linear regression adjusted for covariates: physical fitness and musculoskeletal health
c Multivariable linear regression adjusted for covariates: physical fitness and cardiovascular health
d Indicates statistical significance < 0.05
e Indicates statistical significance < 0.01
f Indicates statistical significance < 0.001
Table 5
Odds ratios describing the association between physical fitness, cardiovascular and musculoskeletal health and physical ability test task pass rates in firefighters
ab. CRF Absolute cardiorespiratory fitness, rel. CRF Relative cardiorespiratory fitness, LDL-C Low-density lipoprotein, HDL-C High-density lipoprotein, BF% Body fat percentage, UBMSI Upper body musculoskeletal injury, LBMSI Lower body musculoskeletal injury, LoBMSI Lower back musculoskeletal injury, ULMSD Upper limb musculoskeletal discomfort, LBMSD Lower body musculoskeletal discomfort, LoBMSD Lower back musculoskeletal discomfort
a Univariable models using logistic regression
b Multivariable logistic models adjusted for covariates: age, sex, height and weekly metabolic equivalent minutes
c Indicates statistical significance < 0.05
(OR = 4.1) tasks. Firefighters with a good CVHI had an increased odds (OR = 3.4) of passing the rescue drag task. Firefighters with UBMSIs (OR = 0.4), LoBMSIs (OR = 0.2) and LoBMSD (OR = 0.4) had decreased odds of passing the rescue drag task. Firefighters with MSD (OR = 0.3) and LLMSD (OR = 0.1) had decreased odds of passing the step-up task.
Table 2
Linear associations between physical fitness, cardiovascular and musculoskeletal health and occupational-specific task performance in firefighters
Table 6 (continued)
ab. CRF Absolute cardiorespiratory fitness, rel. CRF Relative cardiorespiratory fitness, LDL-C Low-density lipoprotein, HDL-C High-density lipoprotein, BF% Body fat percentage, UBMSI Upper body musculoskeletal injury, LBMSI Lower body musculoskeletal injury, LoBMSI Lower back musculoskeletal injury, ULMSD Upper limb musculoskeletal discomfort, LBMSD Lower body musculoskeletal discomfort, LoBMSD Lower back musculoskeletal discomfort
a Multivariable logistic regression adjusted for covariates: age, sex, height, weekly metabolic equivalents, cardiovascular health and musculoskeletal health
b Multivariable logistic regression adjusted for covariates: age, sex, height, weekly metabolic equivalent minutes, physical fitness and musculoskeletal health
c Multivariable logistic regression adjusted for covariates: age, sex, height, weekly metabolic equivalent minutes, physical fitness and cardiovascular health
Table 7
LASSO-derived multivariable linear regression coefficients to discern key physical fitness and CVH parameters most associated with task performance in firefighters. R2 R squared, CHDP Charged hose drag and pull, FE Forcible entry, EC Equipment carry, LRF Ladder raise and extension, RD Rescue drag, kg•m−2 Kilogram per meter squared, cm Centimetre, % Percentage, mm Hg Millimetres of mercury, mmol•L−1 Millimole per litre, MET Metabolic equivalents, BMI Body mass index, WC Waist circumference, SBP Systolic blood pressure, DBP Diastolic blood pressure, NFBG Non-fasting blood glucose, TC Total cholesterol, LDL-C Low-density lipoprotein, HDL-C High-density lipoprotein, rpm Repetitions per minute
Methanol extract of Melastoma malabathricum (MEMM) has been traditionally used by the Malay to treat various ailments. In an attempt to develop the plant as an herbal product, MEMM was subjected to the subacute and subchronic toxicity and cytotoxicity studies. On the one hand, the subacute study was performed on three groups of male and three groups of female rats (n = 6), which were orally administered with 8% Tween 80 (vehicle control group) or MEMM (500 and 1000 mg/kg) daily for 28 days, respectively. On the other hand, the subchronic study was performed on four groups of rats (n = 6), which were orally administered with 8% Tween 80 (vehicle control group) or MEMM (50, 250, and 500 mg/kg) daily for 90 days, respectively. In the in vitro study, the cytotoxic effect of MEMM against the HT29 colon cancer cell line was assessed using the MTT assay. MEMM was also subjected to the UHPLC-ESI-HRMS analysis. The results demonstrated that MEMM administration did not cause any mortality, irregularity of behaviour, modification in body weight, as well as food and water intake following the subacute and subchronic oral treatment. There were no significant differences observed in haematological parameters between treatment and control groups in both studies, respectively. The in vitro study demonstrated that MEMM exerts a cytotoxic effect against the HT29 colon cancer cell line when observed under the inverted and phase-contrast microscope and confirmed by the acridine orange/propidium iodide (AOPI) staining. The UHPLC-ESI-HRMS analysis of MEMM demonstrated the occurrence of several compounds including quercetin, p-coumaric acid, procyanidin A, and epigallocatechin. In conclusion, M. malabathricum leaves are safe for oral consumption either at the subacute or subchronic levels and possess cytotoxic action against the HT29 colon cancer cells possibly due to the synergistic action of several flavonoid-based compounds.
Introduction
Melastoma malabathricum (family Melastomataceae) has been applied in various folklore medicines to heal diverse forms of maladies [1][2][3]. Scientifically, various reports on M. malabathricum have been published and extensively reviewed by Joffry et al. [4]. With regard to the toxicity of the methanol extract of M. malabathricum leaves (MEMM), only an acute toxicity study has been carried out on MEMM, using the OECD Guideline No. 423 [5]. Interestingly, MEMM was reported to show no toxic effect at the dose of 5000 mg/kg when given orally [5]. Meanwhile, with regard to the cytotoxic effect of M. malabathricum, MEMM has been reported to show cytotoxic activities against several murine (i.e., 3LL (Lewis lung carcinoma cells) and L1210 (leukemic cells)) and human (i.e., K562 (chronic myeloid leukaemia), DU145 (prostatic adenocarcinoma), MCF-7 (mammal carcinoma), and U251 (glioblastoma)) cancer cell lines [6], but not against the Vero cell line (African green monkey, Cercopithecus aethiops, kidney cells) or the mouse fibroblast cell line (L929) [7], respectively.
Along the course of developing a potent natural-based product, toxicity or the adverse effects of the plant extract on living organisms has been one of the major issues. Although M. malabathricum has been given the herbal status in Malaysia and several scientific studies have confirmed its medicinal values, further studies need to be carried out to establish its toxicity profile. In addition, despite the cytotoxic reports of MEMM described above, no attempt has been made to investigate the cytotoxic effect of MEMM against the HT29 colon cancer cell line. Taking into account that (i) MEMM was reported to be safe at 5000 mg/kg when assessed using the acute toxicity model MEMM and (ii) MEMM was cytotoxic only against 3LL, L1210, and U251 cancer cell lines with no report against HT29 colon cancer cell line, the present study was designed to determine the toxic effect of MEMM following its subacute and subchronic oral exposure for 28 or 90 days, respectively, and to evaluate the cytotoxic activity of MEMM against the HT29 cancer cell line.
Sample Collection and Preparation of Crude Extract.
The leaves of M. malabathricum were collected between June and July 2013 from its natural habitat around Serdang, Selangor, Malaysia, based on the voucher specimen (SK 2199/13) deposited earlier in the herbarium of the Institute of Bioscience, Universiti Putra Malaysia. The crude extract of M. malabathricum was prepared as stated by Mamat et al. [8]. In brief, the dried leaves in powder form (200 g) were soaked in 4000 mL methanol for 72 h, and this procedure was performed three times to obtain the supernatant. The supernatant was then evaporated at 40°C under reduced pressure to acquire the crude methanol extract (MEMM). The extract was left in the oven at 40°C to allow the solvent residue to dry and occasionally weighed until a constant weight was obtained.
Experimental Animals.
Sprague-Dawley rats (4 weeks of age; weighing between 120 and 150 g) were purchased from the Faculty of Veterinary Medicine, UPM, and acclimatized in the animal house of the Faculty of Medicine and Health Sciences (FMHS), UPM, under controlled temperature (22 ± 2°C) and 70-80% humidity with a 12 h light/dark cycle. The rats were allowed access to basal diet and water ad libitum and monitored according to the guidelines accepted by the Institutional Animal Care and Use Committee (IACUC), FMHS, UPM [8]. Ethics approval for animal care and use was granted by the IACUC (IACUC no: UPM/IACUC/AUP-R007/2014).
Preparation and Administration of Different Dosages of MEMM.
MEMM was dissolved in 8% Tween 80 to the required dose using a sonicator at 40°C for 5-10 min. Animals were administered orally with the freshly prepared extract or vehicle daily according to their body weight [9].
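Dosing "according to body weight" means the gavage volume for each animal is computed from its weight, the target dose and the stock concentration; a minimal sketch follows, in which the stock concentration and example weights are hypothetical and not taken from the study protocol.

```python
def gavage_volume_ml(body_weight_g, dose_mg_per_kg, stock_mg_per_ml):
    """Volume of extract solution (mL) to administer for one rat.

    The stock concentration here is a hypothetical illustration,
    not the study's actual preparation.
    """
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0  # convert g to kg
    return dose_mg / stock_mg_per_ml

# e.g. a 150 g rat dosed at 500 mg/kg from a 100 mg/mL stock solution
print(round(gavage_volume_ml(150, 500, 100), 2))  # 0.75 mL
```

Computing the volume per animal in this way keeps the mg/kg dose constant across rats of different weights.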
Subacute Toxicity
Study. Eighteen (18) male and eighteen (18) female rats were each randomly allocated into three groups (n = 6) and treated via oral gavage daily for 28 days with 8% Tween 80, 500 mg/kg MEMM, or 1000 mg/kg MEMM. Body weight was recorded weekly and on the 28th day prior to the termination of the experiment. The subacute toxicity study was performed according to OECD Test Guideline 407 [10].
Subchronic Toxicity Study.
Twenty-four male rats were randomly allocated into four groups (n = 6) and treated via oral gavage daily for 90 days with 8% Tween 80, 50 mg/kg MEMM, 250 mg/kg MEMM, or 500 mg/kg MEMM. Body weight was recorded weekly and on the 90th day prior to the termination of the experiment. The subchronic toxicity study was performed according to OECD Test Guideline 408 [11].
Sample Collection.
Depending on the model of toxicity study, the rats were weighed and then anesthetized with ketamine (100 mg/kg; intramuscular (i.m.)) and xylazine (16 mg/kg; i.m.) on the respective 28th or 90th day prior to the collection of blood and tissues [12]. The whole blood was obtained through cardiac puncture and kept in labeled plain and EDTA-containing vacutainer tubes (BD Plymouth, UK). Blood in the EDTA vacutainers was kept at 4°C, while blood in the plain (non-EDTA-containing) vacutainers was subjected to centrifugation (at 1500 g for 3 min) to obtain the serum, which was then kept at −20°C. Following blood collection, the rats were sacrificed by cervical dislocation and then dissected to collect the important organs, namely, the liver, spleen, kidneys, stomach, heart, and lungs. Each organ was rinsed with normal saline, and its weight was recorded before being fixed in 10% buffered formalin for further histopathological study.
Haematological and Biochemical Analysis.
The toxic outcome of subacute or subchronic oral administration of MEMM was evaluated using the samples collected as described above. The whole blood in the EDTA-containing vacutainers was processed within 24 h and then subjected to haematological analysis using an automated analyser (Coulter STKS, Beckman) to yield information on several haematological parameters (i.e., total red blood cell (RBC) count, haemoglobin (Hb), mean corpuscular haemoglobin concentration (MCHC), mean corpuscular volume (MCV), packed cell volume (PCV), total white blood cell (WBC) count, neutrophils, monocytes, lymphocytes, and eosinophils). The serum, which had been kept at −20°C, was thawed at 25°C and processed within two days of blood collection. Biochemical analysis was performed using an automated biochemical analyser (Hitachi 902, Japan) for several biochemical parameters (i.e., alanine aminotransferase (ALT), alkaline phosphatase (ALP), aspartate aminotransferase (AST), creatinine (Crea), urea, and total bilirubin (TBil)). The control group mean values were used as the baseline for comparison with the treatment groups.
Histopathological Study.
The whole rat organs, collected following the subacute and subchronic studies and kept in 10% buffered formalin, were sectioned and prepared as previously described [13]. Sections were stained with Haematoxylin and Eosin (H&E) and then microscopically examined for pathological changes at 10x, 20x, and 40x magnifications.
In Vitro Study
2.3.1. Cell Line Cultivation. Normal mouse fibroblast (3T3) and human colon cancer (HT29) cell lines were procured from the American Type Culture Collection (ATCC). Cells were grown and maintained in Roswell Park Memorial Institute-1640 (RPMI) medium supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin (Pen-Strep). Cells were cultured in a humidified atmosphere of 5% carbon dioxide (CO2) at 37°C in an incubator. Trypsin-EDTA was then used to detach the cells at 80% confluency. The cells were then stained with trypan blue before the cell number and viability were determined using a haemocytometer [14].
Cell Proliferation
Assay. 1 × 10^5 cells (3T3 or HT29) were seeded in a 96-well plate. After 24 h of incubation at 37°C in a 5% CO2 atmosphere, the media were discarded and replaced with fresh complete media (10% FBS and 1% Pen-Strep) containing MEMM dried extract. The dried extract was diluted in dimethyl sulfoxide (DMSO) to prepare the starting concentration of 200 μg/mL, which was then serially diluted to the lowest concentration of 0.156 μg/mL. The negative control was not treated with MEMM. After 72 h of incubation, 20 μL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) in phosphate-buffered saline (PBS) was added to each well. The plate was incubated again for 3 to 4 h. Next, the medium from each well was discarded, and 100 μL of DMSO was added and mixed thoroughly for 5 min to dissolve the purple formazan. The plate was then read using an ELISA reader. A graph of cell viability against concentration was plotted. The inhibitory concentration (IC50) is defined as the concentration of MEMM that caused approximately 50% cell death [14].
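The IC50 read-out from an MTT dose-response curve of this kind can be estimated by interpolating the 50%-viability crossing on a log-concentration scale. The sketch below uses made-up viability values, not the study's data; the two-fold dilution series is also only an assumption about how the range 0.156-200 μg/mL was generated.

```python
import numpy as np

def serial_dilution(top, n, factor=2.0):
    """Serial dilution series (µg/mL) starting at `top`, descending."""
    return [top / factor**i for i in range(n)]

def ic50_loglinear(conc, viability):
    """Estimate IC50 by log-linear interpolation of the 50% crossing.
    `conc` ascending (µg/mL); `viability` in % of untreated control."""
    conc = np.asarray(conc, dtype=float)
    v = np.asarray(viability, dtype=float)
    for i in range(len(conc) - 1):
        if (v[i] - 50.0) * (v[i + 1] - 50.0) <= 0:  # bracket the crossing
            x0, x1 = np.log10(conc[i]), np.log10(conc[i + 1])
            t = (50.0 - v[i]) / (v[i + 1] - v[i])
            return 10 ** (x0 + t * (x1 - x0))
    return float("nan")  # curve never crosses 50%

# Hypothetical dose-response: viability falls as concentration rises
concs = sorted(serial_dilution(200.0, 11))           # ~0.195 ... 200 µg/mL
viab = [98, 97, 95, 93, 90, 85, 78, 68, 55, 45, 30]  # illustrative only
print(round(ic50_loglinear(concs, viab), 1))
```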
Inverted Microscope
Study. HT29 cells were seeded at 1 × 10^5 cells in a 6-well plate and incubated overnight at 37°C in a 5% CO2 atmosphere. After incubation, the media was discarded and substituted with fresh complete media containing MEMM. After 72 h of incubation, morphological changes of the cells were examined using an inverted microscope at 10x, 20x, and 40x actual magnification [15].
Phase-Contrast
Examination. HT29 cells were seeded in a 6-well plate with complete media and incubated at 37°C in a 5% CO2 atmosphere. The media was discarded and substituted with fresh complete media containing MEMM on the next day. After 72 h of incubation, the cellular structure in the bright field was examined by phase contrast using a light microscope at 10x, 20x, and 40x actual magnification [15].
Acridine Orange/Propidium Iodide (AOPI) Staining.
Approximately 1 × 10^5 HT29 cells were seeded in a 6-well plate and incubated for 24 h at 37°C in a 5% CO2 atmosphere. The media was discarded and substituted with fresh complete media containing MEMM, and the cells were incubated again for 72 h. After incubation, the cells were trypsinized with trypsin-EDTA and washed with PBS. 10 μL of cells from each well was placed on a glass slide and mixed with 10 μL of acridine orange (AO) (50 μg/mL) and 10 μL of propidium iodide (PI) (50 μg/mL). The nuclei of viable and dead cells were viewed under a fluorescence microscope for qualitative and quantitative assessment [16].
High-Resolution UPLC-ESI-HRMS Analysis of MEMM.
Chromatographic separation of MEMM was performed on a Dionex Ultimate 3000 RS UHPLC system comprising a UHPLC pump, an autosampler operating at 4°C, and an Exactive Orbitrap mass spectrometer with a heated electrospray ionization probe operating in the negative ionization mode (Thermo Fisher Scientific, San Jose, CA). In brief, reverse-phase separations were carried out using an RP Max column (250 × 4.6 mm, particle size 4.0 μm; Synergi) maintained at 40°C and eluted at a flow rate of 0.3 mL/min with a 25 min gradient of 10-50% of 0.1% acidic acetonitrile in 0.1% aqueous formic acid. The conditions were set as follows: sheath gas at 15 (arbitrary units), aux gas at 20 and sweep gas at 5 (arbitrary units), spray voltage at 3.0 kV, capillary temperature at 350°C, and S-lens RF level at 55 V. The mass range was from 100 to 1500 amu with a resolution of 17,000, FT-MS AGC target at 2e5, FT-MS/MS AGC target at 1e5, isolation width of 1.5 amu, maximum ion injection time of 500 ms, and normalized collision energy at 35%.
Statistical Analyses.
The results obtained were statistically analysed (GraphPad Prism version 5.02) using one-way analysis of variance (ANOVA) followed by the Dunnett post hoc test. Data were expressed as means ± standard error of the mean (SEM) with P < 0.05 as the limit of significance.
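For illustration, a one-way ANOVA of this kind can be run in Python with SciPy on hypothetical group values (all numbers below are made up; the Dunnett post hoc step, available in recent SciPy versions, is omitted here):

```python
from scipy.stats import f_oneway

# Illustrative body-weight gains (g) for control and two MEMM dose groups
# (values are invented; n = 6 per group as in the study design)
control = [82, 90, 85, 88, 79, 86]
memm_500 = [84, 88, 83, 91, 80, 87]
memm_1000 = [81, 89, 86, 85, 78, 90]

f_stat, p_value = f_oneway(control, memm_500, memm_1000)
# P > 0.05 would be reported as an insignificant difference between groups
print(f"F = {f_stat:.3f}, P = {p_value:.3f}")
```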
In Vivo Subacute and Subchronic Toxicity Studies.
In the present study, the subacute and subchronic toxicities of MEMM were assessed in rats after daily oral consumption of MEMM for 28 and 90 days, respectively. The dosages used for the subacute toxicity study were 500 and 1000 mg/kg/day, while for the subchronic toxicity study, the dosages used were 50, 250, and 500 mg/kg/day.
Findings on the Physical Signs, Body Weight, and Food and
Water Consumption. All tested groups of rats were found to be equally healthy throughout both toxicity studies, wherein no signs of changes in behaviour were observed. Interestingly, the administration of MEMM for 28 or 90 days also did not cause any clinical signs of toxicity or mortality in rats. Although weight gain was detected in all rats administered with MEMM throughout the experimental periods, a comparison between the MEMM-treated groups against the control group showed insignificant (P > 0.05) changes in the weight gain at their respective interval (data not shown). In addition, insignificant (P > 0.05) changes in food and water consumption were seen when the MEMM-treated rats were compared to the normal control rats (data not shown).
Relative Organ Weight and Macroscopic Findings.
The relative organ weights of Sprague-Dawley rats following the subacute and subchronic toxicity studies are shown in Tables 1 and 2, respectively. There were insignificant (P > 0.05) changes in the organ weight/body weight ratios for all the collected organs, namely, the liver, kidneys, heart, spleen, lungs, and stomach of MEMM-treated rats in comparison to the normal control rats. Further macroscopic assessments of the harvested organs showed that MEMM pretreatment did not change the colour or induce hypertrophy of those organs when compared against the respective organs of the normal control group (data not shown).
Haematological and Biochemical Findings.
To support the above observations, haematological and biochemical analyses were performed on the blood samples collected from the rats. Analysis of the haematological parameters of the MEMM-treated groups showed insignificant (P > 0.05) differences when compared to the normal control group for both the subacute (Table 3) and subchronic (Table 4) studies. A normal haematological profile was recorded in the MEMM-treated groups in comparison to the normal control group in both toxicity studies.
Similarly, biochemical analysis of the hepatic and renal function parameters demonstrated that MEMM treatment caused insignificant (P > 0.05) changes in all parameters evaluated in both toxicity studies (Table 5), except for the hepatic function parameters of the subchronic toxicity study, whereby a significant (P < 0.05) increase in the level of ALP was observed only in the groups pretreated with 250 and 500 mg/kg MEMM in comparison to the normal control group (Table 6).
Histopathology and Microscopic Findings.
Further support for the above observations was obtained microscopically, wherein all harvested organs pretreated with MEMM in the subacute and subchronic toxicity studies demonstrated no signs of toxicity when compared to the respective normal control group. This conclusion is based on the observation that there was no change in cellular architecture under the light microscope at various magnifications. No pathological signs were documented in the histological analysis of the vital organs of the control group. Thus, not all figures of the collected organs are presented in the manuscript. Only micrographs of selected organs (i.e., liver and kidney) harvested from rats following the subchronic toxicity study are shown (Figures 1(a) and 1(b), respectively).
In Vitro Cytotoxic Study
The cytotoxic potential of MEMM, at concentrations up to 200 μg/mL, was evaluated against HT29 cells using the MTT assay. The extract was serially diluted to produce a concentration range of 0.156-200 μg/mL and then subjected to the assay.
Antiproliferative Effect against HT29 Cell Line.
Treatment with MEMM promoted antiproliferation of the HT29 cells. After 72 h, cell proliferation was decreased to 50% compared to the untreated HT29 cells. The IC50 obtained for MEMM was approximately 100 μg/mL.
Inverted Microscope Observation.
After incubation with MEMM (50, 100, and 150 μg/mL), morphological modifications in HT29 cells were seen at 100 and 150 μg/mL compared to the control untreated cells (Figure 2). Untreated HT29 cells showed normal morphology, while those treated with 100 and 150 μg/mL MEMM exhibited retraction and rounding of cells, with some sensitive cells detached from the surface.
Phase-Contrast Examination.
After incubation with MEMM (50, 100, and 150 μg/mL), morphological modifications of HT29 cells were observed at 100 and 150 μg/mL in comparison to the control untreated cells (Figure 2). Untreated cells possessed normal morphology. In contrast, exposure of HT29 cells to 100 and 150 μg/mL MEMM led to retraction and rounding of cells. In addition, some sensitive cells detached from the surface.
3.6.5. Acridine Orange/Propidium Iodide Staining. Viable HT29 cells were seen with intact DNA and round, green-stained nuclei, whereas early apoptotic cells were indicated by green, fragmented nuclei. In contrast, cells undergoing the late apoptotic or necrotic phase were stained orange and red. It is clear from Figure 2 that an increase in the concentration of MEMM (50, 100, and 150 μg/mL) led to a decrease in the number of viable HT29 cancer cells. Moreover, apoptotic cells also exhibited several other characteristics such as plasma membrane blebbing, nuclear shrinking, and fragmentation.
In addition, the percentage of viable, apoptotic, and necrotic cells was also quantified and presented in Table 7.
AO stained early apoptotic cells green, whereas PI stained late apoptotic and necrotic cells. The percentage of viable HT29 cells decreased once incubated with MEMM, while the percentages of apoptotic and necrotic cells increased after 72 h.
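Quantifying the AO/PI result amounts to converting per-well cell counts into percentages of the total. A minimal sketch, using invented counts (the study's actual quantification is given in Table 7):

```python
# Hypothetical AO/PI counts for one treated well; illustrative only
counts = {"viable": 62, "apoptotic": 28, "necrotic": 10}

total = sum(counts.values())
percentages = {state: 100.0 * n / total for state, n in counts.items()}
for state, pct in percentages.items():
    print(f"{state}: {pct:.1f}%")
```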
3.6.6. UHPLC-ESI-HRMS Profile of MEMM. MEMM was subjected to reversed-phase UHPLC-ESI-HRMS analysis to determine its phytochemical constituents using a gradient mobile phase comprising 0.1% aqueous formic acid and 0.1% acetonitrile. This condition allows for a comprehensive elution of plant analytes within 35 min. Thoroughness in identification was due in part to the high sensitivity of the UHPLC-MS and the Xcalibur processing software. Assignments of metabolites were carried out by comparing the retention time and MS data (accurate mass, isotopic distribution, and fragmentation pattern in the negative ion mode) of the detected peaks with those of compounds reported in the literature or databases. Identification of bioactive compounds was confirmed using standard compounds whenever available in-house. The chromatogram profile of MEMM was obtained at three different wavelengths (250, 320, and 360 nm) (Figure 3(a)). Figure 3(b) shows some of the phytoconstituents identified in MEMM, which include gallocatechin, quercetin-3,4-diglucoside, quercetin, p-coumaric acid, procyanidin A, and epigallocatechin.
Discussion
For centuries, plant-based natural products have been used throughout the world in the treatment of various disorders.
In an attempt to screen for any pharmacological potential, these natural products, whether in the form of an extract, fraction, or compound, are usually subjected to an initial evaluation of their toxic characteristics. Despite the various reports on the pharmacological potential of M. malabathricum as cited by Mohd. Joffry et al. [4], no thorough knowledge concerning the chronic toxicology of this well-known herb has been published. It is also worth mentioning that the traditional use of any medicinal plant does not guarantee its safety for prolonged consumption. Thus, there is a need to obtain data from various models of toxicity studies, such as acute, subacute, subchronic, and, if possible, chronic toxicity studies, on any medicinal plant so as to raise the certainty of its safety to humans, especially when the plant is being considered for development into pharmaceutical products [17]. To achieve this, it is crucial to decide on suitable tests and dosage procedures that will show a sufficient margin of exposure in establishing human safety. Previously, MEMM was reported to be safe when assessed using the acute toxicity assay. Using OECD Test Guideline No. 420, the extract, at 5000 mg/kg, was found to cause no sign of toxicity in the tested rats following 14 days of observation, suggesting that the extract has a lethal dose (LD50) value greater than 5000 mg/kg and that no further acute testing need be conducted [18]. In addition, Roopashree et al. [19] stated that the limit test method should not be used primarily as a means to determine the exact LD50 value, but rather as a means to classify the crude plant extract as safe or nontoxic depending on the dose level at which the animals are expected to survive.
Based on the OECD recommendation on chemical labeling and classification of acute systemic toxicity, MEMM can be assigned class 5 status (LD50 > 5000 mg/kg), which refers to the lowest toxicity class. In line with this recommendation, Erhirhie et al. [20] also stated that orally administered compounds with LD50 > 5000 mg/kg are considered safe or essentially nontoxic.
Since MEMM did not show any sign of toxic effects in the acute toxicity study, additional assessment needed to be carried out to evaluate the subacute (28-day consumption) and subchronic (90-day consumption) toxicities of MEMM in rats to establish its complete toxicity data. Toxicological assessments after repeated exposure are required by regulatory agencies to characterize the toxicological profile of any substance [10]. Subacute and subchronic investigations measure the unwanted effects of frequent or constant exposure to compounds/extracts over a fraction of the average life span of experimental animals, such as rats. Several objectives can be achieved through these studies, such as (i) exclusive information on target organ toxicity, (ii) discovery of the no-observable-adverse-effect level, and (iii) determination of appropriate dose regimens for longer-term studies [18]. However, there is inadequate toxicological information in the literature to support and ensure the safe use of M. malabathricum, which triggered the present study with the hope of establishing the subacute and subchronic toxicity profiles of MEMM. To the best of our knowledge, this study is the first to report the absence of subacute and subchronic toxicity of MEMM in adult rats. A considerable decline in food and water intake, which indicates loss of appetite, will lead to a decrease in body weight due to interruptions in the metabolism of carbohydrate, protein, or fat [21]. Interestingly, food and water intake was not altered in the groups receiving MEMM throughout the 28- or 90-day treatment periods in comparison to the control group, indicating that the extract did not induce any changes in the metabolism of carbohydrate, protein, or fat in those rats. Moreover, Yuet Ping [22] also stated that any extract, at higher doses, can be metabolized to a toxic end-product that might hinder gastric function and decrease food conversion efficiency.
Interestingly, this study also revealed that MEMM did not interfere with the weight gain and appetite stability as seen in the control group that is constantly provided with food and water ad libitum.
Several advantages can be drawn from organ weight findings in toxicity studies: (i) the organs are sensitive to acute injury and can predict toxicity, physiologic perturbations, and enzyme induction; (ii) the organs are regularly used as target organs of toxicity; (iii) the toxicity effects correlate well with histopathological changes; (iv) there is slight interanimal variability; and (v) historical control range data are available [23]. Moreover, Mirza and Pancha [24] claimed that the relative organ weights usually measured in toxicity investigations are comparatively sensitive markers for the specific organs and, subsequently, characterize toxicity as substantial changes detected in particular organs.
The outcomes of the present study demonstrated that these essential organs were neither negatively affected nor showed clinical signs of toxicity throughout the treatment. Thus, it is concluded that MEMM is nontoxic to the analysed organs. Macroscopic observations further supported this conclusion, as pretreatment with MEMM did not cause changes in colour or hypertrophy in the harvested organs. Hypertrophy of organs is an immediate sign of toxicity following exposure to a biological or chemical substance. Microscopically, all organs from MEMM-treated rats showed no changes in cellular architecture, with no pathologies documented during the histological analysis when viewed under the light microscope.
Estimation of haematological parameters can be utilized to verify the level of detrimental effects of compounds/extracts on the blood of tested animals. In addition, such investigation is pertinent to risk assessment, as alterations in haematological parameters have significant prognostic importance for human toxicity when data are extrapolated from animal studies [25]. Following the haematological analysis, MEMM was found to cause no significant changes in the levels of the parameters measured. A normal haematological profile in the MEMM-treated groups compared to the control group further justified the nontoxic nature of MEMM [26].
Another important indicator of toxicity can be obtained by studying the liver following the administration of a test substance. According to Zhang et al. [27], it is important to perform liver and kidney function analyses during the toxicity assessment of compounds/extracts, as data from both analyses are essential for determining the survival of an organism. The estimation of certain substance levels in the blood can facilitate the early detection of liver injury. Other than the blood parameters, some biochemical markers (i.e., ALT, AST, and ALP) and total bilirubin (TBil) can be used to diagnose liver injury. A number of enzymes, produced and generally found in the hepatocytes of the liver, are involved in the modulation of various chemical reactions in the body. Nevertheless, if the liver is injured, these enzymes leak into the blood circulation, resulting in a rise in liver enzyme levels, which can be considered a significant sign of liver toxicity. Meanwhile, a rise in both the total and conjugated bilirubin levels can be used as a measure of overall liver function. An increase in the levels of ALT or AST in combination with an increase to more than double the normal upper level of bilirubin is regarded as a warning sign of hepatotoxicity [28]. Principally, acute or chronic injury to the liver will eventually result in an increase in serum concentrations of AST and ALT. ALT, which is found mainly in the liver, has always been the most commonly relied-upon biomarker of hepatotoxicity, and its estimation is considered a more specific test for detecting liver malfunction and might indicate hepatocellular necrosis.
In contrast, AST, which is found in the liver as well as in other organs (i.e., heart, brain, muscle, and kidney) and also assists in detecting hepatocellular necrosis, is regarded as a nonspecific biomarker enzyme for liver injury, since elevation of its serum level can also indicate malfunction in those other organs. The levels of ALT and AST, alone or together with TBil, in rodents and nonrodents are mostly recommended for the evaluation of hepatocellular injury in nonclinical investigations. Furthermore, histopathological observations also allow confirmation of hepatotoxicity. With regard to renal dysfunction, concurrent determination of the levels of urea, creatinine, and uric acid can be performed, and if normal levels are detected, it is conceivable to suggest the absence of renal problems [29]. In this study, the levels of urea, creatinine, and uric acid in the MEMM-treated groups did not differ significantly from those of the control group, indicating normal renal function. Bilirubin, an endogenous anion normally present in the blood in small quantities as a result of the normal degradation of haemoglobin, is removed by the liver via the bile system. However, injury to the hepatocytes results in the liver's inability to excrete bilirubin in the usual manner, thus increasing the level of bilirubin in the blood and extracellular fluid. Due to this, bilirubin is also classified as a biomarker of hepatobiliary injury and is often measured during hepatoprotective studies. In the present study, however, MEMM was found to cause no change in ALT, AST, and TBil levels. These observations indicated that MEMM did not cause liver damage following oral administration. With regard to kidney function, there were also insignificant changes in urea, creatinine, and uric acid levels following the subchronic oral administration of MEMM to rats when compared to the control group.
This is also concurrent with the present histopathological findings of the kidney tissue, which showed normal architecture, as in the control group. Thus, the liver and renal function data, supported by the histological findings, suggest the nontoxic nature of MEMM.
Although the level of ALP is not considered important in the determination of liver injury, the significant change in ALP detected in the present subchronic study needs to be discussed. ALP, which is predominantly found in the cells that line the biliary ducts of the liver, is also found in other organs (i.e., bone, placenta, kidney, and intestine) and is removed in the bile. Its serum level may be raised above the normal value if bile excretion is hindered by liver injury due to congestion or obstruction of the biliary tract (also known as cholestasis). In addition, the level of ALP, an enzyme that transports metabolites across cell membranes, can also be elevated due to the presence of liver and bone diseases, despite the fact that ALP may originate from other tissues, such as the placenta, kidneys, or intestines, or from leukocytes [30]. In the present study, there was a significant increase in the level of ALP following pretreatment with MEMM, which was not accompanied by increases in ALT and AST levels, suggesting that the increase in ALP level was not associated primarily with hepatocellular injury [31]. Although cholestasis enhances the synthesis and release of ALP, and accumulating bile salts increase its release from the cell surface, its presence can be ruled out, as the microscopic examination demonstrated normal architecture of the liver tissue at all doses of MEMM tested. Several reports have demonstrated the ability of plant extracts to affect the level of ALP either at the tissue (i.e., liver) or serum level [32,33]. According to the report by Omage et al. [32], oral administration of aqueous or ethanol extracts of Acalypha wilkesiana leaves significantly changed the levels of serum ALP, ALT, and AST when measured at different day intervals (0, 7, 14, and 21 days).
Based on their reports, the aqueous extract caused a significant reduction in the serum ALT level at day 7 and day 21, whereas the ethanol extract caused a significant increase in the serum ALT level only at day 14. As for the serum AST level, the aqueous extract caused a significant reduction at day 7 and day 21, with a significant increase seen only at day 14. In comparison, the ethanol extract caused a reduction in the serum level of AST at all intervals measured, with the reduction significant only at day 21. With regard to the serum ALP level, the aqueous extract caused a significant increase at day 7 and day 21, while for the ethanol extract, a significant increase in serum ALP was observed only at day 21. In comparison to the report by Omage et al. [32], MEMM did not significantly change the levels of serum ALT and AST, thus suggesting that the increase in the serum ALP level did not directly relate to an ability of MEMM to induce liver injury following oral administration. In contrast, Yakubu et al. [33] reported the ability of an ethanolic extract of Khaya senegalensis stem bark to increase the level of liver tissue ALP without affecting the level of serum ALP when measured at day 6 and day 18 in comparison to the control group, and similar findings were observed for the liver tissue and serum levels of AST. However, the liver tissue level of ALT, which decreased at day 6 and increased at day 18, was also accompanied by an increase in the serum ALT level at day 6, but not at day 18. According to Yakubu et al. [33], the significant increase in liver ALP activity following the administration of the plant extract may be due to increased functional activity of the liver, probably leading to de novo synthesis of the enzyme molecules.
Such an excess level of serum ALP, accompanied by no concomitant increase in the serum levels of ALT, AST, or TBil, may suggest that the integrity of the liver plasma membrane was not compromised following the administration of the plant extract. However, since ALP hydrolyses phosphate monoesters, which play a role in facilitating the transfer of metabolites across the cell membrane, subchronic and chronic use of M. malabathricum leaves needs to be undertaken with caution. Furthermore, the fact that the effect of MEMM on the serum ALP level was observed only in the subchronic study could also be associated with ALP's longer half-life, which causes the enzyme's serum level to decrease slowly after resolution and makes the enzyme stay in the circulation longer in comparison to ALT and AST [34].
Chemotherapy has been successfully used in the treatment of several tumors, such as testicular cancer and certain leukaemias. However, its success against common epithelial tumors of the breast, colon, and lung has been less impressive [35]. Ideally, chemotherapeutic agents should exclusively target neoplastic cells and should reduce tumor burden by stimulating cytotoxic effects without injuring normal cells. However, the efficacy of chemotherapy is limited by a variety of confounding factors, including systemic toxicity, which is attributable to nonspecific action, rapid metabolism of the drug, and both intrinsic and acquired drug resistance [30]. Due to these factors, worsened by other issues such as ineffective therapeutic strategies to control and treat colon cancer and the high financial burden incurred on patients, their families, and nations, there is demand for novel remedies, largely from natural product sources [36]. This might explain the increased interest in recent years in the use of natural products as an alternative approach to effectively control cancers [37].
One of the most important methods for evaluating the anticancer properties of any extract/compound is the cytotoxicity test, which uses cancer cells in vitro to monitor cancer cell growth, reproduction, and morphological effects upon exposure to the extract/compound [38]. Cytotoxicity testing has a series of advantages: it is simple, rapid, and highly sensitive, and it can spare animals from toxicity [38]. With the constant progress in cytotoxicity testing, various procedures, including detection of cell injury by observing the cells' morphological changes, determining the mode of cell damage, and quantifying the cells' growth and metabolic properties, have emerged and have progressively been expanded from qualitative approaches to quantitative evaluations [39].
One of the mechanisms of anticancer activity is apoptosis, a highly organized physiological process for eliminating damaged or abnormal cells [40]. Apoptosis, a form of programmed cell death, is a useful screening endpoint in the discovery and development of novel anticancer drugs. An extensive range of natural substances present in plants has been documented to possess the capacity to cause apoptosis in numerous tumor cells of human origin [41,42]. MEMM possesses a high content of phenolic compounds [8] and exerts many pharmacological and biological activities, including antioxidative [8], anti-inflammatory [43], and antiproliferative activities [44]. Various bioactive compounds have been isolated from MEMM, such as ursolic acid, 2-hydroxyursolic acid, asiatic acid, gallic acid, p-hydroxybenzoic acid, kaempferol, kaempferol-3-O-(2″,6″-di-O-p-trans-coumaroyl)-β-glucoside, α-amyrin, uvaol, quercetin, quercitrin, and rutin [4]. Of these, ursolic acid [45], asiatic acid [46], kaempferol [47], quercetin [48], and rutin [49] have been reported to exert anticolon cancer activity against the HT29 cancer cell line. In a recent report, MEMM was analysed using the UHPLC-ESI procedure, which revealed the presence of caffeic acid, chlorogenic acid, p-coumaric acid, gallocatechin, epigallocatechin, catechin, quercetin, quercetin-3-O-glucoside, and hesperidin [50]. Of these, caffeic acid [51], quercetin, and p-coumaric acid [52] have been reported to exert anticolon cancer activity against the HT29 cancer cell line. Thus, it is reasonable to propose that the observed anticolon cancer activity of MEMM involves, in part, the synergistic action of those bioactive compounds.
Conclusion
In conclusion, M. malabathricum leaves, in the form of MEMM, did not produce any signs of toxicity in rats with regard to their behaviour, body weight, haematological and biochemical parameters, and relative organ weights following the subacute (28 days) or subchronic (90 days) oral administration of the extract. Hence, no adverse effects were observed, and the no-observed-adverse-effect level (NOAEL) for this extract was determined to be greater than 500 mg/kg. MEMM also showed cytotoxic activity against the HT29 colon cancer cells, partly via apoptosis and possibly through the synergistic action of several flavonoids present in the extract.
Bifid mandibular canal: Report of 2 cases and review of literature
Sir,
The mandibular canal runs from the mandibular foramen to the mental foramen and contains the inferior alveolar artery, vein, and nerve. In medical imaging, its appearance has been described as "a radiolucent dark ribbon between two white lines." [1] White and Pharoah defined it as a "dark linear shadow with thin radiopaque superior and inferior borders cast by the lamella of bone that bounds the canal." [2] Recognition of mandibular canal variations is very important because of their clinical implications. Here, we present incidental findings of the same in our 2 cases [Figures 1 and 2].
The term "bifid" is derived from Latin, meaning a cleft into two parts or branches. Bifid mandibular canals originate at the mandibular foramen and might each contain a neurovascular bundle. The various types of bifid mandibular canals have been classified according to anatomical location and configuration. Smaller accessory canals might be seen in association with normal or bifid mandibular canals.
Results of previous anatomical and radiological studies demonstrate significant variation in the course of the mandibular canal. According to Chávez-Lomeli et al., during embryologic development, the three inferior dental nerves innervating the three groups of mandibular teeth fuse together and form a single unified nerve in one canal. This theory would explain the existence of accessory canals resulting from lack of fusion of these canals. [3] In 1973, Kiersch and Jordan noted that an osteocondensation image produced by the insertion of the mylohyoid muscle into the internal mandibular surface, with a distribution parallel to the dental canal, may mimic a bifid mandibular canal. [4] The imprint of the mylohyoid nerve on the internal mandibular surface, where it separates from the inferior alveolar nerve and travels to the floor of the mouth, may also be a cause for confusion. [5] A two-dimensional radiograph, such as a panoramic view, cannot completely rule out the possibility of a deep mylohyoid groove on the medial aspect of the mandibular surface, as the image on these two-dimensional representations can be confused with a second mandibular canal. [6] The incidence of bifid mandibular canal seems to be very low. Recently, there were two reports of bifid mandibular canals (6 cases in all) being diagnosed with the use of volumetric imaging (multislice helical computed tomography [CT] and cone-beam CT). [7] It seems that for accurate observation of the location and configuration of the mandibular canals, it is necessary to use cross-sectional images taken perpendicular to the axis of the canals. However, CT scanning, due to its high cost and radiation exposure, cannot be performed for all patients. [8] The clinical relevance of this issue is to remind clinicians of the variable anatomy of the mandibular canal.
Inadequate anesthesia may be possible with any bifurcation type, but especially when there are two mandibular foramina. It may lead to complications while performing an inferior alveolar nerve block for obtaining mandibular anesthesia. [9] The location and configuration of mandibular canal variations have important implications in surgical procedures involving the mandible such as dental implant treatment, sagittal split ramus osteotomy, and orthognathic and reconstructive surgeries; displacement of the third molar into the nerve canal during surgery, bleeding, and traumatic neuroma are some of its other complications. [10] In patients wearing prostheses, this condition can cause pain and discomfort due to bone resorption. Using implants in these patients can also cause damage to the second canal. Therefore, it is of considerable interest for dentists to identify the presence of bifid canals on the panoramic radiographs to provide better patient care.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Human Papillomavirus DNA in LEEP Plume
Objective: This study was undertaken to determine the prevalence of human papillomavirus (HPV) in loop electrosurgical excision procedure (LEEP) plumes. Methods: Forty-nine consecutive patients with colposcopic and cytologic evidence of cervical intraepithelial neoplasia (CIN) were tested. Smoke plumes were collected through a filter placed in the suction tubing. DNA was harvested by proteinase K digest of the filters and prepared for polymerase chain reaction (PCR) by L1 consensus primers. Results: Thirty-nine (80%) tissue samples were positive for HPV, with types 6/11 in 4, 16/18 in 19, 31/33/35 in 2, and other types in 6 patients. The tissue sample was inadequate for typing in 8 patients. HPV DNA was detected in 18 (37%) filters. Conclusions: Although the consequences of HPV in LEEP plume are unknown, it would be prudent to adopt stringent control procedures.
These techniques were recently introduced in the United States from Great Britain. The use of electrical current to excise dysplastic regions on the cervix results in the generation of a smoke plume. Previous studies have shown conflicting results regarding the presence of human papillomavirus (HPV) DNA in laser plume. Garden et al. 2 demonstrated the presence of HPV DNA in laser vapor samples from plantar warts. Kashima et al. 3 showed the presence of HPV in laser vapor from recurrent respiratory papillomatosis. Recently, Ferenczy et al. 4 demonstrated the presence of HPV in laser vapor from anogenital condylomas. However, Abramson et al. 5 did not detect HPV DNA in the smoke plume from laser vaporization of laryngeal papillomas. No data exist regarding the presence of HPV in LEEP plume. Therefore, we undertook a study to determine the prevalence of HPV in LEEP-generated plumes.
Patient Population
Our patient population consisted of 49 consecutive women enrolled in the study between February 1992 and January 1993 who had a LEEP performed at the University of Florida. All women had either histologic evidence of cervical intraepithelial neoplasia (CIN) II or CIN III or cytologic evidence of CIN II or CIN III. Endocervical curettage was negative in 42 patients. Seven patients with a positive endocervical curettage had the extent of the lesion defined by colposcopy and were deemed candidates for loop excision.
Sample Collection
The loop excision was performed in an outpatient clinic at the University of Florida. A commercially available sterile speculum with a smoke evacuation metal cannula welded on the inner surface of the upper blade was used for exposure of the cervix. The evacuation cannula was not in direct contact with the cervical lesion. The colposcope was used to define the lesion. The cervix and vagina were stained with Lugol's solution to define the squamocolumnar junction. The cervix was injected with 1% lidocaine with epinephrine (1:100,000) in a circumferential fashion using a 22-gauge spinal needle. The loop size was based on the size of the lesion and size of the transformation zone. If the lesion or transformation zone was too large for single-loop excision, then the transformation zone was removed in 2 passes. The procedure was performed under a blend of coagulation and cut. The loop was placed lateral to the edge of the lesion and pushed perpendicularly. It was slowly drawn across the lesion to the opposite side and pulled out of the tissue perpendicularly to produce a button-shaped specimen. The bleeding from the cervical wound was controlled using the diathermy ball. Monsel's solution was also applied to the base of the wound to increase the long-term hemostatic effect.
A small (2 mm x 2 mm) biopsy was taken from the LEEP specimen for HPV analysis. The plume of smoke generated by LEEP was evacuated with the Stackhouse smoke-evacuation system (Stackhouse Association, Inc., E1 Segundo, CA). The smoke was collected through an in-line filter (Filter disc #65651-801) supplied by Baxter, Inc. During each session, a new sterile suction tubing and filter were used. For controls, 2 new in-line filters were exposed to the air in the procedure room without a patient in the room.
Sample Analysis
The DNA was prepared from tissue samples by digestion with 50 µg/ml proteinase K in a buffer containing 0.01 M Tris (pH 7.8), 0.005 M EDTA, and 0.5% sodium dodecyl sulfate (SDS) (Sigma, St. Louis, MO). The digests were performed at 55°C overnight, and the DNA was extracted with phenol and precipitated with 2 volumes of 100% ethanol. DNA was harvested from the filters by injection of 2 ml of the same digestion cocktail with a disposable 5.0-ml Luer-lock syringe, closing the filter on the opposite side with another syringe, and sealing both with UV-irradiated parafilm for digestion overnight. The digest was recovered in the same syringe after washing the liquid volume back and forth between the 2 syringes through the filter. The digests from the filters were prepared for DNA analysis both by direct heat inactivation at 95°C for 10 min and by phenol extraction. A control series was created by introducing graded amounts of HPV-6 plasmid onto a filter and making recovery as described.
DNA pellets from the ethanol precipitations described above were resuspended in TE buffer (10 mM Tris, pH 7.8, 0.1 mM EDTA), and the concentrations were determined by optical density at 260 nm. The samples were amplified by the polymerase chain reaction (PCR) for the human restriction fragment length polymorphism (RFLP) KM-19 and the HPV L1 consensus region as described in previous reports. 6 Briefly, for HPV analysis, oligonucleotide primers MY 09 and MY 11 were used to amplify the L1 region of HPVs. The detection of HPV DNA was performed using 32P-labeled oligonucleotide GP1J as described by Gravitt et al. 7 and the labeling of amplified products from reference strains as described by Resnick et al.
The PCR product was purified using a gel filtration column (Stratagene, La Jolla, CA). Double-stranded sequencing of the purified product was performed using the Circumvent TM Thermal Cycle Dideoxy DNA Sequencing Kit (New England Biolabs, Beverly, MA). Primers were the same as those described for the primary PCR. The products from the sequencing reactions were analyzed using a 6% Long Ranger TM gel (AT Biochem, Malvern, PA). The isolates were sequenced in both strands.
RESULTS
We performed a LEEP biopsy on 49 patients (LGSIL, low-grade squamous intraepithelial lesion; HGSIL, high-grade squamous intraepithelial lesion). Table 2 presents the distribution of HPV DNA in tissue and vapor samples. Thirty-nine (80%) of the tissue samples were positive for HPV. The distribution of HPV types is noted in Figure 1. HPV was detected in 18 (37%) filters. The 18 HPV-positive filters came from the 39 patients with HPV-positive tissue samples. DNA sequencing was performed on 8 samples, and in all cases the HPV subtype was identical in the tissue and in the filter. The remainder of the filter samples were inadequate for DNA sequencing. Ten tissue samples were negative for HPV DNA. The filters from these 10 cases were also negative for HPV DNA. The remaining 21 negative filter samples were from patients with HPV-positive tissue samples. Both control filters were negative for HPV DNA. However, there are no data regarding the safety of the LEEP plume. In the present study, the prevalence of HPV DNA was high (37% of filters).
DISCUSSION
PCR is a highly sensitive technique for the detection of HPV DNA. There is concern that the finding of HPV DNA in vapor plume might be the result of contamination; however, the smoke-collection cannula was never in contact with the cervical lesion, and new sterile suction tubing, filter, and speculum were used for each patient. The control filter specimens were negative for HPV. In addition, DNA sequencing confirmed the presence of the same HPV types in both tissue and filter samples.
It is not clear whether the HPV DNA in plume is viable. Studies with the laser have failed to show viability. However, at average power densities, the predominant mode of tissue destruction is very rapid boiling of histologic and cellular water to form steam, which ruptures cells and tissues, ejecting cellular constituents from the crater into the laser beam. There they absorb the direct rays of the laser and are heated to incandescence, thus burning in the presence of oxygen to form a thick, malodorous smoke. 10 Material from the vapor plume failed to grow in tissue culture. Electrosurgery uses low-voltage, high-frequency radio waves through a thin wire loop to accomplish surgical effects. Lateral necrosis or coagulation depends on the current mode, intensity, frequency, impedance, and operating mode. Electrosurgery may cause less tissue destruction than laser and may liberate more intact cells, including viral DNA, which may be more infectious. Future studies should focus on assessing the viability of cells and DNA in the smoke generated by LEEP. A long-term follow-up of gynecologic surgeons involved with LEEP would also be useful. Although the consequences of HPV in LEEP plume are unknown, it is prudent to reduce the risk of potential infection to the patient, surgeon, and other operating personnel by the use of appropriate gloves and masks and by effective smoke-evacuation methods.
Staphylococcus aureus Detection in Milk Using a Thickness Shear Mode Acoustic Aptasensor with an Antifouling Probe Linker
Contamination of food by pathogens can pose a serious risk to health. Therefore, monitoring for the presence of pathogens is critical to identify and regulate microbiological contamination of food. In this work, an aptasensor based on a thickness shear mode acoustic method (TSM) with dissipation monitoring was developed to detect and quantify Staphylococcus aureus directly in whole UHT cow’s milk. The frequency variation and dissipation data demonstrated the correct immobilization of the components. The analysis of viscoelastic properties suggests that DNA aptamers bind to the surface in a non-dense manner, which favors the binding with bacteria. The aptasensor demonstrated high sensitivity and was able to detect S. aureus in milk with a 33 CFU/mL limit of detection. Analysis was successful in milk due to the sensor’s antifouling properties, which is based on 3-dithiothreitol propanoic acid (DTTCOOH) antifouling thiol linker. Compared to bare and modified (dithiothreitol (DTT), 11-mercaptoundecanoic acid (MUA), and 1-undecanethiol (UDT)) quartz crystals, the sensitivity of the sensor’s antifouling in milk improved by about 82–96%. The excellent sensitivity and ability to detect and quantify S. aureus in whole UHT cow’s milk demonstrates that the system is applicable for rapid and efficient analysis of milk safety.
Introduction
Nutrition is a fundamental process in human life as it allows the body to obtain nutrients necessary for growth and survival. However, food can pose a risk to human health such as through its potential contamination with pathogens [1]. Milk is no exception; each step of its processing, from milking to distribution, can be affected by microbiological contaminants. Monitoring food for the presence of pathogens is important for identifying and regulating such contamination [2].
There are numerous bacteria that can be found in milk depending on different handling processes of food [2]. Immediately after milking, microorganisms in the milk come largely from environmental contamination and their concentrations can change during processing and manipulation [2]. There are many dangerous pathogenic bacteria capable of growing in milk and dairy products: in particular, those of the genus Campylobacter, Listeria, Salmonella, Brucella, Mycobacterium, Staphylococcus, Clostridium, Bacillus, and Pseudomonas.
Staphylococcus aureus (S. aureus) bacteria and their toxins can cause serious infections such as sepsis. S. aureus bacteria can be found in the environment, such as in dust or soil, and on living organisms. As the bacteria are present in normal human flora, individuals can also be a contamination source for food [3]. The low acidity and high protein content of milk provide an ideal environment for the rapid growth of S. aureus. Although cooking milk to high temperatures kills the bacteria, staphylococcal enterotoxins are heat-stable and can survive such treatment. In this work, we applied TSM with dissipation monitoring, which provides information about dissipation, i.e., the energy that is lost due to variation in a viscoelastic layer. Dissipation is proportional to the damping of the resonance. As damping increases, more energy is dissipated, which indicates that a more viscoelastic layer has been adsorbed on the surface. For preparation of the aptasensor sensitive to Staphylococcus aureus, we used the same linker as in the previous work, namely, DTTCOOH [10]. We also analyzed DTTCOOH by Fourier transform infrared spectroscopy (FTIR) for functional group and structure confirmation. However, to improve the antifouling properties of the sensing layer, the aptasensor's SAM layer also included 2-(2-mercaptoethoxy)ethan-1-ol (HS-MEG-OH), a previously synthesized antifouling molecule [13]. HS-MEG-OH was used instead of β-mercaptoethanol as its ether moiety allows for the incorporation of interfacial water molecules. In addition, we also analyzed the viscoelastic properties of the aptamer-based sensing layers.
Materials
The synthesis of 3-dithiothreitol propanoic acid (DTTCOOH) and HS-MEG-OH (Figure 1) followed previously published methods [12,14]. The functional groups of DTTCOOH were analyzed by FTIR (see Supplementary Material, Figures S1 and S2). Sodium chloride, absolute analytical grade ethanol, hydrogen peroxide (30% in water w/w), 1-undecanethiol (UDT), 11-mercaptoundecanoic acid (MUA), dithiothreitol (DTT), N-hydroxysuccinimide (NHS), 1-(3-(dimethylamino)propyl)-3-ethylcarbodiimide hydrochloride (EDC), and ethanolamine were purchased from Sigma-Aldrich (St. Louis, MO, USA). Milli-Q water (specific resistance of 18.2 MΩ·cm) was used for preparing aqueous solutions. Ethanol was purchased from Caledon Laboratory Chemicals (Georgetown, ON, Canada). The chemicals were used without further purification.
The preparation of phosphate-buffered saline (PBS) involved 137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, and 1.8 mM KH2PO4 at pH 7.4. The buffer was filtered with a 0.22 µm membrane (Merck-Millipore, Darmstadt, Germany). Staphylococcus aureus KR3 was purchased from the University of Toronto Medstore (Toronto, ON, Canada). Specificity experiments were carried out with Escherichia coli DH5α and Pseudomonas aeruginosa PAO1. Whole UHT cow milk (3.5% fat) was bought from Walmart (Toronto, ON, Canada).

Cleaning and Surface Modification of Piezocrystals

The AT-cut quartz crystals (0.2 cm² sensing area, 8 MHz fundamental frequency) were purchased from Total Frequency Control Ltd., Storrington, UK. The crystals had gold electrodes deposited on both sides. Basic Piranha solution (7 mL of 1:1:5 v/v 28-30% NH4OH, 30% H2O2, Milli-Q water at 70 °C) was used to clean each crystal in three 25 min cycles. In between the cycles, the crystals were rinsed with Milli-Q water three times. After the third Piranha cycle, the crystals were rinsed with Milli-Q water twice, then with methanol twice.

After cleaning, the quartz crystals were functionalized in a solution of 2 mM UDT, 2 mM MUA, 50 µM DTT, or 50 µM DTTCOOH in absolute ethanol overnight. For DTTCOOH-coated crystals, the surfaces were further modified in 2 mM HS-MEG-OH in absolute ethanol (25 min).
Contact Angle Goniometry
Static contact angle goniometry (CAG) experiments were conducted with the KSV CAM 101 goniometer (KSV Instruments Ltd., Helsinki, Finland). A 5 µL drop of Milli-Q water at room temperature was used for each measurement. Bare and coated TSM crystals were analyzed in triplicate.
Bacteria Preparation
Lysogeny broth was used to grow S. aureus bacteria at 37 °C overnight. The grown solution was serially diluted from 1/10 to 1/10 9 times in PBS. Each solution was spotted onto agar plates (3 × 10 µL), as well as measured using a UV-1600PC spectrometer (VWR International, Mississauga, ON, Canada) to measure the optical density at 600 nm (OD600). The plates were incubated overnight at 37 °C and spot counted to calculate CFU per OD600 (see Supplementary Material, Section S3).
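The plate-count calibration above can be sketched numerically. This is an illustrative example only: the colony counts, dilution factor, and OD600 value below are hypothetical, not the paper's data.

```python
# Hypothetical sketch of the CFU/mL and CFU-per-OD600 calculation used
# in the serial-dilution plating step. All numbers are illustrative.

def cfu_per_ml(colony_counts, dilution_factor, spot_volume_ml):
    """Mean colony count per plated volume, scaled by the dilution factor."""
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution_factor / spot_volume_ml

# three 10 uL spots of the 1/10^6 dilution gave 25, 30, and 29 colonies:
counts = [25, 30, 29]
cfu = cfu_per_ml(counts, dilution_factor=1e6, spot_volume_ml=0.010)

# tie the titre to the optical density of the undiluted culture:
od600 = 1.4                       # hypothetical OD600 reading
cfu_per_od = cfu / od600
print(f"{cfu:.2e} CFU/mL, {cfu_per_od:.2e} CFU/mL per OD600 unit")
```

With such a calibration in hand, a target titre (e.g., 10^5 CFU/mL for a spiked milk sample) can be prepared from an OD600 measurement alone, without re-plating.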
For TSM measurements, S. aureus, E. coli, or P. aeruginosa bacteria were grown overnight at 37 °C. The next day, 1 mL of the bacteria solution was centrifuged at 14,500 rpm (5 min). A total of 1 mL of PBS was used to resuspend the bacteria pellet. The OD600 of the solution was used to calculate the base CFU, and then the solution was diluted to the desired CFU. To prepare milk samples, the diluted PBS bacteria solution was centrifuged at 14,500 rpm (5 min) to pellet the bacteria, and then the bacteria were resuspended in milk.
TSM Measurements
Cleaned or modified crystals were inserted in an acryl flow-through cell (JKU, Linz, Austria), which was clamped by a holder, ensuring the internal conductors were in contact with the crystal's electrodes. Liquid flowed through the internal chamber using a GeniePlus syringe pump (Kent Scientific, Torrington, CT, USA) and a pulling syringe. The liquid flowing through the internal chamber was in contact with one face of the crystal. A SARK-110 vector analyzer (Seeed, Shenzhen, China) was used to collect data via Python software [16]. Measurements were performed at 8 MHz under ambient conditions and under a constant flow of 50 µL·min⁻¹. A scheme of the experimental setup is presented in the Supplementary Material (Figure S5).
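The readout described here reduces to locating the resonance and its damping in each frequency sweep. The following is a hedged, self-contained sketch (synthetic Lorentzian data standing in for a SARK-110 conductance sweep; this is not the acquisition code of [16]) that estimates the resonant frequency and the dissipation as the half-power bandwidth divided by the resonant frequency, D = FWHM/f0 = 1/Q:

```python
# Toy resonance analysis: find the conductance peak of a frequency sweep
# and estimate dissipation D = FWHM / f0 from the half-maximum crossings.
# The Lorentzian data below are synthetic, not a real measurement.

def lorentzian(f, f0, fwhm, g_max):
    return g_max / (1.0 + (2.0 * (f - f0) / fwhm) ** 2)

def peak_and_dissipation(freqs, conductance):
    """Return (resonant frequency, D) from a sampled conductance spectrum."""
    g_max = max(conductance)
    i_pk = conductance.index(g_max)
    f0 = freqs[i_pk]
    half = g_max / 2.0
    # walk outward from the peak to the half-maximum crossing on each side
    lo = next(freqs[i] for i in range(i_pk, -1, -1) if conductance[i] <= half)
    hi = next(freqs[i] for i in range(i_pk, len(freqs)) if conductance[i] <= half)
    return f0, (hi - lo) / f0

# synthetic 8 MHz resonance with Q = 2000 (FWHM = 4 kHz, so D = 5e-4):
freqs = [7.98e6 + i * 10.0 for i in range(4001)]   # 10 Hz steps
g = [lorentzian(f, 8.0e6, 4.0e3, 1.0) for f in freqs]
f0, D = peak_and_dissipation(freqs, g)
print(f0, D)
```

In practice a fit of the full resonance curve is more robust than half-maximum crossings, but the bandwidth-over-frequency definition of dissipation is the same.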
Marketed whole milk was used as a fouling agent for antifouling experiments. The crystals were first rinsed with PBS buffer to wash off weakly adsorbed thiol molecules and to reach a stable baseline (about 50 min). After baseline was achieved, the solution was changed to milk samples for bare, UDT, MUA, DTT, and DTT COOH crystals (250 µL over 5 min of flow) then returned to flow under PBS buffer.
For the aptasensor, DTT COOH -modified crystals were functionalized according to the same procedure described above (Section 2.2). The crystals were exposed to HS-MEG-OH (25 min), rinsed with PBS (5 min), activated with NHS/EDC (35 min), rinsed with PBS (5 min), incubated with aptamer solution (90 min), rinsed with PBS (5 min), incubated with ethanolamine solution (40 min), then washed with PBS (at least 15 min). HS-MEG-OH was used to functionalize any possible exposed gold on the crystal surface, while ethanolamine neutralized activated carboxyl groups that did not react with aptamer. Once the final PBS wash reached a stable baseline following the aptasensor functionalization, whole milk was flowed (250 µL over 5 min of flow) to analyze the crystal's antifouling character. Alternatively, for S. aureus detection, 250 µL of milk sample containing a known concentration of bacteria (10 2 , 10 3 , 10 4 , 10 5 , 10 6 , or 10 7 CFU/mL) was poured over the crystal surface. After milk (with or without bacteria), PBS buffer was poured over the crystals to wash them, remove any remaining sample on the surface, and reach a final stable baseline. Each experiment was repeated three times.
TSM Data Analysis
By detecting variations in frequency and dissipation, it is possible to obtain information on the mass deposited on the electrode of the crystal using the Sauerbrey equation [17]:

∆f = −(2 n f0² ∆m) / (A √(µq ρq))    (1)

where ∆f (Hz) is the frequency change, n is the number of the harmonic, and f0 is the fundamental frequency. ∆m is the mass adsorbed on the surface, A is the active area of the quartz crystal electrode, µq is the shear modulus, 2.947 × 10¹⁰ Pa, and ρq is the density, 2.648 × 10³ kg m⁻³, of quartz.
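A minimal numerical sketch of the Sauerbrey conversion above, using the quartz constants quoted in the text and the 8 MHz, 0.2 cm² crystals described earlier (the −50 Hz shift is an illustrative input, not a measured value):

```python
import math

# Sauerbrey conversion: adsorbed mass from a frequency shift, for a thin
# rigid film. Quartz constants are those given in the text.
MU_Q = 2.947e10    # shear modulus of quartz, Pa
RHO_Q = 2.648e3    # density of quartz, kg m^-3

def sauerbrey_mass(delta_f_hz, f0_hz=8.0e6, n=1, area_m2=0.2e-4):
    """Adsorbed mass (kg): dm = -df * A * sqrt(mu_q * rho_q) / (2 n f0^2)."""
    return -delta_f_hz * area_m2 * math.sqrt(MU_Q * RHO_Q) / (2 * n * f0_hz ** 2)

# an illustrative -50 Hz shift on the 8 MHz fundamental, 0.2 cm^2 electrode:
dm = sauerbrey_mass(-50.0)
print(f"adsorbed mass: {dm * 1e9:.1f} ng")   # ~69 ng
```

Dividing the mass per unit area by the film density (e.g., the 1.7 g·cm⁻³ quoted for DNA later in this section) yields an estimate of the layer thickness, with the usual caveat that the Sauerbrey relation assumes a rigid, uniformly distributed film.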
The sensor sensitivity has been evaluated by determination of the limit of detection (LOD) according to [18] as follows:

LOD = LOB + 1.645 × SDlc    (2)

where SDlc is the standard deviation at low concentration of the sample, and the limit of blank (LOB) is the highest apparent analyte concentration expected for blank measurements:

LOB = meanblank + 1.645 × SDblank    (3)

where meanblank and SDblank are the mean and standard deviation of the blank measurements.

Biosensors 2023, 13, 614

The limit of quantification (LOQ) has been calculated according to the equation:

LOQ = LOB + 10 × SDlc    (4)

It is also possible to analyze changes in the viscoelastic properties following the functionalization of the thiol SAM with the aptamer in order to study the density of the bioreceptor anchored on the surface and confirm the success of the functionalization. In particular, the shear modulus, µA, the viscosity coefficient, ηA, and the penetration depth, ΓA, of the evanescent acoustic wave after the aptamer incubation were calculated. To perform this analysis, the Voinova-Voigt viscoelastic model was employed [19], with χ = µ1/(η1ω), where Γ is the penetration depth of the shear wave in the liquid medium; ρ0 is the density of quartz; h0 is the thickness of the quartz crystal; h1, µ1, η1, and ρ1 are the thickness, the shear elastic modulus, the viscosity, and the density of the adsorbed film, respectively; and ω = 2πf is the angular frequency of oscillation. DNA has an average density of 1.7 g·cm⁻³, and the film thickness can be obtained by dividing the calculated mass per unit area by this density.
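The LOB/LOD scheme of [18] can be sketched as a few lines of code. The 1.645 factor is the one-sided 95% z-value; the 10·SD multiplier used below for LOQ is an assumption, as are all signal values (given in Hz of frequency shift; conversion to CFU/mL requires the calibration curve):

```python
import statistics

# Parametric LOB/LOD estimation from blank and low-concentration
# replicates. The 10*SD rule for LOQ and all numbers are assumptions.

def lob(blank_signals):
    """Limit of blank: mean(blank) + 1.645 * SD(blank)."""
    return statistics.mean(blank_signals) + 1.645 * statistics.stdev(blank_signals)

def lod(blank_signals, low_conc_signals):
    """Limit of detection: LOB + 1.645 * SD(low-concentration sample)."""
    return lob(blank_signals) + 1.645 * statistics.stdev(low_conc_signals)

def loq(blank_signals, low_conc_signals):
    """Assumed limit of quantification: LOB + 10 * SD(low-concentration sample)."""
    return lob(blank_signals) + 10.0 * statistics.stdev(low_conc_signals)

# illustrative frequency shifts (Hz) for blanks and a low-concentration sample:
blanks = [1.0, 1.5, 0.8]
low = [4.2, 5.0, 4.6]
print(lob(blanks), lod(blanks, low), loq(blanks, low))
```

Note that `statistics.stdev` computes the sample (n−1) standard deviation, which is the appropriate estimator for a small number of replicates.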
A Python code based on Yoon et al.'s equation [20] was used to analyze the data. Excel Office 365 (Microsoft Corporation, Albany, NY, USA) and OriginPro 8 (OriginLab Corporation, Northampton, MA, USA) were then used to plot and statistically process the data. The aptasensor was incubated with various concentrations of S. aureus in triplicate.
Contact Angle Goniometry
Different SAM functionalizations of the gold electrode surfaces were confirmed with CAG. The same values were obtained as in the previous work: approximately 55°, 105°, 48°, 43°, and 35° for bare gold, and UDT-, MUA-, DTT-, and DTT COOH -modified crystals, respectively [12]. As MUA is hydrophilic, its wettability is higher compared to UDT and bare crystals [12]. UDT crystals showed low wettability compared to bare crystals as UDT is non-polar. The contact angle of DTT was similar to our previous work, as well as to that in the literature [12,21], which confirmed that the crystal was successfully functionalized, as DTT-modified TSM discs are more polar than bare gold. As DTT COOH is a derivative of DTT, it was expected that DTT COOH -modified surfaces would also be hydrophilic, which was confirmed with CAG. As DTT COOH has a carboxylic acid group, this makes DTT COOH SAMs more wettable compared to DTT SAMs.
As DTT SAMs have disordered binding to gold, DTT COOH likely experiences similar binding [22]. Such less dense SAMs can form vacancy islands due to rearranging gold atoms [23]. As a result, we used SH-MEG-OH as an antifouling linear thiol to functionalize any exposed areas of the gold surface.
Milk Antifouling Test with TSM Method
To determine the aptasensor's antifouling ability, different SAM-modified quartz crystals were exposed to marketed whole milk ( Figure 3). As whole milk contains high protein and fat content, it strongly adsorbs to nonpolar surfaces such as MUA and UDT SAMs. MUA, which is often used as a thiol linker on gold surfaces, experienced slightly less fouling compared to the more hydrophobic UDT SAM crystals.
As Table 1 summarizes, UDT had the most fouling (158 ± 16 Hz and (8.0 ± 2.4) × 10 −6 frequency and dissipation shifts, respectively) as its structure is only hydrophobic, while MUA's hydrocarbon chain and carboxylic acid functional group caused slightly less fouling (136 ± 4 Hz and (7.6 ± 0.8) × 10 −6 frequency and dissipation shifts, respectively). Bare gold experienced high milk adsorption (105 ± 5 Hz and (6.1 ± 0.6) × 10 −6 frequency and dissipation shifts, respectively), which is less than for UDT- and MUA-functionalized discs. UDT, MUA, and bare gold demonstrated 64%, 58%, and 45% more fouling relative to DTT COOH , respectively.

Table 1. Frequency, Δf, and dissipation, ΔD, changes after exposure to whole milk and washing with running buffer. % fouling relative to DTT COOH has been calculated as (Δf/Δf DTTCOOH ) × 100, where Δf denotes the frequency change for the corresponding SAM surface. Average and standard deviations (SD) were determined from three independent experiments.

SAM          Δf (Hz)     ΔD (× 10 −6)
UDT          158 ± 16    8.0 ± 2.4
MUA          136 ± 4     7.6 ± 0.8
Bare gold    105 ± 5     6.1 ± 0.6
DTT COOH     58 ± 14     3.4 ± 2.7
DTT          40 ± 8      2.8 ± 1.4
Aptasensor   7 ± 1       0.9 ± 0.2

In comparison, DTT and the aptasensor were, respectively, 31% and 88% less fouling than DTT COOH -modified TSM crystals. DTT is likely less fouling (40 ± 8 Hz and (2.8 ± 1.4) × 10 −6 frequency and dissipation shifts, respectively) than DTT COOH as it does not have a carboxylic acid group that can become negatively charged; instead, the DTT SAM remains neutral in milk.

The aptasensor was significantly less fouling compared to DTT COOH as the carboxylic acids are likely deprotonated when exposed to the slightly acidic milk environment. The negatively charged DTT COOH SAM can electrostatically interact with positively charged species, which causes adsorption of milk (58 ± 14 Hz and (3.4 ± 2.7) × 10 −6 frequency and dissipation shifts, respectively). As the aptasensor's carboxylic acid groups are modified with aptamers or ethanolamine, the lack of exposed carboxylic acid groups significantly decreases fouling to a minimal amount (7 ± 1 Hz and (0.9 ± 0.2) × 10 −6 frequency and dissipation shifts, respectively). Modifying the DTT COOH SAM to an aptasensor significantly improves the layer's antifouling properties, as the neutral and polar surface is likely more effective for interacting with water molecules. Furthermore, the dithiol structure of DTT COOH provides spacing for the water molecules to hydrogen bond with the layer, creating a thermodynamically favorable "water barrier" and largely preventing milk components from adsorbing. Compared to DTT, bare gold, MUA, and UDT crystals, the aptasensor's fouling significantly decreased by 82%, 94%, 95%, and 96%, respectively.
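For illustration, the relative-fouling percentages quoted above can be recomputed from the mean frequency shifts in Table 1 (a sketch, not the authors' spreadsheet; differences of about one percentage point from some quoted values arise because the table means are rounded):

```python
# mean frequency shifts after milk exposure, Hz (values from Table 1)
shifts = {"UDT": 158, "MUA": 136, "bare gold": 105,
          "DTTCOOH": 58, "DTT": 40, "aptasensor": 7}

def pct_less_fouling(a, b):
    """Percent by which surface `a` fouls less than surface `b`."""
    return round(100 * (1 - shifts[a] / shifts[b]))

dtt_vs_dttcooh = pct_less_fouling("DTT", "DTTCOOH")          # 31
apta_vs_dttcooh = pct_less_fouling("aptasensor", "DTTCOOH")  # 88
apta_vs_dtt = pct_less_fouling("aptasensor", "DTT")          # 82
apta_vs_udt = pct_less_fouling("aptasensor", "UDT")          # 96
```

This reproduces, for example, the 31% and 88% figures for DTT and the aptasensor relative to DTT COOH.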
Sensing of Staphylococcus Aureus in Milk
The aptasensor showed high sensitivity to S. aureus in milk. Different bacteria concentrations caused proportional decreases in the resonant frequency. The dissipation changes confirmed the sensitivity as they increased with increasing bacteria concentrations, indicating that S. aureus adsorbed to the crystal's surface by binding with the aptamer. As Figure 4 shows, the crystal experiences minimal frequency and dissipation shifts when exposed to milk without bacteria (8.9 ± 3.4 Hz and (2.1 ± 0.8) × 10 −6 frequency and dissipation shifts, respectively). Bacteria concentrations in milk were proportional to changes in the frequency and dissipation; increasing cell concentration caused decreasing frequency and increasing dissipation shifts.
Figure 5 illustrates the proportionality of the changes in frequency and dissipation due to increasing S. aureus concentrations in milk. The logarithmic trend indicates that quantification of the bacteria is possible, particularly from measuring frequency changes. The frequency and dissipation variations were calculated from the differences of the stable baselines (before and after sample exposure).

For all concentrations of bacteria in milk tested, a change of approximately 8.9 Hz occurred, which is the mean blank. The standard deviation of the blank (SD blank ) is 3.4 Hz; therefore, the LOB = 14.5. The SD lc of the low concentration sample is 11.49 Hz; therefore, the LOD was found to be 33.4 CFU/mL. The sensor also showed a dynamic range of 10 2 to 10 6 CFU/mL, allowing it to quantify S. aureus in a wide range of concentrations.
According to the US Food and Drug Administration (FDA), a concentration of S. aureus greater than 10 4 CFU/mL is considered injurious to health [24]. As this amount is within the dynamic range of the developed sensor and well above the calculated limit of detection, this aptasensor is practical for monitoring milk contamination. The limit of quantification (LOQ) has been calculated according to Equation (4) as 101.2 CFU/mL. Bacteria-aptamer binding was analyzed by fitting the data to the Langmuir isotherm [25] (Figure 6):

∆f = (∆f) max × c / (K d + c)

where the maximal frequency change is (∆f) max , the dissociation constant is K d , and bacteria concentration is c. As the K d decreases, the binding strength of bacteria-aptamer increases. The calculated K d and (∆f) max values for bacteria-aptamer binding in whole milk were found to be 270.9 ± 42.9 CFU/mL and 9.8 ± 3.5 kHz, respectively.
Thus, the binding of bacteria to the aptamers resulted in a significant decrease in the frequency and increase in dissipation, which is evidence of the viscosity contribution. In this case, the Sauerbrey equation cannot be directly applied for evaluation of the changes of the mass. However, a certain rough estimation of the surface density of the aptamers and bacteria can be obtained. According to our data, the changes of the frequency following immobilization of aptamers on the antifouling linker can be denoted by ∆f = −21.51 ± 3.79 Hz, and corresponding changes in dissipation are ∆D = (3.94 ± 0.52) × 10 −6 . If we consider an aptamer molecular weight of about 18.6 kDa, we obtain the surface density of the nucleic acid, equivalent to about 4.8 × 10 12 molecules per cm 2 . This result is very similar to those that can be found in the literature for an aptamer monolayer specifically bonded to a surface [26].
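The quoted aptamer surface density (~4.8 × 10 12 molecules cm −2 ) can be reproduced from the Sauerbrey relation with the quartz constants given earlier; the sketch below is an illustrative recomputation, assuming the 8 MHz fundamental frequency and first harmonic:

```python
import math

MU_Q, RHO_Q = 2.947e10, 2.648e3  # Pa, kg m^-3 (quartz constants from the text)
AVOGADRO = 6.022e23

def molecules_per_cm2(delta_f_hz, mw_g_mol, f0=8e6, n=1):
    # Sauerbrey areal mass, kg m^-2
    areal_kg_m2 = abs(delta_f_hz) * math.sqrt(MU_Q * RHO_Q) / (2 * n * f0 ** 2)
    areal_g_cm2 = areal_kg_m2 * 1e3 / 1e4      # convert to g cm^-2
    return areal_g_cm2 * AVOGADRO / mw_g_mol   # molecules per cm^2

# 21.51 Hz shift, 18.6 kDa aptamer -> ~4.8e12 molecules per cm^2
density = molecules_per_cm2(21.51, 18.6e3)
```

With the reported 21.51 Hz shift and 18.6 kDa molecular weight this yields ~4.8 × 10¹² molecules cm⁻², matching the value in the text.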
The changes in frequency and dissipation following the addition of the S. aureus at concentration 10 7 CFU/mL are 139.6 ± 22.6 Hz and (13.2 ± 3.7) × 10 −6 , respectively. Then, the surface mass density of bacteria can be estimated as (0.96 ± 0.16) µg·cm −2 . Considering that S. aureus has a spherical shape of a diameter approximately 0.5 µm, and that the density of the cytoplasm is approximately 1 g·cm −3 [27], one can estimate that the average mass of one bacterium is 0.065 pg. Therefore, the surface density of the bacteria can be estimated as 1.48 × 10 7 bacteria·cm −2 . Considering that the cross-sectional area of one bacterium is approximately 0.2 µm 2 , at full coverage, the surface density would be 5 × 10 8 bacteria·cm −2 , which is much higher in comparison with that estimated from frequency changes. However, considering that S. aureus forms colonies, the surface density can be lower in comparison with that estimated from frequency changes. It is also evident that the surface density of bacteria is much lower than that of the aptamers. This means that a large number of aptamers bind to one bacterium.
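The arithmetic behind this estimate can be sketched directly from the stated assumptions (a 0.5 µm spherical cell with cytoplasm density of about 1 g cm −3 ):

```python
import math

areal_mass_g_cm2 = 0.96e-6  # measured areal mass of bound bacteria, g cm^-2
diameter_cm = 0.5e-4        # assumed spherical cell diameter, 0.5 um
rho_g_cm3 = 1.0             # assumed cytoplasm density, g cm^-3

# mass of one spherical cell = density * (pi/6) * d^3  (~0.065 pg)
cell_mass_g = rho_g_cm3 * (math.pi / 6) * diameter_cm ** 3
# surface density of bacteria (~1.5e7 cells per cm^2)
cells_per_cm2 = areal_mass_g_cm2 / cell_mass_g
```

This reproduces the ~0.065 pg per cell and ~1.5 × 10⁷ bacteria cm⁻² quoted above.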
In the literature, only a limited number of aptasensors for S. aureus have been explored. Table 2 summarizes some of the most sensitive aptasensors, which have LODs that are similar to that of this work's TSM-based aptasensor. However, some of those aptasensors were tested in culture media, which are less fouling than whole milk. Our developed TSM aptasensor has excellent sensitivity that is better than or comparable to other reported aptasensors. Moreover, our sensor has the advantage of operating in raw milk due to its antifouling thiol linker, DTT COOH , making detection rapid and efficient for the analysis of milk.
Effect of Aptamer/Ethanolamine Immobilization and Bacteria Binding on the Viscoelastic Properties of the Sensing Layers
The Sauerbrey equation makes it possible to estimate the mass of aptamer immobilized on the electrode, as well as that of ethanolamine (used to deactivate the activated carboxyl groups of the SAM and increase the antifouling characteristics of the surface). The mass of aptamer was found to be 30.0 ± 5.1 ng, while that of ethanolamine was 4.5 ± 2.9 ng. These values were then used for the subsequent calculations.
We also analyzed changes in the viscosity coefficient, η, shear modulus, µ, and penetration depth, Γ, following incubation of the DTT COOH SAM with aptamer, ethanolamine, and bacteria. Since it was not possible to measure the viscoelastic properties of the thiol layer (the functionalization of the thiols was conducted in a vial, not in a flow), the values of the viscoelastic parameters are relative and not absolute. The viscoelastic parameters are shown in Table 3. The analyses were performed on three independent measurements and the data are represented as mean and standard deviation. The variation of the shear modulus, µ, for the aptamer layer has a magnitude that is comparable with that of biomolecules, and certainly smaller than that of the quartz crystal (~10 10 Pa) [37]. The viscosity coefficient, η, was lower than that of compact protein layers such as those obtained with β-casein (~10 −4 Pa·s). This may suggest that the aptamer layer is not quite compact, but rather that the nucleic acid tends to have a loose, potentially globular three-dimensional structure which does not allow for high-density packing. This would agree with the idea that aptamers need to fold in order to perform their receptor function. The penetration depth, Γ, of the mechanical evanescent wave is around 200 nm for a crystal with a fundamental frequency of 8 MHz [38]. This value varies little following the formation of a compact SAM, while immobilizing polymeric molecules should produce a significant reduction. In fact, following the immobilization of the aptamer, the penetration depth is reduced by about 1/6, emphasizing how effectively the aptamer is present on the layer. However, due to its steric characteristics, the aptamer does not pack to form a dense layer. This is an important prerequisite for the formation of an aptasensor, as the aptamer has greater freedom to rearrange itself in the presence of the specific analyte and bind more effectively to it.
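The ~200 nm penetration depth quoted for an 8 MHz crystal follows from the standard expression for a shear wave in a Newtonian liquid, Γ = √(2η/(ρω)); a quick check with water-like values (assumed here, not taken from the paper):

```python
import math

def penetration_depth(eta=1.0e-3, rho=1.0e3, f=8e6):
    """Shear-wave penetration depth in a Newtonian liquid, metres.

    Gamma = sqrt(2*eta/(rho*omega)), with omega = 2*pi*f.
    Defaults: water-like viscosity (Pa s) and density (kg m^-3).
    """
    return math.sqrt(2 * eta / (rho * 2 * math.pi * f))

depth_nm = penetration_depth() * 1e9  # ~200 nm at 8 MHz in water
```

The result (~200 nm) agrees with the value cited from [38].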
The variation of the shear modulus for ethanolamine is less than that obtained following the incubation of the aptamer, meaning that ethanolamine binds precisely to the SAM, contributing to the formation of a dissipative and non-elastic mixed layer. However, being a small molecule, it has less influence on the elastic properties. The viscosity coefficient, η, also varied to a very small extent, indicating that this small molecule has very little influence on the density characteristics of the functionalized layer. However, as one would expect, the coefficient of viscosity increases slightly (the mixed layer, with the contribution of ethanolamine, becomes more viscous). The penetration depth changed very little compared to the aptamer, confirming that it is precisely the ethanolamine that binds to the activated carboxyl groups of DTT COOH , which, being a small molecule, has little influence on the path of the evanescent acoustic wave. Additionally, the small variation in penetration depth also demonstrates that ethanolamine molecules bind distantly from each other (i.e., only where the DTT COOH tails are) and therefore not densely.
Following the incubation of the sensing layer with bacterial cells, the shear modulus increases to values greater than those of aptamer/ethanolamine alone, while the viscosity coefficient decreases. It is likely that the bacterial cell, by binding to the free end of the aptamers, reduces the movement of the latter, causing the formation of a more tightly packed layer and, consequently, one which is more elastic and slightly less dissipative (hence the increase in the shear modulus and the reduction of the coefficient of viscosity). However, the penetration depth does not vary significantly, increasing only slightly compared to the aptamer/ethanolamine heterolayer, since the acoustic wave propagates better in the presence of a more compact structure.
Specificity of Staphylococcus aureus Detection
To determine if the developed aptasensor is specific to S. aureus, it was also tested against 10 7 CFU/mL of P. aeruginosa and E. coli in whole milk. The frequency and dissipation were measured for each (Figure 7). We used relatively high concentrations of bacteria to be sure that no binding of bacteria other than S. aureus occurs, even at high concentrations.
The overall change in both frequency and dissipation are much higher for the aptasensor in response to S. aureus compared to the other bacteria. With S. aureus, the overall frequency variation was approximately 140 Hz, while E. coli was 27 Hz, and P. aeruginosa was only 16 Hz, representing an 80% and 89% greater signal for S. aureus compared to these other bacteria, respectively. Though some frequency change is observed for the other bacteria (due to a minimal fouling of milk), the much stronger signal for S. aureus shows that this sensor is still quite specific to the target bacteria and can be used to determine the specific contamination of samples caused by S. aureus.
Conclusions
A thickness shear mode (TSM) acoustic aptasensor was developed for detecting Staphylococcus aureus in whole UHT cow's milk. Detection was achieved without treating the milk due to the antifouling layer of the aptasensor, which employs the thiol linker 3-dithiothreitol propanoic acid (DTT COOH ). We have shown that after linker synthesis, Fourier transform infrared spectroscopy (FTIR) can be used to analyze and confirm the structure of the desired molecule. Once DTT COOH was linked to the aptamer, the antifouling character of the sensor significantly improved. Relative to the DTT COOH layer, the DTT COOH -aptamer layer experienced less milk fouling by approximately 88% and the frequency and dissipation variations were minimal in whole milk.
We used the frequency variation and dissipation data obtained during the construction of the aptasensor to demonstrate the correct immobilization of the components. In particular, the DNA aptamers bind to the surface in a non-dense manner, which favors recognition and binding with the analyte. Ethanolamine plays a marginal role in the variation of the viscoelastic parameters, which was expected, as it is a small molecule. However, the binding of bacteria to the sensing surface resulted in an increasing shear modulus, evidencing the reduction in the mobility of the aptamer layers.
The developed aptasensor achieved excellent sensitivity and specificity and was able to successfully detect and quantify S. aureus in milk. As the 33 CFU/mL limit of detection and 101 CFU/mL limit of quantification are significantly below the EU's safe limit for bacteria in milk products, the aptasensor is practical for rapid and sensitive detection in the milk industry. Additionally, the sensor was found to be specific to S. aureus compared to other tested bacteria. However, to make TSM more user-friendly, its design requires further engineering. The TSM aptasensor developed in this work demonstrates that a sensor based on the novel antifouling linker DTT COOH can rapidly detect and quantify S. aureus directly in raw whole UHT milk samples. Further work will explore the use of this sensor for different bacteria using their unique aptamers.
"year": 2023,
"sha1": "58d6daae8c337c1ee4c3d11e1cab39ef5d2f91ed",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/bios13060614",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e000c5c1908e0ef14a572d1a7d7efff047591d8",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of compound probiotics and aflatoxin-degradation enzyme on alleviating aflatoxin-induced cytotoxicity in chicken embryo primary intestinal epithelium, liver and kidney cells
Aflatoxin B1 (AFB1) is one of the most dangerous mycotoxins for humans and animals. This study aimed to investigate the effects of compound probiotics (CP), CP supernatant (CPS) and AFB1-degradation enzyme (ADE) on chicken embryo primary intestinal epithelium, liver and kidney cell viabilities, and to determine the functions of CP + ADE (CPADE) or CPS + ADE (CPSADE) for alleviating cytotoxicity induced by AFB1. The results showed that AFB1 decreased cell viabilities in dose-dependent and time-dependent manners. The optimal AFB1 concentrations and reaction times for establishing cell damage models were 200 µg/L AFB1 and 12 h for intestinal epithelium cells, and 40 µg/L and 12 h for liver and kidney cells. Cell viabilities reached 231.58% (p < 0.05) for intestinal epithelium cells with CP addition, and 105.29% and 115.84% (p < 0.05) for kidney and liver cells with CPS addition. Further results showed that intestinal epithelium, liver and kidney cell viabilities were significantly decreased to 87.12%, 88.7% and 84.19% (p < 0.05) when the cells were exposed to AFB1; however, they were increased to 93.49% by CPADE addition, and to 102.33% and 94.71% by CPSADE addition (p < 0.05). The relative mRNA abundances of IL-6, IL-8, TNF-α, iNOS, NF-κB, NOD1 (except in liver cells) and TLR2 in the three kinds of primary cells were significantly down-regulated by CPADE or CPSADE addition, compared with the single AFB1 group (p < 0.05), indicating that CPADE or CPSADE addition could alleviate cell cytotoxicity and inflammation induced by AFB1 exposure through suppressing the activation of the NF-κB, iNOS, NOD1 and TLR2 pathways.
Introduction
Mycotoxins are toxigenic fungal secondary metabolites, mainly produced by Aspergillus, Penicillium and Fusarium, that pose a great threat to human and animal health globally. The Food and Agriculture Organization (FAO) reported that approximately 25% of worldwide agricultural raw materials were contaminated with mycotoxins, leading to health problems and enormous economic losses (FAO 2013). So far, at least 400 kinds of mycotoxins such as aflatoxins, zearalenone, deoxynivalenol, fumonisin, patulin, T-2 toxin and ochratoxins have been identified (Cimbalo et al. 2020). There are more than 20 types of aflatoxins, including aflatoxin B 1 (AFB 1 ), B 2 , G 1 , G 2 and M 1 ; among them, AFB 1 is the most toxic, with a high frequency of contamination in various commodities such as nuts, corn and rice (Negash 2018). AFB 1 is able to cause poor feed efficacy, hepatotoxic, carcinogenic, teratogenic, immunosuppressive and other devastating effects on humans and animals (Meissonnier et al. 2008; Trebak et al. 2015; Zhang et al. 2016). Therefore, it is classified as a category one carcinogen by the International Agency for Research on Cancer (IARC 2012).
Poultry is more sensitive to AFB 1 than other kinds of animals. AFB 1 residues in the poultry body pose a potential health hazard for both humans and the birds themselves. It is known that moldy food contains large amounts of AFB 1 , especially moldy peanuts and cereals. In poultry farming, AFB 1 can severely affect the immune system and cause immunosuppression. AFB 1 can also cause apoptosis, and gross and histopathological lesions in different organs, especially in the liver, kidney, muscles and bursa of Fabricius (Peng et al. 2014). It was reported that AFB 1 intoxication could increase mortality and liver and kidney pathology, and decrease bodyweight and feed intake in broilers (Saleemi et al. 2019). Therefore, it is necessary to develop effective detoxification strategies to increase AFB 1 degradation and alleviate AFB 1 -induced inflammation and immunosuppression in chickens.
To date, several strategies have been reported to alleviate AFB 1 toxicity, including physical, chemical and biological methods. The physical detoxification methods (adsorption, heating and irradiation) and chemical detoxification methods (ammonization, solvent extraction and oxidation) have many defects, such as nutritional losses, expensive equipment requirements and low efficiency (Gregorio et al. 2014; Arzandeh and Jinap 2015; Zhu et al. 2016). It was found that biological methods were more effective at degrading mycotoxins than the other ones (Das et al. 2014; Melvin et al. 2014; Fernández et al. 2015). Many species of microbes such as bacteria, molds and yeasts have demonstrated the capability to alleviate AFB 1 toxicity, due to their metabolic transformation or adsorption ability for AFB 1 . It was reported that addition of lactic acid bacteria and S. cerevisiae to an AFB 1 -contaminated diet could reduce AFB 1 residues and prevent degenerative changes in the liver and kidney of broilers (Śliżewska et al. 2019). Aspergillus oryzae has been reported to be able to degrade AFB 1 (Alberts et al. 2009). Other reports showed that the cooperation of compound probiotics (CP) and AFB 1 -degradation enzyme (ADE) could degrade AFB 1 effectively (Zuo et al. 2013; Huang et al. 2019).
The liver and kidney are reported to be the primary target organs attacked by AFB1 (Gholami-Ahangaran et al. 2016; Pérez-Acosta et al. 2016). In addition, the small intestine is the physical barrier that usually first contacts and absorbs AFB1; as a result, intestinal health is seriously influenced by AFB1 (Pinton and Oswald 2014). However, optimal strategies for alleviating the negative effects of AFB1 on intestinal, liver and kidney cells of chickens have not been reported. Therefore, small intestinal, liver and kidney cells of chickens were selected in this study to investigate the toxic effects of AFB1 on chicken embryo primary cells and to explore the efficacy of CPADE or CPSADE in alleviating AFB1-induced cytotoxicity and inflammation in chickens.
The AFB1-degrading enzyme was extracted from solid-state fermentation of Aspergillus oryzae (A. oryzae, CGMCC3.4437) according to a previous protocol (Huang et al. 2019). The crude enzyme solution of 10% AFB1-degrading enzyme was diluted with cell medium and stored at 4 °C until use. The AFB1-degrading enzyme activity in the 10% crude enzyme solution was determined to be 51 U/mL according to a previous protocol (Gao et al. 2011).
Primary chicken embryo intestinal epithelium, liver and kidney cell preparation
Fertilized chicken eggs incubated for 14 days were purchased from Kaifeng Breeding Chicken Co., Ltd. (Kaifeng, China), cleaned with 75% alcohol, placed in a vertical-flow ultra-clean bench, and treated with ultraviolet irradiation for 20 min. The air chamber of each embryo was carefully broken with tweezers, and the chicken embryo was taken out and quickly decapitated. The small intestine, liver and kidney tissues were then removed and rinsed in PBS containing 1% penicillin (10,000 U/mL)-streptomycin (10 mg/mL) (Beijing Solarbio Biotechnology Co., Ltd., Beijing, China).
The mesentery of the small intestine was carefully exfoliated in PBS solution, cut into 1 mm pieces, put into a 5 mL centrifuge tube, and washed with PBS until the supernatant was clear. After removing the washing solution, 1 mL 0.25% pancreatin was added to digest the tissues at 37 °C for 10 min, with shaking once every 2 min. The tissues were centrifuged at 1000 r/min for 5 min to remove the supernatant, and then 2 mL DMEM/F12 medium supplemented with 10% FBS and 1% penicillin-streptomycin was added. The filtrate was collected using a 200-mesh sieve, and the cells were cultured in a 5% CO2 incubator at 37 °C for 2 h. The supernatant was removed after centrifugation at 1000 r/min for 10 min, and the cells were adjusted to 5.0 × 10⁵ cells/mL with DMEM/F12 supplemented with 2.5% FBS and 1% penicillin-streptomycin. Aliquots of 0.2 mL or 2 mL of cells were placed in 96-well or 12-well culture plates, respectively, and cultured at 37 °C in a 5% CO2 incubator. The cell culture medium was replaced every 2 days.
Liver cells were prepared as above with the following modifications: 1 mL collagenase and 1 mL neutral protease were added to digest the tissues at 37 °C for 30 min, with shaking once every 3 min. Then 2 mL M199 medium supplemented with 10% FBS and 1% penicillin-streptomycin was added. After shaking, the filtrates were collected with a 200-mesh sieve and centrifuged at 1000 r/min for 10 min to remove the supernatant. M199 medium (1.5 mL) supplemented with 10% FBS and 1% penicillin-streptomycin was added to the centrifuge tube, then 3 mL 50% Percoll separation solution was added and mixed well, and the tube was centrifuged for 15 min at 3000 r/min. After centrifugation, the upper layer was removed, the middle layer was transferred to a new centrifuge tube, an equal volume of M199 medium was added, and the tube was centrifuged for 10 min at 1000 r/min. Finally, the liver cells were resuspended in M199 medium supplemented with 10% FBS and 1% penicillin-streptomycin, and adjusted and cultured as above. Kidney cells were prepared with the same protocol as liver cells, except that DMEM/F12 medium was used in place of M199 medium.
Cell viability assay and experimental design
Three kinds of primary cells were seeded into 96-well plates. Cell viability was measured by MTT assay every 2 days (Fotakis and Timbrell 2005). The growth curves of the three kinds of cells were plotted with time as the abscissa and absorbance value as the ordinate. The following experiments were carried out in the logarithmic phase of the cells. The experimental designs were as follows:

1. Effect of different AFB1 concentrations on cell damage: the three kinds of cells were seeded into 96-well plates at a density of 5.0 × 10⁵ cells/mL and cultured to their logarithmic phases; the culture medium was then removed, the cells were washed twice with PBS and subsequently incubated with different concentrations of AFB1 for 6, 12, 24 and 48 h. The AFB1 concentrations were 0, 40, 80, 120, 160 and 200 µg/L for intestinal epithelial cells, and 0, 10, 20, 40 and 80 µg/L for liver and kidney cells. AFB1 was diluted with the corresponding cell media without serum and antibiotics.

2. Effect of CP or CPS on cell viability: the cells were prepared as above. CP and CPS were diluted with the corresponding cell media without serum and antibiotics, and the cells were incubated with different concentrations of CP or CPS for 12, 24 and 48 h.

3. Effect of ADE on cell viability: ADE was diluted with cell medium without serum and antibiotics to final concentrations of 0, 0.0001%, 0.001%, 0.01%, 0.1% and 1%, and incubated with the cells for 6, 12, 24 and 48 h.

4. Functions of CPADE and CPSADE in alleviating cytotoxicity: the cells were incubated for 12 h; the detailed design is listed in Table 1. A previous report from our laboratory showed that CPADE and CPSADE were more effective than CP, CPS and ADE at degrading AFB1 (Huang et al. 2018); therefore, CP, CPS and ADE alone were not considered for alleviating AFB1-induced cytotoxicity in this study.
At the end of the above incubations, 10 µL of 5 mg/mL MTT was added to each well and incubated for 4 h. The cell supernatants were then removed and 150 µL DMSO was added to each well, after which the plates were shaken for 10 min at room temperature. Absorbances (A) were determined at 490 nm with a reference wavelength of 630 nm using an ELx 800 microplate reader (BIO-TEK Instruments Inc., Winooski, VT, USA). Cell viability (%) = (A490nm − A630nm in the experimental groups)/(A490nm − A630nm in the control groups) × 100%.
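The viability formula above can be written as a short function; the absorbance readings in the example below are hypothetical, not measured values from the study.

```python
def cell_viability(a490_exp, a630_exp, a490_ctrl, a630_ctrl):
    """MTT cell viability (%) relative to untreated controls.

    Background-corrected absorbance (490 nm reading minus the 630 nm
    reference) of treated wells divided by that of control wells, x 100.
    """
    return (a490_exp - a630_exp) / (a490_ctrl - a630_ctrl) * 100

# Hypothetical absorbance readings for one treated and one control well:
v = cell_viability(a490_exp=0.62, a630_exp=0.05, a490_ctrl=0.80, a630_ctrl=0.04)
print(round(v, 1))  # 75.0
```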
Reverse transcription PCR and quantitative real-time PCR
The primary intestinal epithelial, liver and kidney cells were seeded at a density of 5.0 × 10⁵ cells/mL in 12-well culture plates and allowed to adhere for 24 h. After the four treatments (control, AFB1, CPADE or CPSADE, and CPADE or CPSADE + AFB1) were applied to the three kinds of primary cells for 12 h, total RNA was extracted using Trizol (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions, dissolved in 50 µL RNase-free water and stored at − 80 °C. The quality and concentration of the RNA samples were measured with a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). Approximately 1 µg of total RNA from each sample was reverse transcribed into cDNA using a TB Green kit (TaKaRa, Dalian, China). Quantitative RT-PCR was performed with a CFX Connect™ Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). All primers used in this study are listed in Table 2. β-actin was used as the housekeeping gene, and the relative gene abundances in chicken embryo primary intestinal epithelial, liver and kidney cells were then calculated.
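The text does not state how relative abundances were computed from the qPCR Ct values. A common choice for qRT-PCR normalized to a housekeeping gene such as β-actin is the 2^−ΔΔCt method; the sketch below assumes that method, and the Ct values in the example are hypothetical.

```python
def relative_abundance(ct_target_treated, ct_actin_treated,
                       ct_target_control, ct_actin_control):
    """Relative mRNA abundance by the 2^-ddCt method (assumed here).

    dCt = Ct(target) - Ct(beta-actin) for each sample;
    ddCt = dCt(treated) - dCt(control); fold change = 2^-ddCt.
    """
    d_ct_treated = ct_target_treated - ct_actin_treated
    d_ct_control = ct_target_control - ct_actin_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for an inflammatory cytokine after AFB1 exposure:
fold = relative_abundance(24.0, 16.0, 26.0, 16.0)
print(fold)  # 4.0 (target crosses threshold two cycles earlier => 2^2-fold)
```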
Statistical analysis
All experimental data are presented as means ± standard deviations. The data were analyzed by one-way analysis of variance (ANOVA) followed by Duncan's multiple range test with SPSS 20.0 software (Sishu Software Shanghai Co., Ltd., Shanghai, China). All graphs were generated using GraphPad Prism 8. Differences were considered statistically significant at p < 0.05.
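The one-way ANOVA used here compares between-group and within-group variance. A minimal sketch of the F statistic follows; the triplicate values are hypothetical, and Duncan's post hoc test (which ranks group means after a significant F) is omitted.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-group (residual) sum of squares
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical viability triplicates: control / AFB1 / AFB1 + CPADE
f = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]])
print(f)  # 3.0
```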
The growth curves of primary intestinal epithelium, liver and kidney cells of chicken embryo
Figure 1 shows that the logarithmic growth phases of intestinal epithelial, liver and kidney cells appeared during incubation periods of 8-12, 6-12 and 6-12 days, reaching the logarithmic peaks on the 10th, 12th and 6th days, respectively (p < 0.05).
Effects of AFB1 on the viabilities of primary intestinal epithelial, liver and kidney cells

Table 3 shows that AFB1 decreased cell viability in dose- and time-dependent manners: higher AFB1 concentrations and longer incubation times caused more serious damage to all three kinds of cells. AFB1 had no significant effect on intestinal epithelial cell viability at concentrations below 80 μg/L within 48 h of incubation (p > 0.05); however, viability was significantly affected when the AFB1 concentration exceeded 80 μg/L (p < 0.05), especially at 48 h of incubation. Liver and kidney cells of chicken embryo were more sensitive to AFB1 than intestinal epithelial cells: compared with the control group, they were significantly affected by 80 μg/L AFB1 within 6 h, 40 μg/L within 12 h, 20 μg/L within 24 h, and 10 μg/L within 48 h of incubation (p < 0.05). Based on the above results, the AFB1 concentration and reaction time were set at 200 μg/L and 12 h for intestinal epithelial cells, and 40 μg/L and 12 h for liver and kidney cells, in the subsequent experiments. Cell viability was significantly changed (p < 0.05) at CPS3 levels after 12 h of incubation for kidney cells. According to the above results, the optimal incubation time was selected as 12 h in the subsequent experiment. In general, liver and kidney cells cannot directly contact microbes; therefore, CPS was selected for the liver and kidney cell incubations in the subsequent experiments. Figure 2 shows that the relative viabilities of the three kinds of cells were significantly decreased (p < 0.05) at ADE concentrations between 0.01 and 1%, whereas the cell viabilities were significantly increased at ADE concentrations between 0.0001 and 0.001% (p < 0.05). Therefore, the optimal ADE content was selected as 0.001% for the subsequent experiment.
Discussion
Aflatoxins are ubiquitous dietary contaminants all over the world, leading to low feed intake, low efficiency and substantial economic losses (Tedesco et al. 2004). Aflatoxin B1 is frequently detected in cereals, feedstuffs and diets, causing liver damage and immune inhibition in domestic animals (Kraieski et al. 2016; Yuan et al. 2016). AFB1 residues in domestic animal products are harmful to human and public health. The liver is the main target organ of AFB1, but AFB1 is also detected in the kidney and intestinal tract of animals. Therefore, it is necessary to find an effective and safe method to alleviate AFB1 toxicity for animals and humans. Nowadays, probiotics have been widely used to degrade mycotoxins. It was reported that Bacillus subtilis could germinate in the intestinal tract and reduce AFB1 absorption and residues in the internal organs of broilers (Salem et al. 2018). Compound probiotics of B. subtilis, L. casei and C. utilis were reported to increase production performance, alleviate histological lesions, degrade mycotoxins and decrease mycotoxin residues in broilers (Chang et al. 2020). In order to increase the efficiency of alleviating AFB1-induced cell damage, compound probiotics were combined with the AFB1-degrading enzyme in this study.
These results showed that the viabilities of the three kinds of primary cells decreased with increasing AFB1 concentration and incubation time, suggesting that both are main factors determining the extent of AFB1 toxicity. In general, liver and kidney cells are more sensitive to AFB1 than intestinal cells, which may be related to the different responses of different cell types and organs (Zain 2011). AFB1 can be metabolized to highly reactive metabolites by the cytochrome P450 enzyme system in liver cells, resulting in the formation of AFB1-DNA adducts that cause carcinogenesis and mutations (Valeria et al. 2020; Owumi et al. 2020). Kidney cells can be directly damaged by AFB1 through increased cell apoptosis and death. In intestinal epithelial cells, AFB1 damage is mainly manifested as barrier function loss and inflammatory response (Hernández-Ramírez et al. 2019). Because intestinal epithelial cells usually contact AFB1 directly, long-term adaptation makes them less sensitive to AFB1 than liver and kidney cells. The addition of compound probiotics and a mycotoxin-degrading enzyme could contribute to cell proliferation and alleviate the toxicity induced by AFB1, which might result from mycotoxin biodegradation (Huang et al. 2018). Different concentrations of CP or CPS at different reaction times had different effects on the viabilities of the three kinds of cells; therefore, the optimal CP or CPS concentrations and reaction times were selected for improving the viabilities of the different cell types. CP was also more effective than CPS at increasing cell viabilities, possibly owing to the interaction between the primary cells and the microbes.
Previous research has indicated that lactic acid bacteria synthesize a wide variety of polysaccharides during their growth (Round et al. 2011; Poole et al. 2018). These polysaccharides can be classified into two kinds. One kind, the capsular polysaccharides, is tightly linked to the cell surface; capsular polysaccharide adhesion to intestinal epithelial cells is believed to help probiotic bacteria transiently colonize and persist on epithelial cells, thereby decreasing the colonization of intestinal pathogens (Castro-Bravo et al. 2018). The other kind, the extracellular polysaccharides, is loosely attached to the extracellular surface or secreted into the environment as exopolysaccharides, and can modulate intestinal immunity and reduce the secretion of proinflammatory cytokines (Castro-Bravo et al. 2018; Laiño et al. 2016). Enterococcus faecalis can directly produce extracellular polysaccharide (Rossi et al. 2015), which may be why CP improved cell vitality more than CPS in this study. However, long-term incubation with CP or CPS was harmful to the cells, possibly because of secondary metabolites produced by the probiotics that influence cell growth.
Aspergillus oryzae produces many kinds of enzymes, such as proteases and amylases, in addition to the AFB1-degrading enzyme, which may affect cell adherence and growth. The reason why high ADE concentrations reduced cell viability might be that the high levels of these enzymes in ADE damage the cells, so a low ADE concentration was selected in this study. It was reported that supplementation with L. bulgaricus or L. rhamnosus could produce a significant protective effect against AFB1-induced liver damage and inflammatory response. Moreover, the addition of compound probiotics and a mycotoxin-degrading enzyme could protect broilers from damage induced by AFB1 (Zuo et al. 2013). In this study, the four kinds of compound probiotics plus AFB1-degrading enzyme additions significantly increased the viability of cells exposed to AFB1, suggesting that CPADE or CPSADE could alleviate the toxicity induced by AFB1 in the three kinds of primary cells.
Previous studies have demonstrated that AFB1 exposure can induce an inflammatory response in different cells and organs (Wang et al. 2019; Zhao et al. 2019). Inflammation is a response to infection, illness and injury involving the excessive expression of chemokines and inflammatory cytokines such as TNF-α, IL-6 and IL-8 (Barutta et al. 2015; Guo et al. 2015). TNF-α is a proinflammatory cytokine that can stimulate various kinds of cells to produce chemokines, causing tissue damage and inflammatory responses (Shanmugam et al. 2016). It can be speculated that the degree of AFB1-induced damage may be decreased by suppressing the overexpression of inflammatory cytokines. In this study, AFB1 exposure significantly up-regulated the mRNA abundances of IL-6, IL-8 and TNF-α in the three kinds of primary cells, but CPADE or CPSADE addition significantly down-regulated their mRNA abundances in the intestinal and kidney cells, and all except TNF-α in the liver cells, indicating that probiotics combined with ADE could suppress the gene expression of pro-inflammatory cytokines such as IL-6 and IL-8 (Weninger and Andrian 2003).
NF-κB is an important nuclear transcription factor and a major regulator of inflammatory responses. Activated NF-κB plays a vital role in the inflammatory response by regulating multiple cytokines (Zhang et al. 2018). In response to inflammatory cytokines, inducible nitric oxide synthase (iNOS) catalyzes the production of nitric oxide (NO), a potent pro-inflammatory mediator (Surh et al. 2001). NOD1 is an innate immune sensor consisting of a C-terminal leucine-rich region (LRR), a central NOD and an N-terminal caspase-activating domain (CARD) (Ma et al. 2020). NOD1 plays an important role in the response to pathogen infection by inducing activation of intracellular signaling pathways, leading to a pro-inflammatory response (Caruso et al. 2014; Robertson et al. 2016). Several studies have shown that TLRs and NODs participate in the production of pro-inflammatory molecules to enhance immune responses (Van-Heel et al. 2005; Fritz et al. 2005). It was reported that the NLRs NOD1 and NOD2 have similar domain architectures and functions but different numbers of CARD domains (Trindade and Chen 2020), and it was confirmed that NOD1 and NOD2 can activate the classical NF-κB and MAPK pathways related to cell inflammation and apoptosis (Seger and Wexler 2016).
TLRs play vital roles in the innate immune system. The effects of different mycotoxins on the gene expression of TLR2, TLR4 and TLR7 have been reported (Chen et al. 2013). It was reported that 600 μg/kg AFB1 in broiler diets could simultaneously down-regulate the expression of the TLR2, TLR4 and TLR7 genes in the intestinal tissues of broilers and decrease the expression of cytokines such as IFN-γ and TNF-α, reducing the innate immunity of broilers. However, another study showed that mixed aflatoxins B and G could up-regulate TLR2 and TLR4 transcripts (Malvandi et al. 2013), corresponding with this study, which may be due to the dose-dependent effects of aflatoxins.
In this study, AFB1 exposure up-regulated NF-κB p65, iNOS, NOD1 and TLR2 mRNA abundances in intestinal, kidney and liver cells, leading to multiple inflammatory pathway responses, in agreement with a previous report (Yan et al. 2020); however, CPADE or CPSADE addition down-regulated their mRNA abundances, except for NOD1 and TNF-α in liver cells, indicating that CPADE or CPSADE was able to alleviate the cell inflammation and damage induced by AFB1 by suppressing the activation of the NF-κB, iNOS, NOD1 and TLR pathways.
It can be concluded that CPADE or CPSADE is able to alleviate AFB1-induced cytotoxicity and inflammation in chicken embryo primary intestinal epithelial, liver and kidney cells by down-regulating the mRNA abundances of inflammatory cytokines through suppressing the activation of the NF-κB, iNOS, NOD1 and TLR signaling pathways. These findings provide insights for the future development of CPADE or CPSADE strategies to protect primary cells from AFB1-induced damage.
"year": 2021,
"sha1": "77be31cc59f1a16e7276715b305c56620666b9f3",
"oa_license": "CCBY",
"oa_url": "https://amb-express.springeropen.com/track/pdf/10.1186/s13568-021-01196-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71a3aeeffb8a275a067d2663e224fafddd557506",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Knee awareness and functionality after simultaneous bilateral vs unilateral total knee arthroplasty: a retrospective study
AIM: To investigate knee awareness and functional outcomes in patients treated with simultaneous bilateral vs unilateral total knee arthroplasty (TKA).

METHODS: Through a database search, we identified 210 patients who had undergone unilateral TKA (UTKA) and 65 patients who had undergone simultaneous bilateral TKA (SBTKA) at our institution between 2010 and 2012. All TKAs were cemented and cruciate retaining. The mean follow-up period was 3.2 (2 to 4) years. All patients had symptomatic and debilitating unilateral or bilateral osteoarthritis for which all conservative and non-surgical treatments had failed; thus, preoperatively the patients had poor functionality. All patients were asked to complete Forgotten Joint Score (FJS) and Oxford Knee Score (OKS) questionnaires. The patients were matched according to age, gender, year of surgery, Kellgren-Lawrence score and pre- and postoperative overall knee alignment. The FJS and OKS questionnaire results of the two groups were then compared.

RESULTS: A mixed-effects model was used to analyze differences between SBTKA and UTKA. The mean difference in the OKS between the patients who had undergone SBTKA and those who had undergone UTKA was 1.5, which was not statistically significant (CI = -0.9:4.0, P-value = 0.228). The mean OKS of the SBTKA patients was 37.6 (SD = 9.0), and the mean OKS of the UTKA patients was 36.1 (SD = 9.9). The mean difference in the FJS was 2.3, which was not statistically significant (CI = -6.2:10.8, P-value = 0.593). The mean FJS of the SBTKA patients was 59.9 (SD = 27.5), and the mean FJS of the UTKA patients was 57.5 (SD = 28.8).

CONCLUSION: SBTKA and UTKA patients exhibited similar joint functionality and knee awareness. Our results support the use of SBTKA in selected patients suffering from clinically symptomatic bilateral osteoarthritis.
The FJS measures a patient's ability to forget the artificial joint as a result of successful treatment; this result is considered the ultimate goal of joint replacement surgery. No differences in final outcomes were observed between the groups. Therefore, individuals for whom bilateral TKA is indicated should be offered this option.
INTRODUCTION
The number of patients undergoing simultaneous bilateral total knee arthroplasty (SBTKA) has steadily increased. Currently, approximately 6% of all total knee arthroplasties (TKAs) performed in the United States are simultaneous bilateral procedures [1] . The potential benefits of SBTKA compared with staged procedures include a decreased length of hospitalization, decreased time under anesthesia, decreased rehabilitation time, and decreased cost to the healthcare system [2][3][4][5] . The disadvantages of SBTKA include an increased need for blood transfusions and increased physiological stress induced by simultaneous surgery [1,[6][7][8][9] . Although these benefits and disadvantages are accepted in the medical community, it remains a matter of debate whether functional outcomes, pain relief and patient satisfaction are equivalent between bilateral and staged procedures.
The need to rehabilitate two knees after SBTKA could be hypothesized to result in inferior functional outcomes for each knee compared with those achieved following rehabilitation of a single knee, as in unilateral TKA (UTKA) [10] . Furthermore, the increased length of time required to perform SBTKA compared with UTKA could result in inferior technical performance toward the end of the procedure, which could be reflected in functional outcomes and knee awareness [11] . Hence, according to these two hypotheses, functional outcomes and knee awareness following SBTKA could be inferior to those following UTKA. If they are indeed inferior, then the indications for performing SBTKA will be limited, and reconsideration of current SBTKA treatment strategies will be warranted.
The purpose of this study was to compare knee awareness and functional outcomes between patients who had undergone SBTKA and those who had undergone primary UTKA.
MATERIALS AND METHODS
This study was performed in accordance with the Declaration of Helsinki of the World Medical Association.
In the current retrospective, matched, case-control cohort study, we identified 69 patients who had undergone SBTKA with insertion of prostheses of the same TKA design in both knees at our institution between January 2010 and December 2012. During that period, the same TKA design was used in 240 UTKA procedures. The selected patients had symptomatic and debilitating unilateral or bilateral osteoarthritis for which conservative and non-surgical treatments had failed; hence, preoperatively all patients had poor functional performance. These UTKA patients were enrolled in the study as controls. The large size of the UTKA group ensured that as many patients as possible could be matched. Patients who had undergone knee surgery before primary TKA or who had undergone revision surgery with replacement of the prosthetic components after primary TKA were excluded. All TKA procedures had been performed using a medial para-patellar approach and were cemented and cruciate retaining (AGC, Biomet, Warsaw, Indiana). Additionally, all procedures included patellar resurfacing. The AGC prosthesis is a widely used TKA system that demonstrated good clinical results and longevity in earlier studies [12][13][14] . All patients had undergone surgery in a fast-track setting and had followed the same standardized postoperative rehabilitation program [15] . Patients had been selected for SBTKA if they had bilateral disabling osteoarthritis and no cardiopulmonary comorbidity (ASA 1 to 2).
Gender, age at the time of surgery and year of surgery were documented for all patients. Preoperative radiographs were available for all knees and were analyzed for the degree of osteoarthritis using the Kellgren-Lawrence (KL) grading scale [16,17] . Pre-and postoperative anteroposterior knee anatomical alignment was measured using short-film radiographs according to the method described by Petersen et al [18] .
The same observer performed all radiographic assessments.
SBTKA patients and UTKA controls were invited to participate in this study in January 2014. Each patient received Forgotten Joint Score (FJS) and Oxford Knee Score (OKS) questionnaires. The patients in the UTKA group received one set of questionnaires, whereas those in the SBTKA group received two sets of questionnaires, with one clearly marked for each knee. The questionnaire responses left 65 SBTKA and 210 UTKA patients eligible for matching and further analysis.
Each knee in the SBTKA group was matched 1:1 to the knees in the UTKA group regarding gender, age at the time of surgery, year of surgery, KL grade and preand postoperative anatomical knee alignment (Table 1). This resulted in a study cohort of 94 knees in 47 patients in the SBTKA group and 94 knees in 94 patients in the UTKA group. The FJS and OKS were then calculated and compared between the matched groups. The follow-up period in this study was 2 to 4 years (mean 3.2 years). A flow chart describing the study's participants can be found in Figure 1.
For all participants, the OKS was calculated. The range of the OKS is 0 to 48, with 48 being the best possible score [19] . The FJS [20] is based on a 12-item questionnaire that evaluates a patient's ability to forget about his or her artificial joint in everyday life (awareness of the knee).
The range for the FJS is 0 to 100, with 100 being the best possible score; the properties of the FJS questionnaire have been reported in earlier studies [20][21][22] . The data used in the current study were sufficiently anonymized, and The Danish National Data Protection Agency approved the project (AHH-2014-010).
Statistical analysis
The statistical methods used in this study were reviewed by Thomas Kallemose, a biomedical statistician from Clinical Research Center, Copenhagen University Hospital Hvidovre, Kettegaard Alle 30, DK-2650 Hvidovre, Copenhagen, Denmark.
Matching was performed for all patients who completed both the FJS and the OKS questionnaires. The matching was prioritized by operation year, gender, KL score, age at the time of surgery, postoperative anatomical knee alignment and preoperative anatomical knee alignment. The year of surgery was most highly prioritized because of the small amount of overlap between the UTKA and the SBTKA patients. Pooled squared differences corresponding to age at the time of surgery, postoperative anatomical knee alignment and preoperative anatomical knee alignment were used to determine the best matching; 100000 permutations were used, and the best was selected based on the smallest pooled squared difference.
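The permutation search over pooled squared differences can be sketched as below. This is an illustrative simplification: it uses far fewer permutations than the 100000 in the study, hypothetical covariate tuples, and omits the prioritized exact matching on operation year, gender and KL score.

```python
import random

def best_match(cases, controls, n_perm=1000, seed=42):
    """Pair each case knee with a control knee by randomly permuting the
    controls and keeping the pairing with the smallest pooled squared
    difference across the continuous covariates (e.g. age, alignments)."""
    rng = random.Random(seed)

    def score(pairing):
        # Pooled squared difference over all covariates of all pairs
        return sum((a - b) ** 2
                   for case, ctrl in pairing
                   for a, b in zip(case, ctrl))

    best = None
    for _ in range(n_perm):
        perm = controls[:]
        rng.shuffle(perm)
        pairing = list(zip(cases, perm))  # pairs cases with first len(cases) controls
        s = score(pairing)
        if best is None or s < best[0]:
            best = (s, pairing)
    return best

# Hypothetical covariate tuples: (age, postoperative alignment, preoperative alignment)
cases = [(68, 5.0, -2.0), (72, 6.0, 1.0)]
controls = [(75, 8.0, 4.0), (69, 5.5, -1.0), (71, 6.5, 0.5)]
best_score, pairs = best_match(cases, controls)
print(best_score)  # 3.75
```

With only three controls there are six possible pairings, so the random search quickly finds the optimum; the study's larger pool is why 100000 permutations were used.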
Sample size estimation was based on the ability to detect an inter-group difference (power 90%, P-level 0.05, SD: 10) of 5 points or more (considered to be clinically relevant) in the OKS. This resulted in a need for 85 cases per group.
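The stated sample size follows from the standard normal-approximation formula for comparing two means: with a detectable difference of 5 points, SD 10, two-sided alpha 0.05 and 90% power, it reproduces the 85 cases per group.

```python
import math

def n_per_group(delta, sd, z_alpha=1.959964, z_beta=1.281552):
    """Per-group sample size, normal approximation, for a two-sample
    comparison of means. Defaults: two-sided alpha = 0.05 (z = 1.96)
    and 90% power (z = 1.28)."""
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n)

print(n_per_group(delta=5, sd=10))  # 85
```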
A mixed-effects model was used to assess the differences between UTKA and SBTKA patients in terms of both the FJS and the OKS. The score difference between the UTKA and the SBTKA patients within each matched knee pair was used as an outcome in the model. Because of the assumed within-patient variance in the SBTKA group, a random effect corresponding to the SBTKA patients' scores was added. Because of the matching, no other factors were added to the model. A P-value less than 0.05 was considered statistically significant. All matching and analyses were performed using R 3.02 (R Foundation for Statistical Computing, Vienna, Austria).
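Fitting the mixed-effects model itself requires statistical software such as the R setup described above. Ignoring the random effect for the two knees of each SBTKA patient, the core comparison reduces to a one-sample t test on the within-pair score differences; the sketch below shows that simplification (it is not the paper's exact model, and the score differences are hypothetical).

```python
import math

def paired_t(diffs):
    """One-sample t statistic on within-pair score differences
    (difference = SBTKA knee score minus matched UTKA knee score)."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical OKS differences for five matched knee pairs:
t = paired_t([2.0, -1.0, 3.0, 0.0, 1.0])
print(round(t, 3))  # 1.414
```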
RESULTS

OKS
The mean difference in the OKS between the SBTKA and the UTKA groups was 1.5, which was not statistically significant (CI = -0.9:4.0, P-value = 0.228). The mean OKS was 37.6 (SD = 9.0) in the SBTKA group, and it was 36.1 (SD = 9.9) in the UTKA group (Table 2).
FJS
The mean difference in the FJS between the SBTKA and the UTKA groups was 2.3, which was not statistically significant (CI = -6.2:10.8, P-value = 0.593). The mean FJS was 59.9 (SD = 27.5) in the SBTKA group, and it was 57.5 (SD = 28.8) in the UTKA group (Table 2).
DISCUSSION

Several previous studies have compared SBTKA with unilateral or staged procedures; however, the majority of these studies have not focused on long-term functional outcomes or knee awareness. In the current study, we investigated functional outcomes and knee awareness during daily-living activities using patient-reported outcome measures (PROMs) following SBTKA and UTKA.
We hypothesized that the longer duration required to perform SBTKA could lead to inferior technical performance when operating on the second knee and that difficulties during postoperative rehabilitation of two knees after SBTKA could result in an overall inferior functional outcome and higher knee awareness compared with those for UTKA. We found that the SBTKA group did not significantly differ from the UTKA group with respect to functional outcomes or knee awareness at 2 to 4 years post-surgery. This result is consistent with the findings from a previous study [34] , in which 150 consecutive, but selected, SBTKA cases were compared with 271 UTKA cases in a standardized fast-track setting between 2003 and 2009. Husted et al [34] demonstrated that the outcome at three months and two years was similar or better in the SBTKA group with regard to satisfaction, the range of motion, pain, the use of a walking aid and the ability to work and perform activities of daily living. However, this previous study did not use a validated PROM, such as those used in the present study.
In a retrospective review of 697 TKAs in 511 consecutive patients (SBTKA: 186, UTKA: 325) with bilateral knee arthritis and a follow-up period of 2 to 8 years, using the Knee Society Score and its subscales as endpoints, Bagsby and Pierson [35] demonstrated a significantly better postoperative functional outcome, including an increased total range of motion (P = 0.001), improved flexion (P = 0.003), and an increased function score (P < 0.001) associated with SBTKA. They presumed that this finding was related to the absence of contralateral arthritis, which would produce pain and restrict rehabilitation. This contradicts our hypothesis that simultaneous surgeries on two knees would make rehabilitation more difficult and potentially result in an inferior outcome compared with that associated with UTKA. However, their findings are ultimately consistent with our conclusions regarding the performance of SBTKA. In a study by Zeni and Snyder-Mackler [36], 15 subjects who had undergone SBTKA were observed prospectively for a period of 2 years. Subjects in this group were matched with subjects who had undergone UTKA by age, sex and BMI, providing equal samples of 15 subjects in each group. These 2 groups were then compared with a group of 21 orthopedically healthy subjects, which served as the control group. Pre- and post-operative self-reported functional measures and objective clinical tests were then applied to the groups. At 2 years, the long-term outcomes of the bilateral group were similar to those of the matched sample of patients who had undergone UTKA and to those of the control subjects. These findings are again in accordance with the findings of the current study, which support the practice of SBTKA according to the long-term outcomes.
Seo et al [11] reviewed SBTKA outcomes in 420 patients at 1 year post-surgery. Similar to what was hypothesized in the current study, they hypothesized that the postoperative results produced by SBTKA would vary as a result of disparate surgical scenarios between knees. In support of their hypothesis, they found that the second TKA had a greater incidence of outliers in limb coronal alignment (16.2% vs 9.0%, P = 0.003), more blood loss (735 mL vs 656 mL, P < 0.001) and a slightly longer operation time (61 min vs 58 min, P < 0.001) compared with the first TKA. This supports our hypothesis that lengthier surgeries could lead to inferior technical performance near the end of a procedure, possibly resulting in inferior functional outcomes and higher knee awareness of the knee operated on last. However, at the 1-year follow-up, neither knee showed a difference in its range of motion after surgery (P = 1.000).
The postoperative flexion angle improved equally, to 129° and 127°. Moreover, no significant differences in the postoperative Knee Society Function Score or the total Western Ontario and McMaster Universities Arthritis Index scores were observed between the sides (P = 0.316 and 1.000, respectively). However, a significant difference in the postoperative Knee Society Knee Score was observed (P < 0.001). Concerns that SBTKA produces inferior functional outcomes in one or both knees thus appear to be unwarranted.
A review of previous SBTKA studies concluded that there are no sound counterarguments against the orthopedic advantages of SBTKA [8]. Any remaining debate centers on medical and anesthetic contraindications. Age and preoperative comorbidities play important roles in postoperative morbidity and mortality. In addition, 81% of the participants in the Consensus Conference on Bilateral Total Knee Arthroplasty Group [37] agreed that SBTKA is associated with an increased risk of perioperative adverse events when performed on unselected patients. The consensus group also agreed that physicians and hospitals should consider using more restrictive patient selection criteria and should exclude those with a modified cardiac risk index greater than 3 to mitigate the potentially increased risk of adverse events. Furthermore, the entire group agreed that when there is a conflict between orthopedic need and medical adequacy with regard to SBTKA, the medical concern for a patient's safety should prevail over the orthopedic need. Hence, only patients with no evidence of cardiopulmonary disease, ASA scores of 1 or 2 and bilateral disabling osteoarthritis are considered acceptable candidates for SBTKA at our institution. In the current study, because cardiopulmonary disease tends to increase with age, we found that patients in the SBTKA group were younger before matching was performed. It can be argued that younger patients might experience fewer degenerative changes in the knees. To account for this, we matched the SBTKA and UTKA groups in terms of gender, age at the time of surgery, year of surgery, KL grade and pre- and postoperative anatomical knee alignment to minimize potential bias.
The FJS has recently been introduced and validated as a post-surgical assessment tool for total joint replacement [20] . The FJS specifically evaluates a patient's level of awareness of their artificial joint in 12 scenarios commonly encountered in daily life. Joint awareness includes strong sensations, such as pain, and the ability to perform activities of daily living, as well as more subtle feelings, such as mild stiffness, subjective dysfunction and any other discomfort that a patient might encounter.
The forgotten joint concept, which is based on the level of knee awareness, is a more discerning assessment method that has shown better discriminatory power and less of a ceiling effect than traditional questionnaires measuring pain or function do. These features are especially appealing for more active patients with good to excellent outcomes after TKA. The FJS also allows detection of potential subtle differences between patients and between follow-up time points [20,21]. The current study has certain limitations. We matched patients according to the abovementioned parameters, whereas other factors (e.g., BMI, social status, psychological profile, preoperative duration and pain intensity, comorbidities and ASA score) that may potentially affect functional outcomes and knee awareness were not accounted for in this study. However, the parameters we did match on are among those with the greatest influence on functional outcomes. The primary strength of the present study is the matching of patients between the study groups in terms of gender, age at the time of surgery, KL grade and pre- and postoperative knee alignment. Because of this matching procedure, we believe that our study groups are comparable, counteracting the study's limitations.
SBTKA and UTKA patients exhibit similar knee function and knee awareness. Our results support the use of SBTKA in selected patients without cardiopulmonary comorbidity who suffer from clinically symptomatic bilateral osteoarthritis.
Background
The potential benefits of simultaneous bilateral total knee arthroplasty (TKA) include a decreased overall length of hospitalization, shorter overall anesthesia time and decreased cost to both the patient and the institution. Although many prior studies examining differences between unilateral and bilateral TKA have focused on short-term postoperative outcomes, costs, and complications, few have assessed differences in long-term results and functional outcomes.
Research frontiers
To the authors' knowledge, this is the first review to analyze patient-reported outcomes (PROs) after simultaneous bilateral TKA using the newly introduced Forgotten Joint Score (FJS). The FJS was validated in Danish in a parallel study at the authors' institution and was used to compare PROs between simultaneous bilateral TKA patients and unilateral TKA patients.
Innovations and breakthroughs
Several reports have shown the potential effects of knee alignment on PRO measures after TKA. Therefore, the authors measured osteoarthritis severity and pre- and post-operative overall knee alignment based on the radiographs of 340 patients. Moreover, the authors used the forgotten joint concept, which is a more discerning assessment method that has shown better discriminatory power and less of a ceiling effect than traditional questionnaires measuring pain or function do. These features are especially appealing for more active patients with good to excellent outcomes after TKA. The authors obtained perfect matching regarding age, gender, year of surgery, Kellgren-Lawrence score and pre- and post-operative overall knee alignment, which allowed comparison of parameters of interest without confounding by other elements. Concurrently with the FJS, the authors also used the well-known Oxford Knee Score (OKS) to investigate patient functionality after joint replacement.
Applications
The results support the use of simultaneous bilateral TKA in selected patients without cardiopulmonary comorbidity who suffer from clinically symptomatic bilateral osteoarthritis.
Genomic studies controvert the existence of the CUX1 p75 isoform
CUX1, encoding a homeodomain-containing transcription factor, is recurrently deleted or mutated in multiple tumor types. In myeloid neoplasms, CUX1 deletion or mutation carries a poor prognosis. We have previously established that CUX1 functions as a tumor suppressor in hematopoietic cells across multiple organisms. Others, however, have described oncogenic functions of CUX1 in solid tumors, often attributed to truncated CUX1 isoforms, p75 and p110, generated by an alternative transcriptional start site or post-translational cleavage, respectively. Given the clinical relevance, it is imperative to clarify these discrepant activities. Herein, we sought to determine the CUX1 isoforms expressed in hematopoietic cells and find that they express the full-length p200 isoform. Through the course of this analysis, we found no evidence of the p75 alternative transcript in any cell type examined. Using an array of orthogonal approaches, including biochemistry, proteomics, CRISPR/Cas9 genomic editing, and analysis of functional genomics datasets across a spectrum of normal and malignant tissue types, we found no data to support the existence of the CUX1 p75 isoform as previously described. Based on these results, prior studies of p75 require reevaluation, including the interpretation of oncogenic roles attributed to CUX1.
Protein isoforms and splice variants often have important and distinct biological functions. Since protein isoforms, by definition, have considerable sequence homology and may be expressed at different levels within the cell, it can be challenging to accurately differentiate between and functionally characterize such isoforms 1 . One such protein reported to have multiple isoforms is CUT-like homeobox 1 (CUX1), a HOX-family transcription factor with critical roles in development and tumorigenesis. In vertebrates, the CUX1 locus contains two distinct genes that partially share exons: CUX1, which encodes a transcription factor localized to the nucleus, and CASP, which encodes a golgi-associated transmembrane protein involved in retrograde transport [2][3][4] . Altered levels and mutations of the CUX1 transcription factor have been implicated in cancer across several tumor types and species 5,6 . CASP, on the other hand, has not been implicated in human disease 7,8 . The RefSeq database documents seven mRNA isoforms for the human CUX1 locus; five of these are CASP transcripts and two are CUX1 (Fig. S1). Due to its relevance to human health, we focus our attention herein on CUX1. For the sake of simplicity, our subsequent references to the CUX1 gene or mRNA allude to those isoforms that encode CUX1, unless stated otherwise.
CUX1 is highly conserved, ubiquitously expressed, and essential for survival in mice and Drosophila 9 . CUX1 controls many cellular processes including determination of cell identity, cell cycle progression, cell-cell communication, and cell motility 9 . In cancer, however, there are conflicting reports of CUX1 acting alternately as an oncogene or tumor suppressor gene 6 . To resolve this discrepancy, we hypothesized that distinct CUX1 protein isoforms explain these disparate functions.
The two RefSeq-annotated CUX1 mRNA transcripts vary only by alternative first exons and encode a full-length protein of 1505 amino acids, described in the literature as p200 (Figs. 1a, S1). p200 CUX1 has four DNA-binding domains, comprised of three CUT-repeat domains and one homeodomain (Fig. 1a). A truncated p110 CUX1 isoform is generated by post-translational proteolytic processing of full-length p200 CUX1 by cathepsin L (Fig. 1a) 10 . This cleavage occurs during the S phase in normal cells, and can become constitutive in transformed cells 10,11 . p110 CUX1 lacks one CUT-repeat domain and the N-terminal region but retains the
Results
Human hematopoietic cells only express the p200 CUX1 isoform. Given the relevance of CUX1 to myeloid malignancies, we first sought to identify the CUX1 isoforms expressed in human acute myeloid leukemia (AML) cells. Immunoblotting with an antibody that recognizes an epitope shared across all CUX1 isoforms (clone B-10, Fig. 1b) reveals six of eight AML cell lines express a dominant p200 CUX1 band (Fig. 1c). We also observed a less prominent 75 kDa band in five cell lines (Fig. 1c). p200 was also the predominant isoform in primary human CD34+ hematopoietic stem and progenitor cells (HSPCs), the normal counterpart thought to give rise to myeloid malignancies (Fig. 1d). To discern if the p75 band we observed corresponds to that described previously, we assessed cell lines reported to express p75 or p110: murine NIH-3T3 fibroblasts (p110), and MCF7, T47D, and MDA-MB-231 human breast cancer cell lines (p75) 12,26 . In contrast to prior findings, we did not observe a short isoform protein band in any of these cell lines, using both the B-10 and the PUC antibodies (Fig. 1e, S2). The absence of p110 in NIH-3T3 cells was also previously observed 12 .
We next blotted AML cell lines to determine if we could detect the p75 band with other CUX1 antibodies. We used a polyclonal antibody we previously generated, PUC, that recognizes amino acids 1223-1242 of CUX1 27 , and ABE217, a polyclonal antibody raised against an epitope spanning amino acid 861 of CUX1 (Fig. 1b). There were several background bands observed with PUC, but no dominant p75 band (Fig. 1f). The faint band at 75 kDa seen with the ABE217 antibody cannot be the p75 isoform, as the antibody recognizes an epitope of CUX1 upstream of the p75 protein sequence (Fig. 1b, g). Thus, p200 is the predominant CUX1 isoform in human hematopoietic cells, and we did not detect p75 or p110 in cells previously reported to express short isoforms.
To circumvent potential antibody artefacts, we took an alternative approach to determine if the p75 band is in fact CUX1, by tagging the endogenous CUX1 allele with an in-frame C-terminal GFP tag by CRISPR/Cas9 homology-mediated repair. We used KG-1 cells, which express both p75 and p200 bands. KG-1 cells have a partial deletion of chromosome arm 7q that includes CUX1, thus they are mono-allelic for CUX1, enabling facile CRISPR/Cas9 editing (Fig. S3). GFP-tagged CUX1 migrated at a higher molecular weight than endogenous CUX1, as expected (Fig. S4). Probing these cells with an anti-GFP antibody only identified a single unique p200 CUX1 band in the tagged cell line (Fig. 1h). Together, these data indicate that human AML and primary HSPCs express p200 and not shorter CUX1 isoforms.
Figure 1. Human hematopoietic cells only express the p200 CUX1 isoform. (a) Schematic representation of the CUX1 mRNA. There are two CUX1 mRNA transcripts that vary only by the alternative first exons (1a and 1b). CUX1 encodes a full-length protein of 1505 amino acids which runs at 200 kDa (p200). A truncated p110 CUX1 protein is reported to be generated by proteolytic cleavage by cathepsin L. The p75 CUX1 isoform is reported to arise from an alternative transcription site embedded within intron 20. (b) Schematic representation of the predominant CUX1 protein isoforms, with protein domains indicated, and the CUX1 antibodies used in this study. (c) Immunoblot of CUX1 in the indicated human AML cell lines, using the B-10 antibody (n = 3). 10 μg of protein was loaded for the K562 and Kasumi-1 cell lines, and 15 μg of protein was loaded for all other cell lines. (d) Immunoblot of CUX1 in primary human CD34+ HSPCs using the B-10-HRP antibody (n = 3). (e) Immunoblot of CUX1 in the NIH-3T3 fibroblast line and several human breast cancer cell lines previously reported to express p75 CUX1 using the B-10-HRP antibody (n = 3). (f) Immunoblot of CUX1 in indicated human AML cell lines, using the PUC antibody (n = 3). (g) Immunoblot of CUX1 in indicated human AML cell lines, using the ABE217 antibody (n = 3). (h) Immunoblot of GFP in a KG-1 cell line where endogenous CUX1 is C-terminally tagged with GFP. Protein from unedited KG-1 cells is also included (n = 3). Blot is cropped from the same gel to remove an intervening irrelevant lane.
CRISPR/Cas9 genomic editing precludes the existence of a CUX1 p75 isoform. To further interrogate if the p75 band is encoded by the CUX1 locus or is a non-specific western blotting artefact, we devised several CRISPR/Cas9 strategies to selectively target KG-1 genomic DNA encoding p75, p200, or both (Figs. 2a, d, 3a). As illustrated in Fig. 2a, we designed a gRNA targeting exon 4 of CUX1, which is present only in transcripts encoding the p200 isoform. This would be expected to selectively introduce a frameshift mutation in p200 only and thereby induce nonsense-mediated decay of only p200, leaving p75 intact. We identified two single-cell CRISPR-edited clones that had a complete loss of p200 CUX1 (C9 and H9), best appreciated with the ABE217 antibody (Fig. 2b). The B-10 antibody also shows a loss of p200, with a residual non-specific band migrating at a slightly higher molecular weight (Fig. 2c). The expression level of the p75 band was unchanged, as expected based on our targeting strategy (Fig. 2c).
As a control, we designed a gRNA targeting exon 23 of CUX1 (gEx23.1) to introduce frameshift mutations in and edit all CUX1 transcripts (Fig. 2d). As exon 23 is shared by all isoforms, including p75, it would be expected to disrupt all bona fide CUX1 proteins. Transfection with gEx23.1 followed by single cell cloning identified a clone with a single base pair insertion generating a frameshift mutation. We blotted the gEx23.1-edited KG-1 clone with ABE217 and observed that the band for p200 CUX1 shifted downward, consistent with a predicted C-terminal truncation of ~ 28 kDa (Fig. 2e). The B-10 antibody binds an epitope after the exon 23 gRNA cut site, thus all gEx23-edited isoforms will be undetectable with B-10 ( Fig. 1b). Indeed, probing with B-10 demonstrates abolished expression of the p200 CUX1 band, yet persistent p75 (Fig. 2f). The fact that the 75 kDa band remained indicates that it is not encoded by the CUX1 locus.
We similarly targeted CUX1 using two different exon 23 gRNAs in primary human CD34+ HSPCs, and saw no change in expression of any bands other than p200 CUX1 (Fig. S5). The residual p200 protein in the gEx23 edited lanes is consistent with ~ 75-80% editing in these bulk populations. Overall, these data indicate that hematopoietic cells express p200 CUX1 and not p75.
We considered the possibility that p75 does not contain exon 23, perhaps due to alternative splicing. As a different approach, we designed a pair of gRNAs flanking the predicted p75 intronic TSS reported to be ~ 2.5 kb upstream of exon 21 (Fig. 3a) 12 . We reasoned that eliminating the putative intronic TSS would eliminate transcription of the p75 isoform while leaving p200 unperturbed. We deleted approximately 2.56 kb of intronic DNA, leaving 79 base pairs intact proximal to exon 21. Using a PCR strategy to screen the expected deletion, we generated a successfully deleted single-cell clone (KG-1 Δp75 #21, Fig. 3b). Immunoblotting, however, indicated no change in p75 (Fig. 3c). This suggests that this 2.56 kb segment of DNA does not harbor the putative p75 TSS, and is incongruent with the prior report 12 . In summary, these experiments show that the presumptive p75 isoform contains neither exon 23 nor a TSS within 2.56 kb upstream of exon 21 as originally described, further implicating the p75 band as a western blotting artefact.
Proteomics approaches do not support the existence of p75 CUX1. We considered that the p75 artefact results from the denaturing conditions of western blotting. To test this, we performed a CUX1 immunoprecipitation which is performed with proteins in their native state. After immunoprecipitation of CUX1 in KG-1 cells with the B-10 antibody, we probed the resulting blots again with B-10. While the p75 band is present in the input control, we do not observe it in the immunoprecipitate (Fig. 4a). This result is consistent with p75 being an artefact of western blotting.
We turned to unbiased proteomics to search for any CUX1 peptides within the 75 kDa region. We subjected KG-1 whole cell lysates and CUX1 immunoprecipitates to SDS-PAGE. We excised regions corresponding to 200 kDa and 75 kDa and performed LC-MS/MS (Fig. 4b). In the 200 kDa region after CUX1 immunoprecipitation, we observed 51 peptides (21 unique, shown as blue rectangles, Fig. 4c; Table S1) summed across replicates that mapped to CUX1/CASP protein sequences. Because p200, p75 and CASP partially share exons, some peptides are ambiguous and thus map to both p200 and CASP or both p200 and p75. However, as CASP is a 77 kDa protein and p75 is 75 kDa, we can infer that all ambiguous peptides in the 200 kDa region are in fact p200. In the 75 kDa region of the immunoprecipitated samples, we observed one peptide that ambiguously mapped to the N-terminal region of CUX1/CASP, but observed no peptides mapping to the p75 region ( Fig. 4c; Table S2). We think this might be a degradation product from the p200 band for this sample, as it was a low-scoring peptide of low intensity, compared to the corresponding peptide in the p200 sample. These data confirm that p75 CUX1 does not immunoprecipitate with anti-CUX1, consistent with Fig. 4a.
We assessed the proteomic analysis of whole cell lysates, which are agnostic to antibody selection (Fig. 4b, first two lanes). The p200 band contained 8 peptides (5 unique) which, based on a priori knowledge of relative protein migrations, we ascribed to p200 ( Fig. 4d; Table S3). Within the p75 band isolated from lysates, we observed 7 peptides (3 unique) that all unambiguously map to CASP. We did not detect any peptides from the p75 region in the lysates that mapped to p75 CUX1 ( Fig. 4d; Table S4). Taken together, we conclude that there is no proteomic evidence of the p75 isoform in a representative AML cell line that possesses the p75 protein band.
No p75 CUX1 is detected at the RNA level in human AML and breast cancer cell lines. It remained possible that the p75 protein is below the level of detection of mass spectrometry. As a more sensitive test, we looked for the presence of p75 CUX1 at the mRNA level by RT-PCR, using two primer sets (pairs 2 and 4 and pair 3 and 4) spanning intron 20 and exon 22 (Fig. 5a, b). Primer pair 3 and 4 was previously reported to detect the p75 transcript 12 . We assessed cDNA from three AML cell lines (K562, Kasumi-1, and KG-1) and three human breast cancer cell lines described to express p75 (T47D, MDA-MB-231, and MCF-7). Primers for GAPDH and all CUX1 isoforms (primer pair 1 and 4 and pair ex23 and ex24) served as positive controls. We did not detect any bands with the previously reported p75 primers (3 and 4) or with our primers (2 and 4, Fig. 5c). We confirmed that primer 4 was functional, as it successfully amplified p200 (primer pair 1 and 4), indicating that the lack of a p75 transcript was not due to primer design issues. Overall, we do not detect a p75 transcript in either human AML cell lines or the breast cancer cell lines previously reported to express p75 12 .
Functional genomics consortia datasets lack epigenetic or transcriptional evidence for a CUX1 p75 intronic transcriptional start site. Heretofore, our analysis has encompassed thirteen cell types. We sought to extend our analysis to comprehensively assess additional normal and malignant tissue types. As such, we leveraged consortia-generated functional genomics datasets for evidence of p75 CUX1 across a variety of cell types. We first assessed epigenetic marks that canonically decorate promoters. Promoters and enhancers both have H3K27ac, H3K4me3 and H3K4me1 deposition, although enhancers have higher levels of H3K4me1 28 . All three of these marks are present at the p200 TSS across seven Tier 1 ENCODE cell lines (Fig. 6a). However, we did not observe any H3K4me3 peaks in the intron 20 region of CUX1, including in MCF7, previously reported to express p75 (Fig. 6a). We observed some H3K27ac and H3K4me1 peaks in intron 20, but these peaks were not at the predicted TSS and not in MCF7 cells (Fig. 6a). We also looked at the ChromHMM track in MCF7 cells. ChromHMM is a computational approach that integrates experimental ChIP-seq datasets for different histone marks into a hidden Markov model to assign chromatin states genome-wide 29 . The MCF7 ChromHMM track shows an active TSS (red region) at exons 1a and 1b but only weak transcription (dark green) in intron 20 (Fig. 6a). Thus, epigenetic features of promoters are not present at the putative p75 TSS, even in a cell type documented to express p75. Promoters are also characterized by accessible chromatin, transcription factor binding, RNA-polymerase II (POLR2A) occupancy, and CpG islands [30][31][32][33] . In line with this, the p200 TSS has pronounced DNase hypersensitivity, transcription factor occupancy, POLR2A peaks, and a CpG island (Fig. 6a).
However, there is a paucity of these signals in intron 20 and they are not enriched in the previously mapped p75 TSS. There is one DNase accessibility site in MCF7 near the predicted p75 TSS, but it does not have any of the other features of a promoter. These data are all inconsistent with an intron 20 alternative TSS.
Efforts have been made to comprehensively annotate promoters by synthesizing functional genomic datasets and curated databases. One such catalog is the Eukaryotic Promoter Database (EPD), which correctly annotates both exon 1a and 1b but not any promoters within intron 20 ( Fig. 6a "EPDnew Promoters" track) 34 .
We confirmed this observation with next-generation sequencing-based cap analysis gene expression (CAGE-seq), which captures 5′ capped mRNA transcripts to map TSSs 35 . CAGE-seq identifies 5′ capped mRNA in 5 different ENCODE cell lines (GM12878, H1-hESC, K562, HepG2 and MCF7) at exons 1a and 1b of CUX1 yet none within intron 20, including in MCF7 (Fig. 6a). Similar results were observed from CAGE-seq data from an atlas of 975 human primary cells and cancer cell lines (Fig. 6b) 36 . In aggregate, these data do not support an alternative intronic TSS.
The presumptive p75 transcript expresses intronic sequence proximal to exon 21 12 . We mined RNA-seq datasets to identify reads spanning this region in MCF7 and T-47D human breast cancer cell lines reported to express p75 6,12,37 . All exon 21 sequencing reads abruptly end at the intron 20/exon 21 border, inconsistent with the p75 transcript (Fig. 6c). In the MCF7 cell line, we observed some sequencing reads within the intron 20 region, but these were not conserved across four different replicates and were not contiguous with exon 21 reads 37 (Fig. 6c). These data do not support the existence of a p75 mRNA containing intronic sequence.
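The contiguity check described above reduces to a simple computation: given aligned read intervals, count those that cross the intron 20/exon 21 junction coordinate. The sketch below is a hypothetical simplification for illustration only (real analyses operate on BAM alignments with split-read-aware tools, and the coordinates and function name here are invented, not taken from the study).

```python
def reads_spanning_boundary(read_spans, boundary):
    """Count aligned reads whose alignment covers a genomic coordinate.

    read_spans: iterable of (start, end) alignment intervals.
    boundary: coordinate of interest, e.g. the intron 20/exon 21 junction.
    A transcript containing intronic sequence contiguous with exon 21
    would yield reads crossing this position; reads ending abruptly at
    the border, as observed in the RNA-seq datasets, would not.
    """
    return sum(1 for start, end in read_spans if start < boundary < end)

# Toy example with invented coordinates: two of three reads cross 145.
spans = [(100, 150), (140, 190), (200, 250)]
print(reads_spanning_boundary(spans, 145))
```

A count of zero at the junction, across replicates, is the pattern inconsistent with the proposed p75 transcript.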
Discussion
Next-generation sequencing and CRISPR/Cas9 genome editing have revolutionized biomedical research in many ways. Perhaps less appreciated, however, is the role of these technologies in establishing the legitimacy of research findings. CRISPR/Cas9 editing, for instance, has invalidated cancer dependencies, drug targets, and viral receptors, as some illustrative examples [43][44][45] . In this report, we employ the power of functional genomics and CRISPR editing to demonstrate that the CUX1 gene does not encode a p75 isoform as described 12 . This conclusion is buttressed via multiple orthogonal approaches including biochemical studies with several antibodies, proteomics and extensive mining of functional genomic datasets across a plethora of cell types. Taken together, our data are inconsistent with a p75 CUX1 isoform arising from an intronic TSS and suggest that prior reports were based on a western blotting artefact 12 .
Other studies support our conclusion. p75 was first identified in HeLa cells, HEK293 cells, breast cancer cell lines and mouse thymus 12 . The original manuscript identifying human CUX1 generated antiserum against the entire CUX1 protein, yet western blotting of HeLa cells identified only p200 46 . Probing HEK293 cells with an antibody against the C-terminus region of CUX1 shows no p75 in another study 19 . In a report that p75 CUX1 causes polycystic kidney disease, the endogenous p75 expression was never documented at the protein level; all subsequent experiments were performed by over-expressing p75 cDNA 47 . Probing a western blot of entire Drosophila embryos for the highly conserved CUX1 ortholog, Cut, only reveals the full-length protein 48 . While p200 CUX1 protein increases after TGF-β treatment in normal lung fibroblasts, p75 does not 49 . Finally, in a study that reported that androgen-resistant prostate cancer cell lines upregulate p200, p75 was unchanged 50 . Collectively, these studies either fail to document an endogenous p75 protein, or uncouple the biology of p200 from p75.
It is unclear what the p75 cDNA product previously reported represents 12 . Perhaps it is a result of recursive splicing, where long introns are spliced in a sequential manner in tissue-specific contexts leading to a lag in splicing of intron 20 material 51 . Alternatively, studies of nascent transcription indicate that splicing does not always occur in the order of transcription, and introns that are spliced later temporally tend to be longer and have higher RNA-binding protein occupancy 52 . In keeping with this notion, intron 20 is the sixth longest intron in CUX1, and has an elevated RNA-binding protein occupancy compared to most other introns in the gene and thus may be spliced later than other introns in CUX1 52 . It is conceivable that the p75 cDNA product previously observed represents an intermediate, incompletely spliced p200 cDNA. In our analysis of hematopoietic cells, we only document the expression of the p200 CUX1 isoform. We cannot comment on the validity of p110 or other short isoforms (p80, p90 and p150). As these isoforms are generated post-translationally, functional genomics datasets are ineffective in determining their legitimacy. Future studies of these putative isoforms in other tissue types should employ stringent techniques such as CRISPR/Cas9 mutagenesis to ensure against being misled by western blotting artefacts.
Our finding calls into question studies that ascribed oncogenic functions to p75 CUX1. Many of these publications did not study endogenous p75, but instead employed overexpression models, which can confound results 12,18,19,53,54 . We speculate that overexpression of CUX1 has dominant negative effects. For instance, in mice, both overexpression of p75 and knockout or knockdown of CUX1 leads to myeloproliferative disease 19,24,55 . One interpretation of these seemingly incongruent findings is that artificial overexpression of p75 either interferes with the stoichiometry of endogenous CUX1 protein complexes or blocks full-length CUX1 from binding to its target genes. Indeed, p75 CUX1 has increased DNA-binding affinity compared to endogenous p200 CUX1 56 . The net effect of CUX1 overexpression may be the disruption of endogenous CUX1 tumor suppressor activity.
In support of this model, the p150 CUX1 isoform was found to exert a dominant negative phenotype upon p200 CUX1 14 . In this light, the use of overexpression systems to characterize p110 may also misattribute oncogenic properties to this isoform 16,18,53,57,58 .
There are relatively few reports that p200 CUX1 is oncogenic. p200 CUX1 has been shown to promote cell line migration, invasion, and evasion of apoptosis 17,59 . p200 transgenic mice develop organ hyperplasia 60 . 7q copy number gains and CUX1 overexpression have been documented in primary cancers 6,61,62 . However, these latter findings should be interpreted with caution. Chromosome 7 also encodes oncogenes, including EGFR (on 7p), and BRAF, CDK6, and EZH2 (on 7q). Thus, in cancers with chromosome 7 copy number gains, the driver may be a true oncogene, while CUX1 is a passenger. Indeed, rigorous pan-cancer gene-level analysis of copy number alterations and mutation patterns in primary patient samples reveals that CUX1 genetic changes are significantly characteristic of a tumor suppressor gene [20][21][22] . There is now a growing body of work showing that CUX1 is tumor suppressive 21,[23][24][25]27,63 .
Given the importance of CUX1 in development and disease across a wide variety of tissue types, it is critical to carefully dissect and understand the genomic structure of the CUX1 locus and encoded protein. The complexity of the gene has led to confusion in the field, resulting in serious inaccuracies, most recently by Xu et al. 64 . We expect that our current study will help resolve this confusion going forward.
Human mobilized peripheral blood CD34+ HSPCs from multiple healthy donors were purchased from the Fred Hutchinson Co-operative Center for Excellence in Hematology (Seattle, WA, USA). CD34+ HSPCs were expanded in StemSpan SFEMII base media supplemented with CC110 culture supplement for 1-3 days prior to electroporation.
Generation of CUX1-GFP tagged KG-1 cell line. pCUX1.1.0-gDNA (Addgene plasmid #112434; RRID: Addgene_112434) and pCUX1-donor plasmids (Addgene plasmid #112338; RRID: Addgene_112338) were a gift from Kevin White. The CUX1 homology arms were chemically synthesized by Gibson assembly and comprised 0.6 kb upstream and 1 kb downstream CUX1 sequence flanking the stop codon. The GFP tag is a LAP tag that contains a TEV protease cleavage site, an S peptide, the EGFP coding sequence, followed by an IRES and a kanamycin resistance element 65 . The LAP-GFP tag was PCR amplified from a separate plasmid, and assembled with the CUX1 donor sequence using Gibson assembly. KG-1 cells were transfected with 0.5 μg of each plasmid using the Neon® Transfection System (Invitrogen by Life Technologies, Waltham, MA, USA). The electroporation settings used for transfection were 1650 V, 20 ms pulse width, 1 pulse, and electroporation was performed according to the manufacturer's instructions. Cells were cultured in 0.5 mg/ml G418 after 7 days for 3 weeks to select for a transfected population. Primers used to confirm correct integration of the GFP tag are listed below:

for 20 min, with frequent vortexing, and clarified by centrifugation (5000 g, 10 min at 4 °C). Total protein of the resulting supernatant was quantified using the Bradford assay at 595 nm wavelength, with BSA used to generate the standard curve. 10-15 µg of protein was subjected to SDS-PAGE and probed with anti-CUX1 antibody conjugated to HRP (B-10-HRP, mouse mAb derived against aa 1308-1332, Santa Cruz, 1:1000 in 5% milk/TBST) and visualized using ECL substrate. β-Actin was detected with anti-β-actin-HRP (C4, Santa Cruz, 1:3000 in 5% milk/TBST). Other antibodies used to probe for CUX1 expression include ABE217 (rabbit polyclonal antibody derived against aa 861, 1:1000 in 5% milk/TBST), and PUC (rabbit polyclonal antibody generated in-house that recognizes aa 1223-1242, 1:1000 in 5% milk/TBST).
GFP (D5.1) rabbit mAb #2956 (Cell Signaling Technology, Product #2956S, 1:1000 in 5% FBS/TBST) was used to probe for GFP-tagged CUX1. Nitrocellulose membrane (Thermo Scientific, Catalog number: 88018) was used for protein transfer and ECL substrate (Thermo Scientific, Catalog number: 34579) was used for visualizing the proteins of interest.
gRNA design. All gRNAs were designed using the Broad Institute's sgRNA designer tool, and the generated gRNA sequences were verified using Synthego's Verify Guide Design tool. gRNA sequences used in this study were purchased from Synthego, and the sequences are listed below:

Reverse-transcriptase PCR. RNA was extracted from 500,000 cells of the K562, Kasumi-1, KG-1, T47D, MDA-MB-231 and MCF-7 cell lines using Trizol, precipitated using chloroform and 70% ethanol, and purified using the RNeasy Mini kit (QIAGEN cat no 74104). 500 ng of RNA from each cell line was used to synthesize cDNA using the Thermo Scientific Maxima™ H Minus cDNA Synthesis Master Mix Kit (Thermo Scientific cat no M1661). 1 µL of this synthesized cDNA was then used in the qPCR reaction. Primers specific to the p200 CUX1 isoform and the p75 CUX1 isoform were used, with GAPDH primers as a housekeeping control. The primers for the p75 transcript were obtained from the paper that originally described the existence of this isoform 12 . In addition, we also designed our own primers spanning various regions of intron 20 of CUX1. Controls tested include a no-template water control and a no-RT-enzyme control. 30 cycles of PCR were performed. Primer sequences are listed below:

GAPDH forward primer: 5′-ACC ACA GTC CAT GCC ATC AC-3′.
GAPDH reverse primer: 5′-TCC ACC ACC CTG TTG CTG TA-3′.
p200 + p75 CUX1 forward primer: 5′-CCG GAG GAG AAG GAG GCG CT-3′.
p200 + p75 CUX1 reverse primer: 5′-AGC TGT CGC CCT CCG AGC TG-3′.

Immunoprecipitation. CUX1 was immunoprecipitated from the KG-1 cell line, and then blotted for CUX1 again to query whether short CUX1 isoforms were able to be pulled down by the anti-CUX1 antibody. 100 × 10^6 cells were spun down for a CUX1 pulldown and a control IgG pulldown each. Cells were lysed in hypotonic buffer (5 mM EDTA, 5 mM EGTA, 5 mM Tris-Cl) with protease inhibitor added (Roche complete mini-EDTA free). Pellets were passed through a 20-gauge needle and incubated on ice, then spun down.
The supernatant was removed, and the pellet was resuspended in RIPA buffer with protease inhibitor added (Roche Complete). Pellets were again passed through a 27-gauge needle, incubated on ice and subsequently spun down. The supernatant was collected, and then incubated overnight at 4 °C on a rocker with either 12 µg of the anti-CUX1 antibody (B-10, Santa Cruz) or a rabbit IgG antibody. Protein A/G beads (Santa Cruz) were then added the following day to the supernatant, and incubated at 4 °C on a rocker for 1 h. The immunoprecipitated protein was then spun down, washed in cold PBS, resuspended in loading buffer, and subjected to SDS-PAGE.
LC-MS/MS via MaxQuant.
LC-MS/MS was performed using methods adapted from those previously published 67 .
In brief, peptide samples were re-suspended in Burdick & Jackson HPLC-grade water containing 0.2% formic acid (Fluka #60-006-17), 0.1% TFA (Pierce #28903), and 0.002% Zwittergent 3-16 (Millipore Sigma #693023), a sulfobetaine detergent that contributes the following distinct peaks at the end of chromatograms: MH+ at 392, an in-source dimer [2M + H]+ at 783, and some minor impurities of Zwittergent 3-12 seen as MH+ at 336. The peptide samples were loaded onto a 100 μm × 40 cm PicoFrit column self-packed with 2.7 μm Agilent Poroshell 120, EC-C18, washed, then switched in-line with a 0.33 µL Optimize EXP2 stem trap packed with Halo 2.7 μm Pep ES-C18, for a 2-step gradient. Mobile phase A was water/acetonitrile/formic acid (98/2/0.2) and mobile phase B was acetonitrile/isopropanol/water/formic acid (80/10/10/0.2). Using a flow rate of 350 nL/min, a 90 min, 2-step LC gradient was run from 5% B to 50% B in 60 min, followed by 50-95% B over the next 10 min, held for 10 min at 95% B, then returned to starting conditions and re-equilibrated. Electrospray tandem mass spectrometry (LC-MS/MS) was performed at the Mayo Clinic Proteomics Core on a Thermo Q-Exactive Orbitrap mass spectrometer, using a 70,000 RP (70 K Resolving Power at 400 Da) survey scan in profile mode, m/z 340-1800 Da, with lockmasses, followed by 20 MSMS HCD fragmentation scans at 17,500 resolution on doubly and triply charged precursors. Singly charged ions were excluded, and ions selected for MS/MS were placed on an exclusion list for 60 s. An inclusion list (generated with in-house software) consisting of expected Cux1 sequences was used during the LC-MS/MS runs.
Density of critical clusters in strips of strongly disordered systems
We consider two models with disorder dominated critical points and study the distribution of clusters which are confined in strips and touch one or both boundaries. For the classical random bond Potts model in the large-q limit we study optimal Fortuin-Kasteleyn clusters by a combinatorial optimization algorithm. For the random transverse-field Ising chain, clusters are defined and calculated through the strong disorder renormalization group method. The numerically calculated density profiles close to the boundaries are shown to follow scaling predictions. For the random bond Potts model we have obtained accurate numerical estimates for the critical exponents and demonstrated that the density profiles are well described by conformal formulae.
I. INTRODUCTION
In a critical system the correlation length is divergent and correlated domains appear in all length scales. This phenomenon is seen for percolation 1 where the domains are connected clusters. In discrete spin models, such as in the Ising and the Potts models, domains of correlated spins can be identified in different ways. One possibility is to use geometrical clusters 2 (also called Ising or Potts clusters) which are domains of parallel spins. In two dimensions (2d) geometrical clusters percolate the sample at the critical temperature and their fractal dimension can be obtained through conformal invariance 3 . This value is generally different from the fractal dimension of Fortuin-Kasteleyn (FK) clusters 4 which are represented by graphs of the high-temperature expansion. From a geometrical cluster the FK cluster is obtained by removing bonds with probability 1 − p = e^{−K_c}, K_c being the critical value of the coupling. The fractal dimension of a FK cluster is directly related to the scaling dimension of the magnetization.
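The geometrical-to-FK mapping described above is easy to sketch: each bond of a domain of parallel spins is kept with probability p = 1 − e^{−K_c} (i.e. removed with probability e^{−K_c}), and the connected components of the surviving bonds are the FK clusters. A minimal illustration (the toy lattice and all function names are our own, not from the text):

```python
import random

def fk_clusters(n_sites, bonds, p, rng):
    """Dilute the bonds of a geometrical (parallel-spin) cluster:
    each bond is kept with probability p = 1 - exp(-K_c); the
    connected components of the kept bonds are the FK clusters."""
    parent = list(range(n_sites))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in bonds:
        if rng.random() < p:  # bond survives the dilution
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

    comps = {}
    for i in range(n_sites):
        comps.setdefault(find(i), []).append(i)
    return sorted(comps.values())

# a 4-site chain of parallel spins: bonds (0,1), (1,2), (2,3)
chain = [(0, 1), (1, 2), (2, 3)]
print(fk_clusters(4, chain, p=1.0, rng=random.Random(0)))  # [[0, 1, 2, 3]]
```

With p between 0 and 1 the same geometrical cluster fragments into several FK clusters, which is why the two fractal dimensions differ.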
In a finite geometry, such as inside strips or squares, one is interested in the spanning probability and different crossing problems of the critical clusters. For 2d percolation many exact and numerical results have been obtained in this field 5,6,7,8,9,10,11,12,13 . Another interesting problem is the density of clusters in restricted geometries 14 , which is defined by the fraction of samples for which a given point belongs to a cluster with some prescribed property, such as touching the edges of infinite and half-infinite strips, squares, etc. This latter problem is analogous to the calculation of order parameter profiles in restricted geometries, which has been intensively studied through conformal invariance and numerical methods 15,16,17,18,19,20,21,22,23,24,25,26,27,28,29 .
Correlated clusters can be defined also in models in the presence of quenched disorder. In a random fixed point one generally considers such quantities, which are averaged first over thermal fluctuations and afterwards over quenched disorder. In isotropic random systems conformal symmetry is expected to hold at the critical point so that average operator profiles and average cluster densities are expected to be invariant under conformal transformations. Among disordered systems an interesting class is represented by such models in which the transition in the pure version is of first-order, but in the disordered version the transition softens into second order 30 . This type of random fixed point can be found, among others, in the two-dimensional random bond Potts model (RBPM) for q > 4, q being the number of states 31,32 .
If the distribution of the disorder is not isotropic, e.g. it has a layered structure, then the scaling behavior of the disordered system can be anisotropic, which is manifested in the fact that the critical clusters have an elongated shape. This means that the characteristic sizes of the clusters parallel, ξ_∥, and perpendicular, ξ_⊥, to the layers are generally related as ξ_∥ ∼ ξ_⊥^z, with an anisotropy exponent z ≠ 1. These essentially anisotropic models are not conformally invariant. A well known example in this class is the McCoy-Wu model 33 , which is a two-dimensional Ising model with layered disorder. Study of this system, as well as its one-dimensional quantum version the random transverse-field Ising chain (RTFIC) has shown 34 that the critical behavior is controlled by a so-called infinite disorder fixed point (IDFP), in which scaling is strongly anisotropic 35 . The characteristic lengths are related as ln ξ_∥ ∼ ξ_⊥^{1/2}, so that the anisotropy exponent is formally infinite. The same IDFP is found to control the critical behavior of the randomly layered q-state Potts model 36 , as well as for strong enough layered disorder the critical behavior of percolation 37 and directed percolation 38 . Operator profiles in the RTFIC have been studied numerically 23 and the obtained data could be well fitted by curves which are obtained by analogy of the conformal results.
In this paper we study the density of critical clusters in two problems in which the critical properties are dominated by strong disorder effects. The first model is the two-dimensional RBPM in the large-q limit. In this model for a given realization of disorder the high-temperature series expansion is dominated by a single graph 39 , the so-called optimal diagram, thus thermal fluctuations are indeed negligible. This optimal diagram is calculated for each finite sample by a combinatorial optimization algorithm 40 . Clusters in the optimal diagram are isotropic and the density of clusters is obtained through averaging over disorder realizations. The second model we consider is the RTFIC, i.e. a random quantum model which is related to the classical McCoy-Wu model, in which the Fortuin-Kasteleyn clusters are strongly anisotropic. In the RTFIC clusters of correlated spins can be defined and calculated by the so-called strong disorder renormalization group (SDRG) method 35 . During renormalization the system is transformed into a set of effective spin clusters and for a finite system with a given realization of the disorder one obtains the final cluster, which contains the most strongly correlated sites. The fractal dimension of the final cluster at the critical point is directly related to the scaling dimension of the magnetization of the RTFIC. Here we calculate the density of these final clusters which are confined in a (one-dimensional) strip.
The two models we study in this paper are expected to be closely related, as far as their critical properties are concerned. Based on numerical and analytical studies 41,42 the scaling dimension of the magnetization, x_b, and that of the surface magnetization, x_s, are conjectured to be the same for both systems and given by 34 :

x_b = (3 − √5)/4 , x_s = 1/2 . (1)

On the other hand the correlation length exponents are related by a factor of two: it is ν = 2 for the RTFIC and ν = 1 for the RBPM. Here we are interested in a possible analogy in terms of the densities of the critical clusters. The structure of the paper is the following. Sec.II is devoted to the RBPM. Here we define the model, outline the calculation of the optimal diagram and then analyze the statistics of the distribution of the clusters. The numerically calculated densities are then compared with formulae which are obtained by modifying conformal results for percolation. In Sec.III we define the RTFIC, recapitulate the essence of the SDRG method and then numerically calculate final clusters at the critical point. The numerically calculated densities are compared with analytical formulae in this case, too. The paper is closed with a discussion.
II. RANDOM BOND POTTS MODEL

A. Model
The q-state Potts model 43 is defined by the Hamiltonian:

H = −Σ_{⟨i,j⟩} J_ij δ(σ_i, σ_j) , (2)

in terms of the Potts-spin variables, σ_i = 0, 1, · · · , q − 1, at site i. The summation runs over all edges of a lattice, ⟨i, j⟩ ∈ E, and in our study the couplings, J_ij > 0, are independent and identically distributed random numbers. To write the partition sum of the system it is convenient to use the random cluster representation 4 :

Z = Σ_{G⊆E} q^{c(G)} Π_{⟨i,j⟩∈G} (q^{βJ_ij} − 1) , (3)

where β = 1/(k_B T ln q), the sum runs over all subsets of bonds, G ⊆ E, and c(G) stands for the number of connected components of G. In the following we restrict ourselves to the square lattice, in which case the phase transition in the non-random model is of second order (first order) for q ≤ 4 (q > 4) 44 , but for random couplings the phase transition softens to second order for any value of q 45,46 . For conceptual simplicity we consider the large-q limit, where q^{βJ_ij} ≫ 1, and the partition function can be written as

Z ≈ Σ_{G⊆E} q^{φ(G)} , φ(G) = c(G) + β Σ_{⟨i,j⟩∈G} J_ij , (4)

which is dominated by the largest term, φ* = max_G φ(G).
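On a lattice small enough to enumerate every bond subset, the dominant diagram φ* = max_G φ(G) can be found by brute force, taking φ(G) = c(G) + Σ_{⟨i,j⟩∈G} K_ij with reduced couplings K_ij = βJ_ij, as in the large-q expansion. A toy sketch (the 2×2 plaquette and all function names are our own illustration, not from the text):

```python
from itertools import combinations

def n_components(n_sites, bonds):
    """Number of connected components (isolated sites count)."""
    parent = list(range(n_sites))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in bonds:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return len({find(i) for i in range(n_sites)})

def optimal_diagram(n_sites, edges, K):
    """Exhaustive search for the bond subset G maximizing
    phi(G) = c(G) + sum_{ij in G} K_ij (feasible only for tiny lattices)."""
    best_phi, best_G = float("-inf"), ()
    for r in range(len(edges) + 1):
        for G in combinations(range(len(edges)), r):
            phi = n_components(n_sites, [edges[g] for g in G]) + sum(K[g] for g in G)
            if phi > best_phi:
                best_phi, best_G = phi, G
    return best_phi, [edges[g] for g in best_G]

# 2x2 plaquette (4 sites, 4 bonds), critical point K_c = 1/2, Delta = 5/12
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
weak, strong = 1/2 - 5/12, 1/2 + 5/12
print(optimal_diagram(4, edges, [weak] * 4))    # empty diagram wins: phi = c = 4
print(optimal_diagram(4, edges, [strong] * 4))  # all four bonds win
```

With uniformly weak bonds the empty diagram (four isolated sites, φ = 4) dominates, while with uniformly strong bonds the fully connected diagram does; the efficient algorithm of Ref. 40 solves the same maximization in polynomial time for large samples.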
Consequently for a given realization of disorder the thermal fluctuations play a completely negligible role and the critical properties of the system are dominated by disorder effects. The optimal diagram of the RBPM plays a completely analogous role to the geometrical clusters in percolation theory. For example at the critical point there is a giant cluster in the optimal diagram, the fractal dimension of which, d_f, is related to the scaling dimension of the (average) magnetization as d = d_f + x_b, where d = 2 is the dimension of the system. One can also study other questions, such as the distribution of the mass of the connected clusters, the spanning probability, the surface scaling exponent, etc. Here we are going to investigate the density of clusters in strip geometry. During our study we use a bimodal form of the disorder, in which the reduced couplings, K_ij = βJ_ij, take two values: K_1 = K − ∆ and K_2 = K + ∆ with equal probability. Generally we study the critical point of the system, which is located at K = K_c = 1/2 47 independently of the value of 0 ≤ ∆ ≤ 1/2. Note that the pure system is obtained for ∆ = 0, whereas for ∆ = 1/2, when just the strong bonds are present in the system, we have the traditional percolation problem. The evolution of the optimal diagram with decreasing values of ∆ is shown in Fig.1. Here one can see that with decreasing ∆ the clusters become more compact. More precisely one can define a finite length-scale, the so-called breaking-up length, l_b, which is rapidly increasing with decreasing ∆. For small ∆ the breaking-up length has been calculated in Ref. 42 and is divergent for ∆ → 0, i.e. in the non-random system limit. In a numerical calculation on a finite sample of linear size, L, one should have the relation L ≫ l_b, thus ∆ should be not too small. On the other hand one should also be sufficiently far from the percolation limit, ∆ = 1/2, in order to get rid of cross-over effects.
This means that the optimal choice of ∆ is a result of a compromise, which in our case seems to be around ∆ = 5/12, when the typical breaking-up length is about l_b ∼ 14.
Most of our studies are made for this value, but in order to check universality, i.e. disorder independence of the results, we have also made a few calculations for ∆ = 21/48.
Calculation of the optimal diagram for a given realization of disorder is a non-trivial optimization problem, for which a very efficient combinatorial optimization algorithm has been developed 40 , which works in strongly polynomial time. Application of this method made it possible to obtain the exact optimal diagram for comparatively large finite systems. In order to have an effective strip geometry we have considered rectangular lattices with an aspect ratio of four. The strips have open boundaries along the long direction and periodic boundary conditions were used in the other direction. We mention that the same geometry has been used before for percolation, too 14 . The widths of the lattices we considered range from L = 32 up to 256. Typically we have considered several thousand samples; for the largest system we have a thousand samples.
B. Densities of critical clusters
We start to study the density of crossing clusters, ρ_b(l/L), which is given by the probability that a point at position l, measured perpendicular to the strip, belongs to a cluster which touches both boundaries of the strip. For percolation in the continuum limit, l ≫ 1, L ≫ 1 and y = l/L, the density, ρ_b(y), is calculated through conformal invariance 14 :

For the RBPM the numerically calculated normalized densities, ρ_b(l/L), for different widths are shown in Fig.2. All the data fit to the same curve and the finite breaking-up length, l_b, seems to have only a small effect.
In the surface region, l ≪ L but l > l_b, one expects from scaling theory: ρ_b(l) ∼ l^{x_s−x_b}, which is in accordance with the limiting behavior of the conformal prediction in Eq.(6) with the conjectured scaling exponents for x_b and x_s in Eq.(1). In Fig.3 we have presented ρ_b(l) in a log-log plot in the surface region for the largest finite system. Indeed, for l > l_b the points are well on a straight line, the slope of which is compatible with the conjectured value: x_s − x_b ≈ 0.309. We have also estimated the asymptotic slope of the curve by drawing a straight line through the points in a window [l_b + l/2, l_b + 3l/2] by a least-squares fit. Fixing l_b = 15, the estimates with varying l seem to have a ∼ l^{−2} correction (see the inset of Fig.3). We have also checked if the conformal result for percolation in Eq.(6), using the conjectured scaling exponents for x_b and x_s in Eq.(1), can be used to fit the scaling curve for the RBPM for the whole profile. As seen in Fig.2 the agreement between the numerical results and the formula in Eq.(6) is indeed very good. We have also calculated the ratio of the simulation to the theoretical results. In this case for the theoretical curve we used a shifted variable in which a = O(1) measures the effective position of the boundary in the lattice model. By varying a one can obtain a better fit in the boundary region, as seen in the inset of Fig.2.
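The windowed least-squares procedure just described (fitting a straight line to log ρ_b versus log l inside a window) can be sketched as follows. The synthetic profile and the function names are our own; we feed in a pure power law with the conjectured exponent x_s − x_b = (√5 − 1)/4 ≈ 0.309 simply to show that the fit recovers it:

```python
import math

X_DIFF = (math.sqrt(5) - 1) / 4   # conjectured x_s - x_b ~ 0.309

def window_slope(ls, rhos, lo, hi):
    """Least-squares slope of log(rho) vs log(l) restricted to lo <= l <= hi."""
    pts = [(math.log(l), math.log(r)) for l, r in zip(ls, rhos) if lo <= l <= hi]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# synthetic surface profile rho_b(l) ~ l^(x_s - x_b); window [l_b + l/2, l_b + 3l/2]
ls = list(range(1, 201))
rhos = [0.7 * l ** X_DIFF for l in ls]
l_b, l_win = 15, 30
slope = window_slope(ls, rhos, l_b + l_win / 2, l_b + 3 * l_win / 2)
print(round(slope, 3))  # -> 0.309
```

In a real analysis one would slide the window (vary l) and extrapolate the sequence of slopes, as done in the text.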
Next we consider the density of those clusters which touch one boundary of the strip, say at y = l/L → 0, irrespective of the other. This density, denoted by ρ_0(l/L), in the continuum approximation is calculated for percolation by conformal methods 14 as:

This density is analogous to the order parameter profile in the system with fixed-free boundary conditions 19,20 . The numerically calculated densities are shown in Fig.4 for different widths, where we used such a normalization that the curves have the same asymptotics at the free boundary, i.e. around y = 1. Close to the free boundary the densities for different L fall onto the same curve, which is well described by the conformal formula in Eq.(8) in which we have used the conjectured exponents in Eq.(1). At larger distances from the free surface the numerically calculated densities start to deviate from the conformal result for some y < ỹ_L, where ỹ_L is a decreasing function of L. If we extrapolate the simulated profiles with L, the region of agreement with the conformal result is extended to the interval 0.2 < y ≤ 1, as seen in Fig.4. The finite-size dependence of the densities in this case can be attributed to the effect of the finite breaking-up length, l_b. Close to the touched surface in the continuum limit, l_b ≪ l ≪ L, the density is described by the scaling result of Fisher and de Gennes 48 : ρ_0(l) ∼ l^{−x_b}. However, by approaching the breaking-up length, l_b, the increase of the profile is stopped and for l < l_b, ρ_0(l) starts to decrease. This is due to the structure of the connected clusters close to the surface. As seen in Fig.1 the number of touching sites in a cluster is comparatively smaller for the RBPM with ∆ < 1/2 (upper and middle panel of Fig.1) than for percolation with ∆ = 1/2 (lower panel of Fig.1). Also for finite widths the small and medium size touching clusters are rarely represented for the RBPM.
As l approaches the other, free side of the strip, the crossing clusters start to give the dominant contribution to the density, ρ_0(l/L), which is then well described by the conformal formula.
Finally we consider ρ_e(l/L), which is the density of points in such clusters which touch either the boundary at l = 1 or at l = L, or both. For percolation this density is predicted through conformal invariance as 14 :

and it is analogous to the order parameter profile with parallel fixed spin boundary conditions 15 . Note that we have the relation: ρ_b(y) = ρ_0(y) + ρ_1(y) − ρ_e(y), with ρ_1(y) = ρ_0(1 − y). For the RBPM this density is strongly perturbed by the finite breaking-up length at both boundaries, as can be seen in the inset of Fig.4. In this case we did not try to perform an extrapolation and conclude that even larger finite systems would be necessary to test the conformal predictions in a direct calculation. In order to try to test the result in Eq.(9) we studied another density which is defined on crossing clusters, which one expects to be correctly represented in smaller systems, too. Here we define a density, ρ_e^line(l/L), in crossing clusters and consider points only in such vertical lines where at both ends of the given line the cluster touches the boundaries. Since ρ_e^line(l/L) is related to the operator profile with fixed-fixed boundary conditions, we expect that it has the same scaling form as the previously defined density, ρ_e(l/L). In Fig.5 we show the calculated densities for the RBPM, which are compared with the analytical prediction in Eq.(9). A similar analysis for percolation is shown in the inset of Fig.5. In both cases we found that the numerical and analytical results for this type of profile are in satisfactory agreement, although the statistics of the numerical data is somewhat low, since just a fraction of ∼ L^{−2x_s} ∼ L^{−1} lines can be used in this analysis.
We can thus conclude that all the critical densities we considered for the RBPM are found in agreement with the theoretical prediction, which is obtained from the corresponding conformal results for percolation by replacing the scaling dimensions with the appropriate (conjectured) values for the RBPM. From an analysis of the profile, ρ_b(l), close to the boundary we have obtained a new accurate estimate of the critical exponent, x_s − x_b, giving further support to the conjecture in Eq.(1).
III. RANDOM TRANSVERSE-FIELD ISING CHAIN

A. Model
The random transverse-field Ising chain is defined by the Hamiltonian:

H = −Σ_i J_i σ_i^x σ_{i+1}^x − Σ_i h_i σ_i^z , (10)

in terms of the Pauli matrices, σ_i^{x,z}, at site i. The couplings, J_i, and the transverse fields, h_i, are independent and identically distributed random numbers. The critical point of the system is located at [ln h]_av = [ln J]_av, where we use the notation [. . .]_av to indicate the average over quenched disorder.
We note that the RTFIC is the Hamiltonian version 49 of the McCoy-Wu model 33 , which is a 2d Ising model with layered disorder. In the i-th layer of this model the couplings in the vertical and horizontal directions are given by K_1(i) and K_2(i), respectively, which are related to the parameters of the RTFIC as: h_i = τ^{−1} tanh^{−1}[exp(−2K_1(i))] and J_i = τ^{−1} K_2(i), where in the Hamiltonian limit τ → 0.
B. SDRG method
The RTFIC can be efficiently studied within the frame of a renormalization group approach 35,50 , which is expected to lead to asymptotically exact results 34 . The basic feature of this procedure is to successively eliminate those degrees of freedom which have the largest local energy scale and thus represent the fastest local mode. At a given step of the renormalization the global energy scale is defined by Ω = max{J_i, h_i} and the local term of value Ω is eliminated from the Hamiltonian. Here we have two different elementary renormalization steps: cluster formation and cluster decimation.
i) Cluster formation: if the largest local parameter is a coupling, say J_2 ≫ h_2, h_3 (h_2 and h_3 being the transverse fields acting at the two ends of J_2), then a new spin cluster is formed in an effective transverse field, h̃_23 ≈ h_2 h_3/J_2, which is calculated in a second-order perturbational calculation. The moment of the new cluster is given by µ̃_23 = µ_2 + µ_3, in terms of the moments of the original clusters, µ_2 and µ_3. In the starting Hamiltonian all spins have the same moment of unity.
ii) Cluster decimation: if the largest local parameter is a transverse field, say h_2 ≫ J_2, J_3 (J_2 and J_3 being the couplings which are connected to the site with h_2), then the spin cluster is decimated out and an effective coupling, J̃_23 ≈ J_2 J_3/h_2, is formed between the remaining sites. If the decimated spin is at the boundary of an open chain no new coupling is formed.
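The two elementary steps above can be combined into a minimal SDRG sweep for an open chain. This is a sketch under the decimation rules just stated (the data layout and function names are our own); clusters are kept as sets of original sites, so the surviving final cluster can be read off directly:

```python
def sdrg_final_cluster(J, h):
    """Strong-disorder RG for an open chain: J[k] couples clusters k and k+1,
    h[k] is the transverse field on cluster k.  The largest parameter is
    decimated until a single cluster remains; its original sites are returned."""
    clusters = [{i} for i in range(len(h))]
    J, h = list(J), list(h)
    while len(clusters) > 1:
        k = max(range(len(J)), key=lambda m: J[m])   # strongest bond
        i = max(range(len(h)), key=lambda m: h[m])   # strongest field
        if J[k] > h[i]:
            # cluster formation: h~ = h2*h3/J2, site sets (moments) add up
            h[k] = h[k] * h[k + 1] / J[k]
            clusters[k] |= clusters[k + 1]
            del clusters[k + 1], h[k + 1], J[k]
        else:
            # cluster decimation: J~ = J2*J3/h2; no new bond at a boundary
            if 0 < i < len(clusters) - 1:
                J[i - 1] = J[i - 1] * J[i] / h[i]
                del J[i]
            else:
                del J[0 if i == 0 else -1]
            del clusters[i], h[i]
    return clusters[0]

print(sdrg_final_cluster(J=[2.0, 0.2], h=[0.1, 0.1, 0.4]))  # -> {0, 1}
```

In the toy run the strong bond J_1 first fuses sites 0 and 1; the weakly coupled site 2 is then decimated by its field, leaving {0, 1} as the final cluster.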
During renormalization we repeat the elementary decimation steps, which at the starting period are only approximate, but as the energy scale is reduced and the fixed point, Ω* = 0, is approached, they become asymptotically exact. In this limit the renormalization group equations can be solved analytically. The length-scale of the clusters (and bonds), defined by the linear size of the original region which is renormalized to the new variable, is shown 34 to scale as

ℓ ∼ [ln(Ω_0/Ω)]^2 , (11)

where Ω_0 is a reference energy scale. On the other hand the average cluster moment behaves as 34 :

µ ∼ [ln(Ω_0/Ω)]^Φ , Φ = (1 + √5)/2 . (12)

Note that the average magnetization at the critical point behaves as m(ℓ) ∼ µ/ℓ ∼ ℓ^{−x_b} as lengths are rescaled by a factor ℓ, and x_b = 1 − Φ/2 is just the scaling dimension introduced in Eq.(1).
C. Densities of critical clusters
Having a finite chain of length L, we perform the decimation until the final cluster, which has a moment µ(L) ∼ L^{Φ/2}. Sites of the original chain which belong to the final cluster are very strongly correlated and we can ask questions about the density of sites in the final cluster, i.e. about the probability that a given site is contained in a final cluster. The structure of spins in the final clusters is illustrated in Fig.6. Note that the final cluster in the 1d space is disconnected, and the correlations are realized along the (imaginary) time direction. Densities of critical clusters in the RTFIC are studied numerically. We have considered a large number (3 × 10^7) of chains of length L = 2^13 = 8192 with open boundary conditions. We used the same type of uniform disorder, p(u) = 1 for 0 ≤ u ≤ 1 and p(u) = 0 for u > 1, both for the couplings and the transverse fields; in this way we have satisfied the criticality condition. The strong disorder renormalization procedure is performed for each chain up to the final spin cluster and then the statistics of the sites belonging to the final clusters are investigated. We have studied the density of three different classes of clusters, which have somewhat analogous definitions to the clusters studied for the RBPM. In terms of all final clusters (see the upper panel of Fig.6) we define ρ̄(l/L). If we consider those final clusters which contain the boundary point l = 1 (see the middle panel of Fig.6) we obtain ρ̄_0(l/L). Finally, if the clusters contain both boundary points, l = 1 and l = L (see the lower panel of Fig.6), we define ρ̄_01(l/L). We note that for these densities no analytical conjecture is available, since the system is not conformally invariant.
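Given final clusters from many disorder realizations, the densities defined above are simple site-membership counts. A sketch of the bookkeeping (the toy samples and function names are our own; with touch_left the routine counts clusters containing the left boundary site, i.e. the ρ_0-type density):

```python
def density_profile(samples, L, touch_left=False, touch_right=False):
    """Fraction of samples in which site l belongs to the final cluster,
    optionally requiring the cluster to contain site 0 and/or site L-1."""
    counts = [0] * L
    for cluster in samples:
        if touch_left and 0 not in cluster:
            continue
        if touch_right and L - 1 not in cluster:
            continue
        for site in cluster:
            counts[site] += 1
    return [c / len(samples) for c in counts]

# three toy "final clusters" on a 4-site chain (each set = one realization)
samples = [{0, 2}, {1, 3}, {0, 1, 3}]
print(density_profile(samples, 4))                   # all final clusters
print(density_profile(samples, 4, touch_left=True))  # clusters containing site 0
```

In the actual calculation each sample set would be the output of the SDRG run for one random chain, and L and the number of samples would be those quoted in the text.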
The density of all final clusters is shown in Fig.7. First we note that close to the boundaries the behavior of the profile is predicted by scaling theory as ρ(y) ∼ y x s −x b for y ≪ 1, or ρ(ℓ) ∼ ℓ x s −x b for ℓ ≪ L. This relation is indeed satisfied, as shown in inset a) of Fig.7. From this inset we can also notice that the microscopic length scale of the model, l m , is just a few lattice spacings, and for l > l m the calculated profile is well described by the asymptotic scaling result. The same asymptotic behavior holds for the formula in Eq.(6) with the appropriate scaling dimensions, and we therefore compared it with the numerical results. As seen in Fig.7, the agreement is very good for all values of y. To have a more precise check, in inset a) of Fig.7 we have presented the ratio of the numerical results to the formula in Eq.(6). Here one can notice small deviations from unity, which are of the order of 1%. Consequently the formula in Eq.(6) is a very good fit, although presumably not an exact one.
We can conclude that the density profiles of the final clusters of the SDRG procedure for the RTFIC are well described by scaling predictions close to the boundaries. The full profiles are also well approximated by the analytical formulae, although the agreement is not perfect.
The density of final clusters which contain the boundary site at l = 1 is shown in Fig.8. From scaling theory one knows the behavior of the profile close to the boundaries; this behavior is indeed found in the numerically calculated profile, as seen in the inset of Fig.8. The same asymptotics is valid for the formula in Eq.(8). We tried to fit the numerical results with this formula (with the appropriate scaling dimensions); however, the weight of the tail at y ∼ 1 given by this formula is too large, by about a factor of 2. Much better agreement with the data can be obtained with a formula which is just the average of the densities of clusters which touch one boundary and may or may not touch the other boundary. As seen in Fig.8, the analytical and numerical results are close to each other for all y, although the agreement is certainly not perfect.
Finally we consider those final clusters that touch both boundaries. The corresponding density, ρ 01 (y), is similar to the order parameter profile with fixed-fixed boundary conditions, and its functional form for percolation is given in Eq.(9). The numerically calculated profile is given in Fig.9. Here the comparatively large fluctuations of the data points are due to the fact that only a fraction ∼ L −2x s ∼ L −1 of the samples have a final cluster which touches both boundaries. We have compared the calculated profile with the analytical formula in Eq.(9), using x b from Eq.(1). The agreement is generally very good, but not perfect: small deviations of the order of a few percent can be observed (see the inset of Fig.9).
IV. DISCUSSION
In this paper we have studied the density of critical clusters in two models whose critical properties are dominated by disorder effects. One of the models is the two-dimensional random bond Potts model, for which we considered the FK clusters in the large-q limit. This model is expected to be conformally invariant, which means that average quantities related to FK clusters (such as correlation functions and magnetization densities) are invariant under conformal transformations. We note that the RBPM and conventional bond percolation represent two different fixed points of the same phase diagram, corresponding to 0 < ∆ < 1/2 and ∆ = 1/2 for the binary disorder, respectively; see Fig.1. In contrast to percolation, in the RBPM there is a finite length scale, the breaking-up length, l b , and results of the continuum approximation are expected to hold for lengths which are larger than l b . In the strip geometry we have calculated the density of points of different types of clusters (crossing clusters, clusters which touch one boundary of the strip, etc.) in analogy with a related study of percolation in Ref. 14. The densities close to free surfaces are well described by scaling predictions, and from this analysis an accurate estimate of the critical exponent x s − x b is obtained, in agreement with the conjecture in Eq.(1). The full profiles are compared with analytical formulae which are obtained from the corresponding conformal results for percolation by using the appropriate values of the bulk and surface scaling exponents in Eq.(1). We have observed that the numerically calculated profiles agree well with the conformal results outside the surface region of width ∼ l b .
The second model we considered is the RTFIC, whose fixed point is expected to control the critical behavior of a large class of 2d classical systems with anisotropic randomness. Examples are the Ising model and (directed) percolation with layered disorder. In these systems scaling at the critical point is strongly anisotropic, and these systems are therefore not conformally invariant. In the RTFIC critical clusters are defined through the strong disorder RG procedure. Here spins in the final cluster are strongly correlated and play an analogous role to clusters in percolation or FK clusters in the Potts model. The densities of the final clusters of the RTFIC close to the surfaces of the strip are shown to obey scaling relations. We also tried to find analytical formulae which correctly approximate the numerical profiles. These formulae, which are borrowed from similar studies of conformal systems, provide an overall very good description, although not a perfect one: we have noticed discrepancies of the order of a few percent.
Our investigations can be extended in different directions. For 2d classical systems one can study the density of FK clusters in the q-state Potts model, both without disorder (for q ≤ 4) and in the presence of disorder (for general values of q). One can also study the density of geometrical clusters in the 2d random-field Ising model 51,52 . For the random transverse-field Ising model one possibility is to investigate the distribution of final clusters in a 2d strip.
We thank L. Turban for useful discussions. | 2008-05-14T09:04:13.000Z | 2008-05-14T00:00:00.000 | {
"year": 2008,
"sha1": "e646b9ffda7113898b9f60eae1d2ddec20e2aca1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0805.2006",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e646b9ffda7113898b9f60eae1d2ddec20e2aca1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine",
"Physics"
]
} |
12282534 | pes2o/s2orc | v3-fos-license | Features of hepatocellular carcinoma in cases with autoimmune hepatitis and primary biliary cirrhosis.
AIM
To characterize the clinical features of hepatocellular carcinoma (HCC) associated with autoimmune liver disease, we critically evaluated the literature on HCC associated with autoimmune hepatitis (AIH) and primary biliary cirrhosis (PBC).
METHODS
A systematic review of the literature was conducted using the Japana Centra Revuo Medicina database which produced 38 cases of HCC with AIH (AIH-series) and 50 cases of HCC with PBC (PBC-series). We compared the clinical features of these two sets of patients with the general Japanese HCC population.
RESULTS
On average, HCC was more common in men than in women with AIH or PBC. Many patients underwent chemolipiodolization (CL) or transcatheter arterial embolization (TAE) [AIH-series: P = 0.048 (vs operation), P = 0.018 (vs RFA, PEIT); PBC-series: P = 0.027 (vs RFA, PEIT)], while others refused therapeutic interventions [AIH-series: P = 0.038 (vs RFA, PEIT); PBC-series: P = 0.003 (vs RFA, PEIT)]. Liver failure was the primary cause of death among patients in this study, followed by tumor rupture. The survival interval between diagnosis and death was fairly short, averaging 14 +/- 12 mo in AIH patients and 8.4 +/- 14 mo in PBC patients.
CONCLUSION
We demonstrated common clinical features among Japanese cases of HCC arising from AIH and PBC.
INTRODUCTION
Autoimmune hepatitis (AIH), primary biliary cirrhosis (PBC) and primary sclerosing cholangitis (PSC) form the triad of autoimmune liver diseases. As defined by Mackay et al [1] , AIH is a chronic active hepatitis resulting from several distinct autoimmune phenomena. While the anti-inflammatory effects of steroid therapy for this disease may inhibit the promotion of liver carcinogenesis, hepatocellular carcinoma (HCC) does occur rarely in patients with this condition (in about 0.5% of AIH cases) [2,3] .
In contrast to AIH, PBC results from an autoimmune mechanism causing chronic cholestasis and chronic non-suppurative destructive cholangitis (CNSDC) in medium-sized intrahepatic bile ducts [4] . Rare cases of HCC arising from PBC have been reported to date. However, this association is rare (affecting between 0.3% and 4.22% of cases) [5][6][7][8][9][10][11] , because a PBC patient's ability to produce regenerative nodules is weak [5][6][7][8][9]12] . Additionally, PBC is pathologically characterized by CNSDC, and the main inflammatory targets in PBC are not hepatocytes but cholangiocytes, which may be one of the reasons why the incidence of HCC with PBC is low, especially at the early stage, when cirrhotic and fibrotic changes have not yet progressed. Recently, reports have suggested that the prevalence of HCC arising from both AIH and PBC is higher than previously believed. In 2001, Caballeria et al [13] found that the incidence of HCC in patients with advanced PBC (Scheuer histological stage Ⅲ or Ⅳ) was 11.1%, approximating the 15% incidence in patients with HCV-related cirrhosis (RR 0.812, 95% CI 0.229-2.883). The clinical features of HCC associated with AIH and PBC, however, have not yet been extensively described. Here, we performed a systematic literature review of HCC cases associated with AIH and PBC in Japan, a country with a high burden of autoimmune liver disease. We conducted a critical analysis of case reports to find common themes in the demographic and clinical histories of patients with HCC associated with AIH and PBC.
MATERIALS AND METHODS
We performed a systematic literature review of case reports published in Japan and listed in the Japana Centra Revuo Medicina database, version 3 (a web-based systematic literature search system for Japanese literature), using the keywords "hepatocellular carcinoma", "autoimmune hepatitis", and "primary biliary cirrhosis". The database search was limited to the period from 1990 (when the hepatitis C virus was first detected) to the present. The quality of this database available for analysis is thoroughly well documented. In total, 38 cases of HCC associated with AIH, and 50 cases of HCC associated with PBC were identified. No cases were duplicated, and patients were identified across multiple Japanese medical centers. Most patients in the series had been diagnosed with autoimmune liver disease before HCC was identified. Several cases also presented with co-factors of liver damage and HCC development other than AIH or PBC, such as excessive alcohol intake, HBV, or HCV infection. However, no cases had evidence of hemochromatosis or α1-antitrypsin deficiency. The demographics of these two groups were recorded based on gender, age, period of medical observation, and history of blood transfusion or excessive alcohol intake. Clinical data were also recorded to determine noncancerous pathologies of the liver, HBV or HCV infection status, serum α-fetoprotein (AFP) level, maximal tumor size, history of HCC therapy, clinical outcomes, and cause of death. Cases that did not include a description of alcohol intake were assumed not to have histories of excessive alcohol intake.
We confirmed that all 38 identified cases of HCC associated with AIH met generally accepted international criteria for the diagnosis of AIH [14] . Scoring was performed prior to AIH therapy initiation; all scores were greater than 10, and cases were thereby classified as either "probable AIH" or "definite AIH". Because no internationally accepted diagnostic criteria yet exist for PBC, we utilized the Japanese standard criteria for PBC diagnosis, a standard first proposed in 1992 by a clinical study group supported by the Japanese Ministry of Welfare. According to this standard, PBC diagnosis requires that cases meet at least one of the following criteria: (1) pathologic evidence of CNSDC and positive anti-mitochondrial antibody (AMA) or anti-PDH antibody titers, (2) positive AMA or anti-PDH antibody titers and non-CNSDC pathology compatible with PBC, or (3) no liver biopsy, but positive AMA or anti-PDH antibody titers and a clinical picture and clinical course compatible with PBC. We confirmed that all 50 identified cases of HCC associated with PBC met the above diagnostic criteria. Six of the 50 (12.0%) HCC cases with PBC met the third criterion, and 44 of 50 (88.0%) met the first or second criterion. The third criterion remains ambiguous, and it is hoped that internationally accepted criteria for PBC diagnosis will be established.
If a case met both generally accepted international criteria for diagnosis of AIH, and the Japanese standard criteria for PBC diagnosis, we diagnosed the case as overlap syndrome. We had two cases of overlap syndrome, and excluded these cases from our analysis.
We did not include a control group, but used the general HCC population in Japan for comparison [15] .
Statistical analysis
Intention-to-treat analyses were used throughout, and statistical analysis for categorical comparisons of the data was performed using the program ystat2006.xls for Windows/Macintosh (Igaku Tosho Shuppan Corporation, Tokyo, Japan). We used the χ 2 test and Fisher's exact test for categorical comparisons between patients with HCC associated with AIH or PBC and HCC patients without associated autoimmune disease [15] . The following variables were assessed: gender, HBV or HCV co-infection, history of blood transfusions, history of excessive alcohol intake, positivity for serum-AFP and clinical outcomes. Because the baseline male to female ratio of AIH and PBC was 1:7 and 1:9, respectively, we performed the χ 2 test for males and females separately. We also used the χ 2 test with or without the Yates correction for categorical comparisons of pathological findings of noncancerous lesions of the liver, HCC therapy choices, and cause of death. Where significant differences were noted, χ 2 tests or Fisher's exact tests were repeated with all categorical combinations, using Bonferroni corrections for multiple comparisons. Two-tailed Mann-Whitney U-tests and F-tests were performed at the 5% significance level only for comparisons between HCC patients with AIH and PBC, as the following variables were unavailable for the general HCC population: interval between liver damage and HCC diagnosis, interval from HCC diagnosis to death, age at HCC diagnosis, serum-AFP levels, maximum tumor size and number of HCC loci. Because the patient sample size in each group was greater than 20, we chose to use P-values calculated from the asymptotic distribution. The total number of cases in each patient group did not include cases for which categorical data were unknown ( Table 1).
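For reference, the χ 2 statistic for a 2×2 contingency table (with or without the Yates continuity correction, as used above) reduces to simple arithmetic. The following is a small illustrative implementation, not the ystat2006.xls routine actually used in the study:

```python
def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    optionally with the Yates continuity correction."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    cells = ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1))
    stat = 0.0
    for obs, r, col in cells:
        expected = rows[r] * cols[col] / n
        diff = abs(obs - expected) - (0.5 if yates else 0.0)
        stat += diff ** 2 / expected
    return stat

# Example: a balanced 2x2 table whose expected counts are all 15,
# giving chi-square = 4 * (5^2 / 15) = 100/15.
example = chi2_2x2(10, 20, 20, 10)
```

The resulting statistic is then compared against the χ 2 distribution with one degree of freedom to obtain the P-value.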
The statistical analysis for survival among HCC patients with AIH and PBC was performed on a personal computer with the statistical package SPSS for Windows (version II, SPSS Inc., Chicago, IL, USA). Because there were too few published cases of HCC arising from AIH or PBC, however, differences in sur vival between patient groups could not be calculated.
RESULTS
The intervals between HCC diagnosis and death for HCC patients with AIH (14 ± 12 mo) and PBC (8.4 ± 14 mo) were notably shorter than among general HCC patients in Japan (77.5% 1-year survival, 52.5% 3-year survival, and 35.4% 5-year survival) [15] . As shown in Table 1, the survival interval for HCC patients with PBC was also significantly shorter than that for patients with AIH (P = 0.047).
Among HCC cases associated with AIH, the actual male to female ratio was 7:31. Because AIH patients in Japan are predominantly female (7:1), the corrected risk ratio for HCC among male AIH patients was 1.6:1 relative to females, and the male to female ratio of the relative numbers was 23.3:14.7 ( Table 2). The majority of Japanese PBC patients are also female, outnumbering
Table 1 Developement period of reported cases of hepatocellular carcinoma associated with autoimmune hepatitis and primary biliary cirrhosis, compared to cases of general hepatocellular carcinoma in Japan
The P-value above was calculated from the Mann-Whitney U-test and the P-value below, indicated in parentheses, was calculated from the F-test. a P < 0.05, Statistically significant. HCC: Hepatocellular carcinoma; AIH: Autoimmune hepatitis; PBC: Primary biliary cirrhosis; SD: Standard deviation; NA: Not available.
Table 2 Analysis on gender and age of reported cases of hepatocellular carcinoma associated with autoimmune hepatitis and primary biliary cirrhosis, compared to cases of general hepatocellular carcinoma in Japan
The P-value above was calculated from the Mann-Whitney U-test and the P-value below, indicated in parentheses, was calculated from the F-test. 1 The P-value was calculated from the relative numbers. HCC: Hepatocellular carcinoma; AIH: Autoimmune hepatitis; PBC: Primary biliary cirrhosis; SD: Standard deviation; NA: Not available.
Watanabe T et al. The features of HCC cases with AIH and PBC

males by 9:1. The relative risk ratio for HCC among males with PBC was 3.2:1 relative to females, and the male to female ratio of the relative numbers was 38:12 (Table 2). No significant differences in male to female ratios were noted between the three patient groups (P = 0.149, P = 0.512, P = 0.244, respectively). Among the HCC cases associated with AIH, only three (10.3%) had a history of blood transfusions, while 13 (34.2%) of the cases with PBC had such a history. Among all Japanese patients with HCC, 3633 (28.8%) had a history of blood transfusions [15] . The proportion of HCC cases associated with AIH having a history of blood transfusions was significantly lower than that of the general HCC cases in Japan (P = 0.040), and the proportion of HCC cases associated with PBC having a history of blood transfusions was significantly greater than that of the HCC cases associated with AIH (P = 0.041, Table 3).
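The corrected risk ratios and "relative numbers" quoted in this section follow from reweighting the observed case counts by the baseline sex ratio of the underlying disease. The AIH figures (7 male and 31 female cases against a baseline F:M ratio of 7:1) can be reproduced with the following illustrative arithmetic (the function name is our own, not from the paper):

```python
def corrected_risk_ratio(male_cases, female_cases, f_to_m_baseline):
    """Male-vs-female relative risk, correcting observed HCC case counts
    for the skewed sex ratio of the underlying disease population."""
    male_share = 1 / (1 + f_to_m_baseline)        # population fraction male
    female_share = f_to_m_baseline / (1 + f_to_m_baseline)
    male_rate = male_cases / male_share
    female_rate = female_cases / female_share
    return male_rate / female_rate, male_rate, female_rate

# AIH series: 7 male vs 31 female cases, baseline F:M = 7:1.
rr, m_rate, f_rate = corrected_risk_ratio(7, 31, 7)

# Rescale the corrected rates back to the observed total of 38 cases to
# recover the "relative numbers" 23.3 : 14.7 quoted above.
scale = (7 + 31) / (m_rate + f_rate)
relative_numbers = (m_rate * scale, f_rate * scale)
print(round(rr, 1))  # prints 1.6
```

The same calculation with the PBC baseline of F:M = 9:1 reproduces the 3.2:1 corrected ratio reported for the PBC series.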
Similarly, only one case (3.1%) of HCC associated with AIH had a history of excessive alcohol intake, while five (20.0%) cases associated with PBC had such a history (P = 0.352, Table 3). Among all Japanese patients with HCC, 3271 (22.3%) had a history of excessive alcohol intake [15] .
While prior infection with HBV was relatively rare among AIH patients (6.1%), it was much more prevalent among patients with PBC (25.0%, P = 0.025). Similarly, 7.9% of AIH patients tested positive for HCV, as compared to 20.4% of PBC patients (P = 0.044). The population of Japanese HCC patients without autoimmune liver disease had significantly higher rates of both HBV and HCV co-infection (P < 0.001, Table 3).
Among the HCC cases associated with AIH, 18/31 (58.1%) were found to have cirrhosis on examination of liver biopsy samples or resected samples at operation. In contrast, 29/44 (65.9%) of the HCC cases associated with PBC were found to have cirrhotic liver tissue. Within the general HCC population in Japan, 2250 of the 4941 cases for which liver specimens were available (45.5%) showed evidence of cirrhosis [15] . While the proportion of liver cirrhosis among HCC cases associated with PBC was significantly greater than that in the general HCC population in Japan (P = 0.007), no statistically significant difference in the prevalence of cirrhosis was found between AIH-associated HCC and general HCC patients (P = 0.163, Table 3).
The numbers and positive ratios of the AIH-series, PBC-series and general-HCC patients were 22/37 (59.5%), 34/47 (72.4%) and 10075/15831 (63.6%), respectively. No significant differences in positive ratios of serum-AFP were noted between the three patient groups (P = 0.597, P = 0.216, P = 0.214, respectively, Table 4). AFP levels at diagnosis were 2340.2 ng/mL (range 1-49100 ng/mL) among patients with AIH, and 854.2 ng/mL (range 4.2-14 646 ng/mL) among patients with PBC. The maximum size of the primary hepatic tumor at diagnosis was 3.97 cm (range 1.0-10.0 cm) among patients with AIH and 3.51 cm (range 1.0-8.8 cm) among PBC patients (Table 4). Due to a lack of available data, we could not compare serum AFP levels, tumor sizes and numbers of HCC loci between the autoimmune-associated HCC cases and the general HCC cases in Japan. However, we found that serum AFP level did not vary widely, and that maximum tumor size and number of HCC loci were considerably lower in patients with autoimmune liver disease than in general HCC patients (Table 4).
Among both the AIH and PBC patient groups,
Table 3 Clinical status of reported cases of hepatocellular carcinoma associated with autoimmune hepatitis and primary biliary cirrhosis, compared to cases of general hepatocellular carcinoma in Japan
the most commonly selected forms of treatment were chemolipiodolization (CL) and transcatheter arterial embolization (TAE); other options included percutaneous ethanol injection therapy (PEIT) and radiofrequency ablation (RFA). Differences in the choice of therapeutic procedures were noted as follows, although no comparisons reached statistical significance following the Bonferroni correction: (1) The rate of CL or TAE among HCC patients with AIH was greater than the rate of operations among general HCC patients (P = 0.048), (2) The rate of CL or TAE among HCC patients with AIH was greater than the rate of PEIT and RFA among general HCC patients (P = 0.018), and (3) The rate of CL or TAE in HCC patients with PBC was greater than the rate of PEIT and RFA among general HCC patients (P = 0.027). Additionally, the frequency with which HCC patients with PBC chose to forgo treatment was significantly higher than the frequency with which general HCC patients chose to undergo PEIT or RFA (P = 0.003). Although not statistically significant, the frequency with which HCC patients with AIH refused therapeutic interventions was also higher than the frequency of PEIT or RFA in the general HCC population (P = 0.038, Table 5). Ideally, data on survival by treatment modality should be presented. However, the number of patients receiving each treatment modality who were able to be followed up to death was small. Hence, the mean period from HCC development to death was calculated from patient survival following all treatment options. Future prospective studies are needed to further analyze mean survival for each treatment alternative. Across all three patient groups, we found that liver failure was the leading cause of death, followed by rupture of HCC. Among general HCC patients, neoplastic death was most common (1487/2700, 55.1%), although differences between causes of death did not reach statistical significance.
Comparisons between patient groups showed that: (1) The rate of neoplastic death in general HCC patients was higher than the rate of variceal rupture in HCC patients with AIH (P = 0.050), (2) The rate of neoplastic death in general HCC patients was higher than the rate of gastrointestinal bleeding in HCC patients with AIH (P = 0.013), and (3) The rate of neoplastic death in general HCC patients was greater than the rate of variceal rupture in HCC patients with PBC (P = 0.050, Table 5).
DISCUSSION
While autoimmune liver disease is more common among women than men in Japan, HCC in our group of patients with autoimmune liver disease was more common in men than women ( Table 2). Men with AIH had a 1.6-fold greater risk of HCC than women, while men with PBC had a 3.2-fold greater risk of HCC than women with PBC. Moreover, when we followed AIH and PBC patients during HCC surveillance, we noted that the rate of HCC development was higher in male patients with autoimmune liver disease than in female patients with autoimmune liver disease.
Table 4 Serum AFP levels, tumor sizes and number of HCC loci of reported cases of hepatocellular carcinoma associated with autoimmune hepatitis and primary biliary cirrhosis, compared to cases of general hepatocellular carcinoma in Japan
The P-value in the first row was calculated from the χ 2 test and Fisher's exact test. The P-value in the following row was calculated from the Mann-Whitney U-test and the P-value below, indicated in parentheses, was calculated from the F-test. a P < 0.05, b P < 0.01, Statistically significant. HCC: Hepatocellular carcinoma; AIH: Autoimmune hepatitis; PBC: Primary biliary cirrhosis; AFP: α-fetoprotein; SD: Standard deviation; NA: Not available.
Cirrhosis was thus found in 18/31 (58.1%) of HCC patients with AIH, 29/44 (65.9%) of HCC patients with PBC, and in only 2250/4941 (45.5%) of the general Japanese HCC population. We did not add cases with liver fibrosis (LF) to the incidence of liver cirrhosis (LC) in the general Japanese HCC population, which may be one of the reasons why the incidence of liver cirrhosis was surprisingly low. Additionally, we think that HCC cases with PBC or AIH and a non-cirrhotic liver, in which sufficient examinations and successful treatments were performed because of their higher hepatic reserve, were more likely to be reported and submitted for publication. The possibility of bias in the selection of the reported cases should be raised. Another interesting finding was that the interval between HCC diagnosis and death was shorter for patients with autoimmune liver disease than for the general HCC population of Japan [15] . Furthermore, although we found that serum AFP level did not vary widely, the maximum tumor size and number of HCC loci were considerably lower in patients with autoimmune liver disease than in general HCC patients (Table 4). One explanation for this finding may be a selection bias, as cases which were detected earlier and treated successfully were more likely to be submitted for publication. Despite a smaller tumor size and a lower number of HCC loci in patients with HCC arising in the setting of autoimmune liver disease at the time of HCC diagnosis, the shorter reported survival was not attributed to late detection of HCC or failure to survey patients with autoimmune liver disease for HCC, but was more likely due to advanced liver disease and cirrhosis. Future prospective studies will be needed to verify or refute these findings.
Although CL and TAE were the most frequently selected treatment modalities across all patient groups (Table 5), many patients ultimately refused treatment due to advanced age or social circumstances. Medical treatments using CL or TAE may be common because HCC cases are often inoperable due to cirrhotic liver disease in these patients. While survival may be related to the choice of therapeutic options, inconsistencies in data reporting over multiple decades and across multiple medical centers made the calculation of survival data difficult.
Several mechanisms explaining the development of HCC from autoimmune liver diseases have been proposed: enhanced progression to cirrhosis through progressive autoimmune hepatitis, decreased antitumor immune responses caused by long-term administration of steroids and immunosuppressants, or virus-mediated hepatitis [16,17] . In this study, we found significantly higher rates of HBV and HCV among PBC patients with HCC than among AIH patients with HCC. This finding may be attributable to the higher rates of blood transfusion in HCC patients with PBC (P = 0.041, Table 3). This result is supported by the findings of Shimizu et al [18] , who reported that 3/16 (19%) HCC patients with PBC tested positive for prior HBV and present HCV infections. Given the high rates of prior HBV infections among HCC patients with PBC, it is possible that prior HBV infection predisposes patients to HCC through HBV-DNA becoming integrated into hepatocyte DNA. It has been reported that even in patients who test negative for serum HCV-RNA and serum HBV-DNA (less than the detection limits), HCC surveillance for AIH patients is needed similar to that conducted for PBC patients. At present, HCC transformation in early-stage precirrhotic AIH and PBC is thought to be very rare. However, a high incidence of HCC development was observed in AIH and PBC patients with overlapping HCV and HBV infection, including occult HBV infection [9,19,20] . These patients should be closely followed using ultrasonography, CT-scanning and MRI of the abdomen, as well as tumor markers for HCC. Reports of HCC cases arising from "pure" AIH and PBC (with no history of blood transfusion, excessive alcohol intake, or immunosuppressant administration, and with negative HBV and HCV serotyping) are rare [2,[27][28][29][30][31] . El-Serag et al [32] , in a multivariate analysis, reported that AIH itself is not significant; however, our study indicates that early-stage AIH and PBC patients also have the potential to develop HCC.
We advocate that "pure" or "early-stage" AIH and PBC cases should also be regularly screened for HCC.
Our data also indicate that the clinical course after diagnosis of HCC with AIH and PBC differs from virus-associated HCC, although prospective studies are needed to confirm these results. Clinicians should note the common clinical features of HCC cases with AIH and PBC at diagnosis, treatment, and follow-up of these patients.
Lastly, our findings also raise the question of why HCC rupture is the second most common cause of death in both groups of patients examined. We have recently reported a pelioid-type HCC patient with PBC, who died from rupture of HCC [33] . Peliotic change has been observed more frequently in large, poorly-differentiated and encapsulated HCCs [34] , and the features of pelioid-type HCC were high blood flow into the HCC, high pressure in the tumor, and fibrous capsular formation. It is unknown whether the ruptured HCCs in the present study had these features, as this study had severe limitations because it was retrospective. Tumors in such patients may grow rapidly, and pathophysiological factors shared by both patient groups may trigger the rupture of HCC. A prospective study on the cause of death and a pathologic study of ruptured HCC with AIH and PBC is awaited with great interest.
Further clinical and laboratory studies are needed to describe which pathological, biological and genetic features are common among HCC cases arising from AIH and PBC. How HCC in these patients relates to viral hepatitis also requires further clarification. The present study was retrospective; however, this is the first study to date that highlights the importance of these future research topics. Future prospective studies on these important subjects are required. | 2018-04-03T00:29:05.826Z | 2009-01-14T00:00:00.000 | {
"year": 2009,
"sha1": "6325b6015de5dee10b6e0d923f01ae436599116c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.15.231",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "455af6311ae6540b52e57339e655036cdfd3ec3a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258377765 | pes2o/s2orc | v3-fos-license | Characterization of microbiomic and geochemical compositions across the photosynthetic fringe
Hot spring outflow channels provide geochemical gradients that are reflected in microbial community compositions. In many hot spring outflows, there is a distinct visual demarcation as the community transitions from predominantly chemotrophs to having visible pigments from phototrophs. It has been hypothesized that this transition to phototrophy, known as the photosynthetic fringe, is a result of the pH, temperature, and/or sulfide concentration gradients in the hot spring outflows. Here, we explicitly evaluated the predictive capability of geochemistry in determining the location of the photosynthetic fringe in hot spring outflows. A total of 46 samples were taken from 12 hot spring outflows in Yellowstone National Park that spanned pH values from 1.9 to 9.0 and temperatures from 28.9 to 92.2°C. Sampling locations were selected to be equidistant in geochemical space above and below the photosynthetic fringe based on linear discriminant analysis. Although pH, temperature, and total sulfide concentrations have all previously been cited as determining factors for microbial community composition, total sulfide did not correlate with microbial community composition with statistical significance in non-metric multidimensional scaling. In contrast, pH, temperature, ammonia, dissolved organic carbon, dissolved inorganic carbon, and dissolved oxygen did correlate with the microbial community composition with statistical significance. Additionally, a statistically significant relationship was observed between beta diversity and the position relative to the photosynthetic fringe, with sites above the photosynthetic fringe being significantly different from those at or below the photosynthetic fringe according to canonical correspondence analysis. However, in combination, the geochemical parameters considered in this study only accounted for 35% of the variation in microbial community composition determined by redundancy analysis.
In co-occurrence network analyses, each clique correlated with either pH and/or temperature, whereas sulfide concentrations only correlated with individual nodes. These results indicate that there is a complex interplay between geochemical variables and the position of the photosynthetic fringe that cannot be fully explained by statistical correlations with the individual geochemical variables included in this study.
Introduction
In hot spring outflow channels, there is a visual transition from predominantly chemotrophic microbial communities to those with larger contributions from phototrophs. This transition is marked by the occurrence of green, orange, yellow, brown, and/or purple pigments in biofilms associated with chlorophylls, carotenoids, and/or phycobiliproteins (Cox et al., 2011). The first transition to pigmented organisms is referred to as the photosynthetic fringe (Shock and Holland, 2007). This visual transition corresponds to geochemical transitions as the hot spring water flows away from its source and begins to cool and equilibrate with the atmosphere. This process leads to more oxygenation and increased pH as CO2 degasses, among other geochemical changes (Nordstrom et al., 2005).
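The degassing-driven pH rise can be illustrated with a back-of-the-envelope carbonate calculation. This is a minimal sketch, not from the paper: it assumes an ideal dilute solution at 25°C, uses the standard first dissociation constant of carbonic acid, and the CO2 concentrations are illustrative rather than measured values from this study.

```python
import math

# Minimal illustration (not from the paper): as CO2 degasses from outflow
# water, the first carbonic-acid dissociation H2CO3* <-> H+ + HCO3- shifts
# and pH rises. K1 is the standard first dissociation constant at 25 C;
# the concentrations below are illustrative assumptions.
K1 = 4.45e-7

def ph_from_dissolved_co2(co2_molal):
    """Approximate pH of water holding the given dissolved CO2 (molal),
    ignoring the second dissociation, other solutes, and temperature."""
    h_plus = math.sqrt(K1 * co2_molal)  # [H+] ~ sqrt(K1 * [CO2(aq)])
    return -math.log10(h_plus)

# Two orders of magnitude of degassing raises pH by one unit.
print(ph_from_dissolved_co2(1e-3))  # ~4.68
print(ph_from_dissolved_co2(1e-5))  # ~5.68
```

Real outflow waters are buffered by many other solutes, so this only captures the direction of the trend, not the magnitude observed in any particular spring.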
Previous studies have attributed the visual appearance of the photosynthetic fringe to concurrent changes in geochemistry that are more conducive to phototrophs in hot spring outflows. Temperature, pH, and sulfide concentrations have been suggested as limiting factors of photosynthesis in hot spring outflows (Cox et al., 2011; Boyd et al., 2012; Hamilton et al., 2012). Previous work established an upper limit of photosynthesis between 73 and 75°C across environments (Kempner, 1963; Brock and Brock, 1966; Brock, 1978; Castenholz, 1969). However, the upper temperature limit of photosynthesis depends on pH (Cox et al., 2011; Boyd et al., 2012; Fecteau et al., 2022) and is reduced to ∼56°C under acidic conditions (Brock, 1970, 1971). Though the upper temperature limit for photosynthesis was established based on observation, culture work, microbial activity, and pigment studies, the advent of sequencing methods challenges these earlier findings. Additionally, the reason for the upper temperature limit for photosynthesis is still debated. Possible temperature-based limits on photosynthesis include protein instability and the functionality of the CO2-assimilating mechanism (Brock and Brock, 1966; Meeks and Castenholz, 1978). It is also uncertain why the temperature limit is lower in acidic conditions, but it is most likely because the dominant phototrophs transition from bacteria to comparatively less thermotolerant eukaryotes at lower pH (Fecteau et al., 2022). Sulfide concentration may also be a limiting factor for photosynthesis due to sulfide's ability to bind to metalloproteins and block electron flow to photosystem II (Oren et al., 1979; Miller and Bebout, 2004).
In the phototrophic communities of hot spring outflows in Yellowstone National Park (YNP), the composition of phototrophs changes with pH. In the phototrophic mats below the photosynthetic fringes of basic springs (pH > 7), the microbial communities consist predominantly of bacterial phototrophs, including Cyanobacteria and filamentous anoxygenic phototrophs (Inskeep et al., 2013; Bennett et al., 2022). The predominance of bacterial phototrophs in basic hot spring outflows has been supported by 16S rRNA gene sequencing, metagenomic sequencing, and in situ studies of bicarbonate and nitrogen fixation (Ward et al., 1990; Steunou et al., 2008; Klatt et al., 2011; Thiel et al., 2016). Below the photosynthetic fringe of acidic outflows (pH < 4), the phototrophs are typically eukaryotic and include acidophilic algae such as Cyanidioschyzon (Toplin et al., 2008; Skorupa et al., 2013). Both eukaryotic and bacterial phototrophs have been identified in the phototrophic mats of acidic to circumneutral hot springs (pH 4-7), including both Cyanobacteria and Cyanidioschyzon (Fecteau et al., 2022). Thus, in hot spring environments there exists a trend in microbial community composition from prokaryotic to eukaryotic phototrophs as pH decreases (Brock, 1973; Bennett et al., 2022; Fecteau et al., 2022).
The predominantly chemotrophic communities above the photosynthetic fringe of hot spring outflows also vary with pH (Swingley et al., 2012; Inskeep et al., 2013; Colman et al., 2016; Lindsay et al., 2018). In general, according to quantitative PCR amplification of 16S rRNA genes, Archaea dominate in the chemotrophic communities of acidic hot springs, whereas Bacteria are dominant in the chemotrophic communities of basic hot springs (Colman et al., 2018). In acidic hot springs (pH < 4), predominant archaeal constituents are Sulfolobales, Desulfurococcales, and Thermoproteales (Inskeep et al., 2013; Colman et al., 2018). The predominant bacteria in acidic hot springs include Aquificales, Thermales, Firmicutes, and Proteobacteria (Inskeep et al., 2013; Colman et al., 2016). However, it should be noted that Aquificales and Proteobacteria are predominant bacterial constituents in hot spring outflows regardless of pH. These previous studies have characterized chemotrophic community compositions and underscore the importance of pH to the composition of the community.
The distinct compositions of microbial communities along hot spring outflows have been linked to differences in the concentrations of dissolved inorganic carbon (DIC) and dissolved organic carbon (DOC). Previous work investigated the presence versus absence of various carbon fixation pathways and connected the findings to changes in carbon isotope (δ13C) data as well as DIC and DOC concentrations. In the outflow of a basic spring, "Bison Pool," also discussed in this study as hot spring "BP," δ13C measurements of the biofilm became more negative down the outflow, indicating a possible shift in carbon fixation strategies. Additionally, DIC was observed to decrease down the outflow while DOC increased. Generally, DIC concentrations depend on CO2 input from the hydrothermal source and decrease down hot spring outflows as CO2 degasses and is microbially fixed. In contrast, DOC generally increases down the outflow, and, in the case of "Bison Pool," this is attributed to meteoric water input from the surrounding meadow (Swingley et al., 2012). In the "Bison Pool" outflow, the increase in DOC was connected to a transition in the microbial community composition from chemoautotrophs at the highest temperatures to heterotrophs and phototrophs further downstream, implicating the importance of DIC/DOC in the composition of the microbial community (Swingley et al., 2012).
Nitrogen availability in YNP hot spring outflows also contributes to the microbial community composition. According to isotopic observations, measurable N-fixation only occurs at and below the photosynthetic fringe of the "Bison Pool" outflow. This limitation on N-fixation was reflected in the distribution of nitrogen fixation (nif) genes only at and below the photosynthetic fringe. In contrast, the expression of nif genes was observed above the photosynthetic fringe at "Mound Spring," also discussed in this study as hot spring "MN" (Loiacono et al., 2012). nif genes have also been identified in acidic to circumneutral springs, suggesting that nitrogen fixation is not limited by pH in hot spring ecosystems (Hamilton et al., 2011a). This was supported further by enrichment of diazotrophs from acidic YNP hot springs that were shown to fix nitrogen in situ via acetylene reduction assays (Hamilton et al., 2011b). In contrast, amoA, a gene associated with ammonia oxidation, is predominantly found in circumneutral to basic hot springs, and ammonia-oxidizers have only been enriched from circumneutral to basic YNP hot springs (De la Torre et al., 2008; Hatzenpichler et al., 2008; Hamilton et al., 2011b; Boyd et al., 2013). It is hypothesized that ammonia-oxidizers outcompete diazotrophs in circumneutral to basic hot springs, consuming the bioavailable nitrogen and thereby producing a downstream niche for diazotrophs (Hamilton et al., 2014). Genetic and geochemical analyses both implicate nitrogen as a determining factor for microbial community composition in YNP hot spring outflows.
In this study, 12 hot spring outflows in YNP were selected for sampling above, at, and below the photosynthetic fringe spanning temperatures from 28.9 to 92.2°C, pH from 1.9 to 9.0, and sulfide concentrations from below the detection limit (<0.15 µmolal) to 52.6 µmolal (Supplementary Table 1). Due to this sampling scheme, each of the 12 hot spring outflows differs in both geochemical and microbiomic diversity and complexity, in addition to differing in history and geographic location. Site selection used linear discriminant analysis (LDA) of multiple geochemical parameters to estimate equidistant geochemical space above and below the visual photosynthetic fringe, for a total of 46 samples. Communities in sample sites above the photosynthetic fringe were expected to consist predominantly of chemotrophs, while those below the photosynthetic fringe were expected to contain a larger contribution of phototrophs. Samples at the photosynthetic fringe provided insight into the transition between the predominantly chemotrophic and the phototroph-containing microbial communities. From each sample site, geochemical measurements were taken including temperature, pH, conductivity, total sulfide, total dissolved silica, ferrous iron, dissolved oxygen gas (DO), DOC, DIC, and major cation and anion concentrations (Supplementary Tables 1, 2). In addition, 16S rRNA gene sequencing was performed using a sediment slurry from each sample location to determine the microbial community composition along the hot spring outflows (Supplementary Table 3). In combination, the geochemical measurements and sequencing data provide insights into how both the microbial community and the geochemistry change and interact down hot spring outflows and across the photosynthetic fringe. However, we find that collapsing the complexity of these hot spring outflows into a list of geochemical variables was insufficient to determine the exact position of the photosynthetic fringe in geochemical space.
Materials and methods

Site selection

Sampling locations (Supplementary Table 1) were determined by linear discriminant analysis (LDA), in which at least three samples were collected to represent locations below (chemosynthetic and photosynthetic), at (fringe), and above (chemosynthetic) the photosynthetic fringe in hot spring outflow channels, though additional samples were taken at several sites to provide additional biogeochemical context. The LDA model (Equation 1) was trained to separate samples into photosynthetic and non-photosynthetic classes based on 20 variables measured across 56 samples collected from twenty-nine geochemically diverse hot springs in previous years. The 20 variables selected to construct the LDA model were chosen based on perceived biological relevance, specifically, temperature, pH, conductivity, DIC, DOC, DO, Fe(II), sulfide, phosphate, total ammonia, and total dissolved Mg, Co, Ni, Cu, Zn, As, Mo, Cd, W, and Pb. Fe(II) was used in place of total Fe because Fe(II) could be measured in the field spectrophotometrically. Additionally, representative variables were chosen per element; for example, total ammonia is representative of nitrogen species. Samples above the photosynthetic fringe tend to have greater negative LDA scores, while samples below tend to have greater positive scores. Photosynthetic fringe positions were determined visually when possible and by LDA when not discernable by eye. When testing the LDA model on 381 previously collected samples, where photosynthesis had been identified by the visual presence of photosynthetic pigments, an LDA score of 0.13 predicted photosynthetic and non-photosynthetic samples with the fewest number of false positives and negatives. In outflow channels where the fringe was not apparent, the location of the fringe was estimated by choosing a location where the LDA score was close to or equal to 0.13 based on temperature, pH, and conductivity measured in the field combined with historical data for the remaining 17 variables in the LDA model.
In the outflows of CF and MO, the photosynthetic fringe was apparent but occurred at different temperatures along each outflow, so multiple samples were taken throughout the outflows to account for these variations. Sampling locations were chosen such that they were chemically equidistant from the fringe as estimated by the LDA model. In other words, sampling was carried out such that the difference in LDA scores between the at and below samples was equal to the difference between the above and at scores. The LDA model was trained using the lda function in the MASS package in R (RRID:SCR_019125) (Venables and Ripley, 2002; R Core Team, 2013).
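The site-selection logic above can be sketched numerically. The study used the lda function from R's MASS package; the code below is a hypothetical numpy analogue (a two-class Fisher discriminant and a helper that applies the 0.13 fringe threshold with an arbitrary score spacing), intended to illustrate the idea rather than reproduce the trained model.

```python
import numpy as np

def fisher_lda_direction(X_photo, X_nonphoto):
    """Two-class Fisher discriminant direction: maximizes between-class
    separation over within-class scatter (a numpy analogue of the single
    linear discriminant axis MASS::lda returns in R)."""
    mu1 = X_photo.mean(axis=0)
    mu0 = X_nonphoto.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (np.cov(X_photo, rowvar=False) * (len(X_photo) - 1)
          + np.cov(X_nonphoto, rowvar=False) * (len(X_nonphoto) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

def pick_equidistant_sites(outflow_scores, fringe_score=0.13, delta=0.5):
    """Choose 'above'/'at'/'below' sampling positions so that the 'at' site
    is closest to the 0.13 fringe threshold and the flanking sites sit at
    symmetric offsets in LDA-score space (delta is a hypothetical spacing)."""
    scores = np.asarray(outflow_scores, dtype=float)
    at = int(np.argmin(np.abs(scores - fringe_score)))
    above = int(np.argmin(np.abs(scores - (fringe_score - delta))))
    below = int(np.argmin(np.abs(scores - (fringe_score + delta))))
    return above, at, below
```

In this sketch, projecting a candidate location's 20 geochemical measurements onto the discriminant direction gives its scalar LDA score, and `pick_equidistant_sites` then encodes the "equal score differences above and below the fringe" sampling rule.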
Geochemical sampling and analyses
Temperature, pH, and conductivity were measured in the field as previously described (Boyer et al., 2020). Temperature and conductivity were measured using a YSI-30 portable meter (YSI, Yellow Springs, OH, USA). Measurements of pH were obtained using a WTW 3110 meter and SenTix 41 temperature-compensated probes (Xylem Analytics, Weilheim, Germany) calibrated daily at ambient temperature using buffered pH solutions. Dissolved oxygen was measured optically using a PreSens Fibox 4 meter and a DP-PSt3-L2.5-St10-YOP-HT sensor calibrated to 100°C (PreSens, Regensburg, Germany) as previously described (St Clair et al., 2019). Total dissolved sulfide was determined via the methylene blue method using Hach reagents and a DR1900 spectrophotometer on unfiltered water samples analyzed immediately after collection.
Filtered (0.2 micron; Supor, Pall Corporation, Port Washington, NY, USA) water samples for laboratory analyses were collected and stored according to previously described procedures (Fecteau et al., 2022). Samples for anions were collected in 30 ml high-density polyethylene (HDPE) bottles that had been soaked and rinsed with deionized water multiple times; separate 30 ml samples for cations were collected in bottles that had been spiked with 6 M methanesulfonic acid resulting in a final concentration of ∼20 mM. These samples were frozen at −20°C as soon as possible after collection and maintained at that temperature until analysis. DIC samples were collected in acid-washed 40 ml amber glass vials and sealed with black butyl rubber septa without any headspace. DOC samples were collected in combusted (450°C, 24 h) 40 ml amber glass vials spiked with 0.1 ml of 85% phosphoric acid (Thermo Scientific, Waltham, MA, USA) and sealed with Teflon-lined septa without any headspace.
Anions (F−, Cl−, SO4 2−, Br−, NO3 −) and cations (Li+, Na+, K+, Mg2+, Ca2+, NH4 +) were determined on separate Dionex DX-600 4 mm ion chromatography systems using suppressed-conductivity detection as described elsewhere (Iacovino et al., 2020). Samples were injected via AS-40 autosamplers from 5 ml vials (2 injections per vial) onto 100 µl or 75 µl sample loops for anions or cations, respectively. Anions were separated using AG-/AS-18 columns and a hydroxide concentration gradient that was initially held isocratically at 5 mM for 10 min, followed by a nonlinear (Chromeleon curve 8) (RRID:SCR_016874) gradient applied over 32 min to 55 mM hydroxide, after which the concentration was kept constant at 55 mM for 7 min, reduced back to 5 mM hydroxide over 1 min, and then the column was re-equilibrated at 5 mM hydroxide for 10 min before the next sample injection. The flow rate was held constant at 1 ml/min. Cations were separated isocratically using 19 mM methanesulfonic acid on CG-/CS-16 columns at 0.5 ml/min over 58 min. Suppressors were operated in external water mode and suppressor currents were 137 and 50 mA for anions and cations, respectively. Calibration curves were constructed from a series of dilutions of mixed-ion standards (Environmental Express, Charleston, SC, USA) and accuracy was verified daily by analysis of an independent mixed-ion standard (Thermo Scientific).
Analyses of DIC and DOC were performed with an OI Wet Oxidation TOC analyzer coupled to a Thermo Delta Plus Advantage mass spectrometer as previously described. Briefly, CO2 was generated via addition of phosphoric acid (DIC) or sodium persulfate (DOC) and the ion chromatogram for the molecular ion (44 m/z) was used for quantification relative to calibration curves prepared with sodium bicarbonate (DIC) or glycine (DOC) standards. Three sample loops with volumes of 1 ml (calibration range 10-200 mg C L−1), 5 ml (calibration range 2-50 mg C L−1), and 25 ml (calibration range 0.25-8 mg C L−1) were employed to capture the range of carbon concentrations across the sample set.
Biological sampling, extractions, and sequencing
Hot spring outflow sediment samples were collected and preserved for biological analyses (16S rRNA gene amplicon sequencing and subsequent microbial community diversity analyses). Samples were collected using a flame-sterilized spatula into a 1.8 ml cryovial. Once samples were collected, they were transferred into a container of dry ice and frozen until they could be stored at −80°C at ASU.
For DNA extraction, biological samples were homogenized, and DNA was extracted using a ZymoBIOMICS DNA Miniprep Kit (Catalog # D4300), binding capacity 25 µg, as previously described (Howells, 2020). A NanoDrop was used to spectrophotometrically analyze the purity of the DNA and a Qubit fluorometric assay kit from Invitrogen (Catalog # Q32850) was used to determine the concentration of purified DNA. The DNA was then sequenced for both bacterial and archaeal 16S rRNA gene amplicons at Arizona State University's Biodesign Institute using Illumina MiSeq v2 2 × 300 chemistry (RRID:SCR_020134) with the Earth Microbiome Project primers 515F and 806R (Thompson et al., 2017). The 16S rRNA gene amplicon library was prepared following the Earth Microbiome Project protocol (Caporaso et al., 2012). All raw sequences were uploaded to the NCBI Sequence Read Archive (SRA) (RRID:SCR_004891) under BioProject ID PRJNA938133.
Bioinformatic analyses
FASTQC (v. 0.11.9) (RRID:SCR_014583) was used to quality filter the 16S rRNA amplicon sequences (Andrews, 2010). The resulting high quality fasta files were then processed using the QIIME2 (v. 2020.2) (RRID:SCR_021258) pipeline to produce amplicon sequence variants (ASVs) and were denoised using the DADA2 plug-in (Bolyen et al., 2019). The SILVA database (RRID:SCR_006423) was used for the taxonomic classification of ASVs (Quast et al., 2012). The produced ASV table was normalized by converting the sequence counts to relative abundances and multiplying them by the mean library size (Supplementary Table 3; Fullerton et al., 2021). Non-metric multidimensional scaling (NMDS) analyses were performed using VEGAN R software (v. 2.5-7) (RRID:SCR_011950) and Bray-Curtis dissimilarity values (Oksanen et al., 2022). The envfit function from the VEGAN package was used to add geochemical vectors to the NMDS and to calculate the respective p-values for each geochemical vector. The percent contribution for each geochemical vector was determined using the redundancy analysis (RDA) function in the VEGAN package. To determine the significance of separating sites as above, at, or below the photosynthetic fringe, analysis was performed using the canonical correspondence analysis (CCA) function from the VEGAN package and an ANOVA test was performed on each axis using the R stats package. For the co-occurrence network analysis, ASVs were filtered to include only ASVs that had more than 20 reads across all samples and occurrences across more than 3 sample sites. The remaining ASVs were used to construct a co-occurrence network using R's igraph package (v. 1.2.11) (RRID:SCR_021238), in which each node is an ASV, and each edge represents a Spearman's correlation coefficient greater than 0.7 between the two nodes (Csardi and Nepusz, 2006; Fullerton et al., 2021). Cliques were determined by using the Louvain membership algorithm (Csardi and Nepusz, 2006).
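The normalization and dissimilarity steps described above were performed in R with VEGAN; the following is a minimal Python sketch of the same two operations (relative-abundance scaling by mean library size, and Bray-Curtis dissimilarity), offered as an illustrative analogue rather than the study's actual pipeline.

```python
import numpy as np

def normalize_counts(asv_counts):
    """Scale each sample (row) to relative abundance, then multiply by the
    mean library size, matching the ASV-table normalization described above."""
    counts = np.asarray(asv_counts, dtype=float)
    lib_sizes = counts.sum(axis=1, keepdims=True)
    return counts / lib_sizes * lib_sizes.mean()

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum(|u_i - v_i|) / sum(u_i + v_i); 0 = identical, 1 = disjoint."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.abs(u - v).sum() / (u + v).sum()
```

A pairwise matrix of `bray_curtis` values over the normalized table is the input the NMDS ordination operates on.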
The nodes were then plotted and colored in Gephi (v. 0.9.4) (RRID:SCR_004293) based on their Spearman correlation with the selected geochemical variables (Bastian et al., 2009).
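The network-construction rule above (edges for Spearman's rho > 0.7 between filtered ASVs) can be sketched outside of R's igraph. This hypothetical numpy version rank-transforms each ASV's abundances and takes Pearson correlations of the ranks; note it does not handle tied ranks, which a production Spearman implementation would.

```python
import numpy as np

def spearman_matrix(abund):
    """Spearman correlations between ASVs (columns) via rank transform plus
    Pearson correlation. Ties are not handled in this sketch."""
    abund = np.asarray(abund, dtype=float)
    # double argsort converts each column to 0..n-1 ranks
    ranks = np.apply_along_axis(lambda c: np.argsort(np.argsort(c)), 0, abund)
    return np.corrcoef(ranks, rowvar=False)

def cooccurrence_edges(abund, threshold=0.7):
    """Edge list (i, j) of ASV pairs whose Spearman coefficient exceeds the
    threshold, mirroring the igraph edge rule described above."""
    rho = spearman_matrix(abund)
    n = rho.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rho[i, j] > threshold]
```

Community detection (the Louvain step) and plotting would then run on this edge list, as the study did with igraph and Gephi.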
Sampling locations and geochemical data
A total of twelve hot springs from eight separate locations within YNP were sampled to investigate the connection between the microbiomic and geochemical transitions that occur down hot spring outflows (Supplementary Table 1 and Supplementary Figure 1).
To select sample sites, an LDA equation was developed (see Materials and methods) to determine the locations above and below the photosynthetic fringe such that sampling would be equidistant from the fringe in multivariate geochemical space. The photosynthetic fringe can often be visually confirmed, as seen in example images of a sampled basic (Figure 1A) and an acidic (Figure 1B) hot spring outflow channel. An example of a sampled hot spring outflow before reaching the photosynthetic fringe is shown in Figure 1C. In 3 of the 12 hot spring outflows, the photosynthetic fringe could not be visually identified, so temperature, pH, and conductivity measurements were taken along the hot spring outflow and used in the LDA model to predict the location of the photosynthetic fringe.
In total, 46 samples were taken with the intention of spanning the temperature, pH, and sulfide ranges provided by YNP hot springs (Figure 2 and Supplementary Table 1). Of the 12 hot spring outflows sampled, four were acidic (pH < 4) (CF, MO, GL, and CH), three were acidic to circumneutral (pH 4-7) (MU, FI, and EM), and five were considered basic (pH 7-9) (RN, OB, BP, PB, and MN). Additionally, Shannon diversity values were computed to assess the alpha diversity of the microbial community and range from 2.63 to 5.93 (Supplementary Table 1). This study includes additional geochemical measurements of total ammonia, nitrate, DIC, DOC, and DO concentrations (Supplementary Table 1). Of the major ions measured, total ammonia and nitrate were the two most significant ions, contributing 3.78 and 2.55%, respectively, to overall microbial community composition (Supplementary Table 4).

FIGURE 1

Hot spring outflows with depictions of the photosynthetic fringe. Locations above (blue), at (yellow), and below (green) the photosynthetic fringe are indicated by respective diamonds. Black arrows indicate a general flow of water away from the source.
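The Shannon diversity values reported above follow the standard index; a minimal Python sketch (the study's values came from its own diversity analyses, so the numbers here are illustrative only):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A perfectly even community of 20 ASVs gives H' = ln(20) ~ 3.00, which
# falls inside the 2.63-5.93 range reported for these samples.
print(round(shannon_index([5] * 20), 2))  # 3.0
```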
Microbial community composition
The relative abundance of taxonomic classes present in each of the 46 samples was determined by 16S rRNA gene sequencing to investigate the microbial community composition and diversity along each outflow and among the hot springs sampled (Figure 3). In each of the hot springs sampled, ASVs associated with the taxonomic classes Thermotogae, Planctomycetia, Deinococci, Aquificae, Thermoprotei, and Nitrososphaeria were present. Aquificae and Deinococci were present in high relative abundances across all hot springs, up to 96.8 and 33.0%, respectively. In the four most basic springs sampled (RN, OB, MN, and PB), Deinococci reached relative abundances greater than 30%. Additionally, Nitrososphaeria occurred across all hot springs sampled with the highest relative abundances occurring in the basic sites, RN and OB, and above the photosynthetic fringe with relative abundances of 37.4 and 28.4%, respectively. Both Thermotogae and Planctomycetia occurred at all hot spring sites but had consistently low relative abundances, ranging from around 0.1-9.0 and 0.1-4.9%, respectively, when present. Additionally, there were unidentified bacteria present in each of the hot springs sampled with the highest relative abundances in acidic hot springs. As an example, below the photosynthetic fringe of GL, where the pH is 2.5, unidentified bacteria made up 21.2% of the community.

FIGURE 2

A total of 46 samples taken from the outflows of 12 separate hot springs above, at, and below the photosynthetic fringe, as determined by linear discriminant analysis, displayed as functions of (A) pH and temperature, and (B) total dissolved sulfide and temperature. In both panels (A,B), the dashed line represents the photosynthetic limits defined by Cox et al. (2011).
Amplicon sequence variants associated with photoautotrophs, including those in the phyla Cyanobacteria and Chloroflexi, such as Leptococcus and Chloroflexus, respectively, occurred with higher relative abundances in the circumneutral to basic hot spring outflows. Both Cyanobacteria and Chloroflexi occurred in nearly all hot spring outflows sampled, except that neither Cyanobacteria nor Chloroflexi occurred in the outflow of GL (49.7°C, pH 2.5) or MO (74.0°C, pH 2.4), nor did Chloroflexi occur in the outflow of FI (64.1°C, pH 5.2). Furthermore, neither Cyanobacteria nor Chloroflexi surpassed a relative abundance of 0.1% in the outflow of any of the acidic hot springs sampled. Chloroflexi reached relative abundances above 1% only at pH values greater than 7, while Cyanobacteria reached a relative abundance above 1% only at pH values greater than 8. In basic conditions, Chloroflexi and Cyanobacteria made up to 30.7 and 25.6% of the microbial community, respectively, at their highest relative abundances. Chloroflexi occurred above, at, and below the visually determined photosynthetic fringe. However, Chloroflexi reached higher relative abundances at (0.9-2.4%) and below (0.0-22.6%) the photosynthetic fringe compared to above (0.0-3.4%). Cyanobacteria were also identified above the visually detected photosynthetic fringe but did not surpass a relative abundance of 1.8% except at outflow sample BP0.5, where the relative abundance was 6.2%. In contrast, at and below the photosynthetic fringe, the relative abundances for Cyanobacteria made up to 25.6% of the microbial community. ASVs associated with putative photoheterotrophs were also present, including those in the taxonomic classes of Alphaproteobacteria and Acidobacteriia, such as Acidiphilium, Acidisphaera, and Chloracidobacterium, which, when combined, only surpass a relative abundance of 1% in a single sample, MU3, where they make up 4.0% of the community.
Samples at or below the photosynthetic fringe indicate an apparent cut-off for photosynthesis at ∼73°C for basic to circumneutral pH samples (Figure 2A). The temperature cutoff for photosynthesis is lower, ∼56°C, for the outflow of acidic hot springs (Figure 2A), consistent with previous studies (Cox et al., 2011; Boyd et al., 2012; Fecteau et al., 2022). However, it should be noted that there are samples categorized as being "above" the photosynthetic fringe that are within these temperature limits, indicating additional factors may be restricting the growth of photosynthetic organisms in individual locations. Common bacterial phototrophs, such as Cyanobacteria and Chloroflexi, are abundant in basic hot spring outflows at and below the photosynthetic fringe, but also occur in small relative abundances above the visually determined photosynthetic fringe (Supplementary Figure 2). None of the 46 sample sites that were below the photosynthetic fringe had total sulfide concentrations that exceeded 500 µg/L; however, there was one sample, GL2, assessed to be "at" the photosynthetic fringe that exceeded 500 µg/L, the suggested sulfide limit defined by Cox et al. (2011) (Figure 2B).
In addition to ASVs associated with photosynthetic taxa occurring in the circumneutral to basic hot spring outflows, ASVs associated with non-phototrophs in the taxonomic classes of Acetothermia, Kapabacteria, Fervidibacteria, Hydrothermae, Anaerolineae, and the phylum Armatimonadota were also present.

FIGURE 3

Percent relative abundance of the 16S rRNA gene sequencing results to the class level, except for Armatimonadota, which is at the phylum level, and unidentified bacteria, which were binned together at the domain level. To focus on abundant features and overarching patterns, classes not occurring at >20% relative abundance when summed over all samples were binned into the "Other" category. Organization of hot spring sites, separated by black bars, follows the order of increasing pH shown in Supplementary Table 1. Within each site, samples are organized down the outflow with above (blue diamond), at (yellow diamond), and below (green diamond) the photosynthetic fringe indicated as in Figure 1.

FIGURE 4

Non-metric multidimensional scaling (NMDS) analysis using the 16S rRNA gene sequencing data from each of the 46 samples. Each point represents the normalized microbial community composition determined in a hot spring sample while the distance between points represents the dissimilarity. Sample point colors (blue, yellow, and green) refer to the position along the photosynthetic fringe (above, at, below, respectively). Geochemical data are added as vectors; vectors that correlate with an ordination axis with a p-value < 0.05 are indicated by an asterisk.
Although Cyanobacteria and Chloroflexia were abundant within the outflow of basic hot springs, ASVs associated with the nonphotosynthetic phylum Armatimonadota were also prevalent in these samples, occurring with an average relative abundance of 13.5% when present. Armatimonadota were present in every sample above a pH of 7, except for sample site 0.5 from BP. As for the ASVs associated with the taxonomic classes Acetothermia, Kapabacteria, Fervidibacteria, Hydrothermae, and Anaerolineae, they reached relative abundances above 1% only above a pH of 7 but none were as consistently predominant as ASVs associated with the phylum Armatimonadota.
The ASVs abundant in the lower pH sites (pH < 7) include ASVs from the following taxonomic classes: Gammaproteobacteria, Alphaproteobacteria, Desulfurellia,
FIGURE 5
Co-occurrence network analysis of commonly occurring ASVs within the 46 hot spring samples. Each node represents an ASV, and nodes were organized into 18 cliques using Louvain's membership algorithm. The connections between nodes, also known as edges, represent a >0.7 Spearman's correlation coefficient. In panels (A-D), each node is colored on a gradient of blue to red based on its Spearman's correlation, from negative one to positive one, respectively, with the following geochemical parameters, (A) pH, (B) temperature, (C) total ammonia, and (D) DOC.
Each of the cliques is differentiated by color and number in panel (E).
Actinobacteria, Acidimicrobiia, and Thermoplasmata. Gammaproteobacteria, Alphaproteobacteria, and Actinobacteria were present in at least one sample from each hot spring outflow but reached relative abundances above 1% only at acidic sites. Acidimicrobiia were present in all hot springs sampled except for in the outflow of MN (75.4 • C, pH 9.0) and only reached a relative abundance over 10% at pH values below 3. Gammaproteobacteria had a high relative abundance of 61.8% at the photosynthetic fringe of hot spring CF where the temperature was 28.8 • C and the pH was 3.7. Thermoplasmata and Desulfurellia were restricted to circumneutral and acidic sites, only occurring below a pH of 8.0 and 6.5, respectively, with Desulfurellia only making up to 5.3% of any microbial community composition, while Thermoplasmata reached up to 34.5% of the relative abundance of the microbial community composition below the photosynthetic fringe of CF.
Additionally, there were ASVs associated with three taxonomic classes that were restricted to circumneutral conditions, including Ktedonobacteria, Bacteroidia, and Bathyarchaea. Ktedonobacteria only occurred with a relative abundance higher than 1% at the photosynthetic fringe of MU, where they made up 32.2% of the community, which is at a pH of 5.7. Bacteroidia occurred across pH, but only made up more than 1% of the community in the outflows of MU, FI, and EM, which range in pH from 3.9 to 8.0. Below the photosynthetic fringe of FI, Bacteroidia made up 39.0% of the microbial community. Bathyarchaea were present in small relative abundances (<0.1%) in the outflows of RN and CF but occurred with relative abundances of 13.6 and 12.6% at the photosynthetic fringes of FI and EM, respectively. The photosynthetic fringe of FI occurred at a pH of 5.8 while the photosynthetic fringe of EM occurred at a pH of 7.8.
Overall, there were trends in microbial community composition across pH as well as down the outflows of the 12 sampled hot springs. In the lower pH sites, ASVs associated with taxonomic classes such as Thermoplasmata and Gammaproteobacteria dominated. In basic sites, ASVs associated with the phylum Armatimonadota dominated, as did ASVs associated with the potentially photosynthetic phyla Cyanobacteria and Chloroflexi. The relative abundance of potentially photosynthetic bacterial taxa increased with increasing pH and down hot spring outflows. In contrast, the relative abundance of the taxonomic class Hydrothermae decreased down hot spring outflows. At all hot springs and outflows, ASVs for the taxonomic classes Aquificae and Deinococci were present and made up a considerable portion (19.0 and 10.8%, respectively, when averaged across all samples) of the microbial community composition.
Geochemical influence on community composition
Non-metric multidimensional scaling (NMDS) analysis was used to interrogate the influence of geochemistry on microbial community composition across the photosynthetic fringe (Figure 4). The 16S rRNA gene sequencing data were used to determine the microbial community composition (Supplementary Table 5). Geochemical vectors for each of the geochemical measurements were initially added to indicate potential causes for differences in microbiome composition in the 46 sampled locations; however, only a subset of eight geochemical vectors were included, due either to their hypothesized importance (sulfide) or to their contribution to the variation of the microbial community composition determined by RDA (pH, temperature, DIC, DOC, nitrate, total ammonia, and DO) (Supplementary Table 4). Of the eight included geochemical vectors, all correlated with changes in microbial community composition with statistical significance (p < 0.05) except for sulfide. Along NMDS2, the distribution of sites can be differentiated based on position relative to the photosynthetic fringe. However, only sites labeled as being above the photosynthetic fringe can be differentiated from the sample sites at or below the photosynthetic fringe with statistical significance according to CCA (Supplementary Figure 5). The geochemical vectors that correlate with changes in microbial community composition along NMDS2 with statistical significance are ammonia, DO, temperature, and nitrate, all of which increase down the hot spring outflows. There is more variation in microbial community composition across NMDS1, which correlates with changes in pH, DIC, and DOC with statistical significance. This indicates that there is a complex interplay between geochemical parameters and microbial community composition.
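An NMDS ordination like the one above starts from a pairwise dissimilarity matrix between samples; Bray-Curtis dissimilarity on normalized abundances is a common choice for microbiome data. The sketch below (a hedged illustration, not the study's actual code, which may have used a standard package such as vegan or scikit-bio) shows only this input step; the NMDS routine itself would then embed the matrix into the NMDS1/NMDS2 plane.

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors:
    0 = identical composition, 1 = no shared abundance."""
    shared = sum(min(a, b) for a, b in zip(u, v))
    total = sum(u) + sum(v)
    return 1.0 - 2.0 * shared / total

def dissimilarity_matrix(samples):
    """Pairwise Bray-Curtis matrix over a list of abundance vectors.
    This symmetric matrix is the input an NMDS routine would embed
    into low-dimensional ordination axes."""
    n = len(samples)
    return [[bray_curtis(samples[i], samples[j]) for j in range(n)]
            for i in range(n)]
```

In the ordination plot, the distance between two points approximates the rank order of these dissimilarities, which is why nearby points represent similar communities.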
While NMDS analyses provide insight into overall microbial community composition in relation to geochemical parameters, co-occurrence network analyses provide insight at the level of individual ASVs and their correlation(s) with geochemical parameters (Fullerton et al., 2021). In a co-occurrence network, each node is an ASV. All nodes were grouped into 18 unique cliques (statistically significant groups of ASVs), each consisting of at least two nodes, using the Louvain algorithm (Supplementary Table 6; Fullerton et al., 2021). Clique analysis provides a mechanism for observing microbial patterns within guilds and individuals rather than as an entire assemblage (e.g., NMDS).
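The construction above (nodes = ASVs, edges where Spearman's rho exceeds 0.7, then community detection) can be sketched in pure Python. Two simplifications are assumed here: ranks are computed without tie handling, and connected components stand in for the Louvain community detection the study actually used, so the grouping is cruder than the published cliques. All data in the test are hypothetical.

```python
import math
from itertools import combinations

def _ranks(x):
    """Simple ranks (1..n); assumes no tied values for brevity."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, 1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

def cooccurrence_groups(abundances, threshold=0.7):
    """Draw an edge between two ASVs when their abundance profiles have
    Spearman's rho > threshold, then group connected ASVs (connected
    components as a simplified stand-in for Louvain communities)."""
    asvs = list(abundances)
    adj = {a: set() for a in asvs}
    for a, b in combinations(asvs, 2):
        if spearman(abundances[a], abundances[b]) > threshold:
            adj[a].add(b)
            adj[b].add(a)
    groups, seen = [], set()
    for a in asvs:
        if a in seen:
            continue
        stack, comp = [a], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n])
        seen |= comp
        groups.append(comp)
    return groups
```

Because only positive correlations above the threshold become edges, anti-correlated ASVs (e.g., acid- versus alkaline-associated taxa) naturally fall into separate groups.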
The co-occurrence network consisted of ten cliques with no interconnections (modular cliques), a central cluster consisting of six interconnected cliques, and a separate cluster of two cliques. This topology suggests there are groups of ASVs that uniquely co-occur (modular cliques) and groups of ASVs that co-occur predominantly with each other (main cluster cliques). Some members of the main cluster cliques co-occur with ASVs outside the clique as well. The statistical association of the cliques with geochemical parameters of interest was assessed by Spearman correlation. From these combined analyses we can observe how cliques and individual ASVs co-varied with pH, temperature, total ammonia, DIC, DOC, nitrate, and sulfide concentrations (Figure 5; Supplementary Figure 6).
Within the main cluster of six cliques (2, 4, 5, 8, 9, and 13) all ASVs correlated positively with an increase in pH ( Figure 5A). The remaining cliques, outside the central cluster, are composed of ASVs that mostly occur in acidic or circumneutral samples, and, therefore, either correlate negatively or have no significant correlation with increasing pH. However, cliques 17 and 18 are an exception, and like the main cluster cliques, are correlated with an increase in pH.
The main cluster cliques consist of the phototrophs Leptococcus, Roseiflexus, and Chloroflexus, as well as non-phototrophs from the taxonomic classes Aquificae, Anaerolineae, Deinococci, Acetothermia, and the phylum Armatimonadota, among others listed in Supplementary Table 6. Due to the prevalence of Aquificae and Deinococci throughout the dataset, it is important to note that the genera present in the main cluster cliques are Thermocrinis and Thermus, respectively. Although all 6 cliques within the main cluster are correlated positively with an increase in pH, only cliques 2, 4, and 9, which include ASVs associated with the taxa Thermocrinis, Deinococci, Leptococcus, Acetothermia, and Armatimonadota, are positively correlated with temperature. Clique 5 is correlated slightly negatively with temperature and consists of the taxonomic classes Anaerolineae and Acetothermia, and the phylum Armatimonadota. Additionally, cliques 2, 4, and 9 are all correlated negatively with total ammonia, whereas the nodes in clique 5 are mixed between positive and negative correlations with total ammonia. The remaining cliques in the cluster (8 and 13) are not correlated with total ammonia. In contrast, all 6 cliques within the cluster are correlated negatively with DOC, although cliques 2 and 9 have a stronger negative correlation with DOC than cliques 4, 5, 8, or 13.
Outside of the main cluster cliques, there are five cliques (3, 7, 11, 12, and 15) that are correlated negatively with pH. Cliques 7 and 11 are interconnected and tend to follow similar trends. Cliques 7 and 11 include ASVs associated with the taxonomic classes Acidimicrobiia, Thermoplasmata, Deinococci, and Aquificae. In this case, Deinococci is represented by Meiothermus, and Aquificae is represented by Hydrogenobaculum. Cliques 7 and 11 are correlated positively with total ammonia and DOC concentrations but are correlated negatively with pH and temperature. The only major differences in trends for cliques 7 and 11 are displayed in Supplementary Figure 6, where clique 7 has a strong negative correlation with nitrate, whereas clique 11 has a slight positive correlation with nitrate.
The unconnected cliques, 3, 12, and 15, are also correlated negatively with pH. Cliques 3, 12, and 15, which contain the taxa Meiothermus, Acidimicrobiia, Gammaproteobacteria, Alphaproteobacteria, and Thermoprotei, are correlated positively with total ammonia and DOC concentrations. However, cliques 3 and 12 are correlated negatively with pH but not temperature, whereas clique 15 is negatively correlated with both temperature and pH.
In contrast to the previously mentioned cliques, cliques 17 and 18 are correlated positively with an increase in pH and are outside of the central cluster. Clique 18, which only contains two taxa, Fervidicoccaceae and Ignisphaera, a Thermoprotei, is positively correlated with both temperature and pH but negatively correlates with both total ammonia and DOC. Clique 17 contains Nitrososphaeria and Thermoprotei and is correlated positively with temperature, but has no strong correlation with total ammonia or DOC.
The remaining cliques, 1, 6, 10, 14, and 16, lack strong correlations with pH in either direction. Of these, cliques 6, 10, and 16 are correlated with temperature: cliques 6 and 16 negatively and clique 10 positively. Clique 6 is correlated negatively with temperature and total ammonia concentration but is correlated positively with DOC. Besides temperature, cliques 10 and 16 cannot be differentiated based on pH, total ammonia, or DOC. However, clique 16 is correlated negatively with sulfide and positively with nitrate (Supplementary Figure 6). There are only 2 nodes in clique 16, representing the phylum Armatimonadota and the genus Thermoflavifilum, whereas clique 10 contains the taxa Geoarchaeales and Corynebacterium. Clique 14 is unique in that it is correlated negatively with temperature but has a slight positive correlation with total ammonia. The taxa in clique 14 include Meiothermus, Betaproteobacteria, and Gammaproteobacteria. Clique 1, which consists of Mycobacterium and Thiomonas, is correlated positively with temperature and negatively with total ammonia, but is not strongly correlated with pH or DOC.
Overall, we find internal consistencies between the pH of samples where ASVs are abundant and the correlation with pH in the co-occurrence networks. For example, taxa associated with higher pH samples in Figure 3 are located within the 6 cliques in the main cluster in Figure 5 and are correlated positively with pH. Of the twelve remaining cliques surrounding the cluster, five (3, 7, 11, 12, and 15) are negatively correlated with pH and include taxa associated with low pH samples in Figure 3. Cliques 17 and 18 are correlated positively with pH and contain taxa associated with circumneutral to basic samples, as well as Thermoprotei, a taxonomic class found in all hot springs sampled. Finally, the remaining cliques, 1, 6, 10, 14, and 16, show no strong correlation with pH in either direction and contain taxa such as Meiothermus, from the class Deinococci, and Armatimonadota, both of which are found across all hot springs sampled.
Discussion
The stark visual differences in microbial community compositions above versus below the photosynthetic fringe of hot spring outflows were also reflected in the NMDS analysis by the distribution of points relative to their LDA-determined location across the photosynthetic fringe (see section 2. Material and methods; Figure 4). The difference in the microbial community composition above versus below the photosynthetic fringe was verified to be statistically significant (p < 0.05) through an ANOVA test of the CCA axes (Supplementary Figure 5). In contrast, metrics of alpha diversity showed no significant correlation to the LDA-determined photosynthetic fringe (Supplementary Figure 7).
The photosynthetic fringe is not necessarily indicative of an ecotone as described by Meyer-Dombard et al. (2011) (Supplementary Table 1), i.e., a region where biological diversity either increases or decreases where two or more communities mix (van der Maarel, 1990; Meyer-Dombard et al., 2011). Because the photosynthetic fringe represents a transition from a chemotrophic microbial community to a community also consisting of phototrophs, this transition could potentially promote biological diversity. Ecotones have been identified at the photosynthetic fringe of hot spring outflows with streamer biofilm communities, but not at the photosynthetic fringe of hot spring outflows lacking streamer biofilm communities (Meyer-Dombard et al., 2011). Instead, hot spring outflows lacking streamer biofilm communities had higher diversity below the photosynthetic fringe transition. It should be noted that Meyer-Dombard et al. (2011) measured diversity by species richness and taxonomic complexity, whereas the measurement of diversity for this study (Shannon Diversity Index) accounts for both species richness and relative abundances. Overall, Shannon diversity values at the photosynthetic fringe were not higher than values above or below the photosynthetic fringe (Supplementary Figure 7). Instead, the mean Shannon diversity values were lower at the photosynthetic fringe, but not with statistical significance.
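The Shannon Diversity Index mentioned above combines richness and evenness in a single value, H' = -Σ p_i ln p_i, where p_i is the relative abundance of taxon i. A minimal sketch with hypothetical counts (the study's actual values are reported in Supplementary Table 1):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with
    nonzero counts; higher for richer and more even communities."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)
```

A perfectly even two-taxon community gives H' = ln 2 ≈ 0.693, while a community dominated by one taxon gives a value near zero, which is why Shannon diversity, unlike raw species richness, is sensitive to relative abundances.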
Beta diversity was investigated using NMDS analysis with the addition of geochemical vectors. Geochemical factors known to affect microbial community composition, as described in the introduction, including temperature, pH, total sulfide, total ammonia, nitrate, DIC, DOC, and DO concentrations, were added to the NMDS as geochemical vectors (Cox et al., 2011; Hamilton et al., 2011a; Inskeep et al., 2013). Each of these geochemical parameters, except for total sulfide, exhibited a statistically significant correlation with the overall microbial community composition. Additionally, there were observable patterns among cliques of ASVs and the geochemical parameters determined to correlate with microbial community composition with statistical significance (Figure 5). However, only individual nodes are correlated with sulfide and nitrate concentrations.
Both pH and temperature correlated with microbial community composition with statistical significance according to the NMDS analysis (p < 0.05) and were determined to contribute 7.33 and 4.85%, respectively, to variations in the microbial community compositions according to RDA, in agreement with previous studies (Inskeep et al., 2013; Colman et al., 2016; Figure 4 and Supplementary Table 4). The extent of the roles of pH and temperature in affecting microbial community compositions is revealed in the co-occurrence network analysis as well as in the distribution of individual taxa in the 16S rRNA gene amplicon analysis (Figures 5 and 3, respectively). In acidic hot spring outflows, classes including Thermoplasmata, Acidimicrobiia, and Gammaproteobacteria are major constituents. In basic hot spring outflows, Cyanobacteria, Chloroflexi, and Deinococci are the major constituents. There are also taxonomic classes present in all samples regardless of pH, such as Aquificae and Nitrososphaeria. These findings are reflected in most of the cliques being correlated with pH (13 cliques) and/or temperature (16 cliques): only 5 cliques are not correlated with pH, only two cliques are not correlated with temperature, and every clique is correlated with at least one of the two. Although temperature and pH do contribute to microbial community composition, our analyses indicate that other geochemical factors additionally contribute to the overall microbial community composition in hot spring outflow communities.
Sulfide concentrations are known to negatively impact oxygenic photosynthesis and have been linked to differences in microbial community composition (Jørgensen and Nelson, 1988; Cox et al., 2011; Hamilton et al., 2011b; Boyd et al., 2012; Inskeep et al., 2013), where the mechanism of sulfide's negative impact on phototrophs is most likely an inhibition of photosystem II (Oren et al., 1979; Miller and Bebout, 2004; Boyd et al., 2012). Rates of autotrophy in Yellowstone microbial mats have also been shown to be sulfide dependent, with acidic phototrophic communities suppressed at 5 µM sulfide. This suppression was not observed in basic mats. The suppression of acidic phototrophic communities at 5 µM sulfide, largely consisting of phototrophic algae, may be due to H2S being the dominant form of sulfide in acidic environments. H2S has been shown to more readily cross the cell membrane than HS− (Howsley and Pearson, 1979).
Our samples were distributed across sulfide concentrations (below detection, <0.15 µmolal, to 52.6 µmolal) and pH (1.92-9.04). Two samples designated as being at the photosynthetic fringe occurred beyond the previously noted maximum sulfide range for photosynthesis of ∼15 µmolal (Cox et al., 2011), thereby expanding the possible sulfide range for photosynthesis in YNP hot springs (Figure 2B). Additionally, eukaryotic phototrophs have been identified in acidic samples with sulfide concentrations above 5 µM, the previously noted limit for phototrophic activity in acidic conditions (Romero, 2018). Therefore, the concentrations of sulfide required to limit the presence of phototrophs in YNP hot springs may be higher than initially determined based on measurements coinciding with visual detection of the photosynthetic fringe or measurements of DIC uptake. However, in this study we only observed presence, not activity, under these sulfide concentrations.
When considering the overall microbial community composition, sulfide concentrations did not contribute to the composition with statistical significance even in acidic hot spring outflows (pH < 4, p = 0.30) (Figure 4). Additionally, sulfide concentrations only contributed 1.10% to overall variation in microbial community composition according to RDA. The lack of sulfide's statistical significance in microbial community composition is further supported by the co-occurrence network (Supplementary Figure 6), in which there are no trends between overall clique membership and sulfide concentrations. However, there are individual ASVs, or nodes, within cliques that are correlated with sulfide concentrations, including those representing taxa of known phototrophs, although this does not hold true for all nodes representative of phototrophic taxa. Overall, these findings corroborate the importance of sulfide concentrations in determining the distribution of individual taxa, but do not support the importance of sulfide concentrations in determining overall microbial community compositions, despite previous evidence for the suppression of phototrophic activity in acidic environments.
The gradient of increasing DO concentrations down hot spring outflows contributes 2.68% to changes in the microbial community composition according to RDA (Inskeep et al., 2013). The increase in DO down hot spring outflows is connected to the increased solubility of O2 gas as temperature decreases and to the increasing extent to which the reduced hydrothermal fluids have equilibrated with the atmosphere. In sulfidic systems, DO is quickly consumed in abiotic oxidation reactions, including the oxidation of reduced sulfur compounds (Inskeep et al., 2013). Examples include both the rapid oxidation of sulfide to polysulfides and the oxidation of sulfide to thiosulfate (the fate of up to 33% of sulfide present in pH 6-8 hot springs in YNP) (Nordstrom et al., 2005). The oxidation of reduced sulfur compounds affects the potential niche availability for sulfur-cyclers, therefore potentially contributing to microbial community composition (Nordstrom et al., 2005, 2009; Inskeep et al., 2013). Though DO in our samples is not correlated significantly with sulfide (−0.201 Pearson correlation), DO does increase with a decrease in temperature and is significantly correlated with temperature (−0.779 Pearson correlation). Therefore, the effects of DO on microbial community composition are difficult to disentangle from those of temperature, as observed by others (Inskeep et al., 2013), but cannot be ruled out in influencing overall microbial community composition.
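The Pearson correlations quoted above (e.g., −0.779 between DO and temperature) measure the linear association between paired measurements. A minimal sketch of the computation, shown here for illustration only (the study presumably used a standard statistics package, and the test data below are hypothetical):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient r between paired measurements:
    covariance of x and y divided by the product of their standard
    deviations; ranges from -1 (perfect inverse) to +1 (perfect direct)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A strongly negative r, as between DO and temperature here, is exactly the signature that makes the two effects hard to disentangle: either variable largely predicts the other across samples.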
Nitrogen species correlate with the abundance of subsets (specific cliques) within microbial communities, as well as with overall microbial community composition down hot spring outflows. Ammonia and nitrate contribute 3.78 and 2.55%, respectively, to the variation of the overall microbial community composition (Supplementary Table 4). Furthermore, by including correlations with ammonia and nitrate concentrations in the co-occurrence network analysis, cliques can be further distinguished. Although clique 5 is not correlated with ammonia, it is 1 of only 3 cliques containing nodes representative of known ammonia-oxidizers, the other two being cliques 2 and 13, both of which are correlated negatively with total ammonia. Only two taxa of ammonia-oxidizers, Nitrosocaldus and Nitrososphaeria, were identified in the 46 samples. Similar trends are present with nodes representing taxa of known phototrophic nitrogen fixers, such as Leptococcus, which trend negatively with ammonia and positively with pH. Spring pH could be a driver for N-cycler distribution, but acidic springs provide higher concentrations of N-compounds, such as ammonia, providing what seems to be a sparsely attended buffet for N-cyclers beyond N-fixers. Given these findings, the hypothesis by Hamilton et al. (2014) that ammonia-oxidizers consume the low levels of fixed nitrogen available in basic springs, leaving a niche for nitrogen fixers, would not apply in acidic YNP springs. Additionally, though nif genes and N-fixers are found abundantly in acidic YNP hot springs, ammonia-oxidizers have not yet been identified definitively nor cultured from these springs (Reigstad et al., 2008; Hamilton et al., 2011a,b; Boyd et al., 2013). However, the amoA gene responsible for the first step of ammonia oxidation has been identified in acidic hot springs, though it is less abundant than in circumneutral to basic hot springs (Boyd et al., 2013).
Thus, the cycling of nitrogen in acidic YNP hot springs is ripe for further investigation to characterize the full nitrogen cycle.
According to the NMDS analysis, both DIC and DOC concentrations contribute significantly to differences in microbial community composition along NMDS1, with DIC and DOC contributing 8.20 and 3.60%, respectively, to the variation in overall microbial community composition (Figure 4 and Supplementary Table 4). DIC and DOC concentrations are correlated negatively with each other (−0.69 Pearson correlation) and are both strongly correlated with pH (0.79 and −0.77 Pearson correlations, respectively). The strong correlation between DIC and pH is attributed to the increased DIC input in high temperature, circumneutral to basic hot springs, which are predominantly hydrothermally fed (Nordstrom et al., 2005). Additionally, the degassing of CO2 increases the pH of already circumneutral to basic hot springs (Nordstrom et al., 2005). In contrast, the strong negative correlation between DOC and pH has been attributed to increased soil input in acidic springs compared to basic springs, which are commonly raised and/or surrounded by sinter, restricting DOC inputs to hydrothermal and microbial sources (Nye et al., 2020).
The effects of dissolved inorganic and organic carbon on microbial community composition are difficult to deconvolve; however, in hot spring outflow studies, dissolved carbon and position along the photosynthetic fringe do correlate with changes in the presence or absence of genes associated with specific carbon cycling pathways. The dominant carbon fixation pathway down the outflow of BP was the reverse tricarboxylic acid cycle, a chemoautotrophic carbon fixation pathway, but below the photosynthetic fringe, the reductive pentose phosphate cycle, a carbon fixation pathway used by oxygenic phototrophs, also became abundant. The transition in the abundance of chemotrophic- versus phototrophic-associated carbon fixation pathway genes is not only indicative of changes in carbon metabolism down hot spring outflows but would also be reflected in the microbial community composition. Therefore, at least in the case of a basic outflow, changes in DIC and DOC concentrations are reflected within the microbial communities.
In the present study, the significance of DIC and DOC in microbial distribution is reflected in the clique analysis, with cliques 2-9, 11, and 13 correlating with DOC concentrations (Figure 5). Clique 2 has the strongest negative correlation with DOC and includes taxa found in the outflows of basic sites, where DIC concentrations are comparatively high, among them taxa generally known as heterotrophs or phototrophs, such as Armatimonadota and Synechococcus. In contrast, clique 7 has a strong positive correlation with DOC and includes taxa typically found in cooler acidic sites that are generally known as chemoautotrophs, such as Hydrogenobaculum and Acidimicrobiia. Even though DIC and DOC concentrations may be important down individual hot spring outflows, when analyzing the NMDS and breaking up the microbial data into cliques for co-occurrence analysis, it becomes more difficult to separate DIC and DOC from pH and temperature to gain meaningful information on the overall composition of hot spring microbial communities.
Conclusion
Although prior studies have focused on temperature, pH, and sulfide as determining factors for the position of the photosynthetic fringe, neither these geochemical variables nor the additional geochemical variables included in this study fully account for the complexity of hot spring outflows and their hosted microbial communities. Each of the 12 hot spring outflows included in this study is geochemically complex as well as visually and geochemically unique. Though the stark differences visually observed between the predominantly chemotrophic communities above the photosynthetic fringe and the phototroph-containing communities below the photosynthetic fringe of each outflow were supported by CCA to be statistically significant differences in microbial community composition, statistical patterns in the geochemistry across all photosynthetic fringe locations studied are more nuanced. Even when including DIC, DOC, nitrate, ammonia, and DO concentrations, in addition to pH, temperature, and sulfide, the position of the photosynthetic fringe as defined by LDA could not be completely explained across the 12 hot spring outflows included in this study. While temperature and pH correlate with the composition of microbial communities and the position of the photosynthetic fringe down hot spring outflows, other variables, such as DIC, DOC, nitrate, ammonia, and DO concentrations, act as supporting players and are themselves correlated with pH and temperature. Though the co-occurrence analysis provides further differentiation of distributions based on taxa and geochemistry, especially when analyzed through the lens of dissolved carbon or ammonia concentrations, no single variable or set of variables could significantly predict community composition. However, according to both the NMDS analysis and the co-occurrence network analysis, the concentration of sulfide may play less of a role in overall microbial community composition than previously hypothesized.
Overall, these findings mirror the complexity of the hot spring outflows studied. Energy supplies and the rate of change in chemical concentrations down the hot spring outflows were not considered in this study but have been suggested as potential contributing factors to microbial community composition in hot spring outflows, and warrant further work (Shock and Holland, 2007; Cox et al., 2011). Furthermore, these are active biological systems where factors of competition and adaptation could also explain departures from what chemical observations might predict (Leibold et al., 2022).
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary material.
Author contributions
ES, GB, and KF conceived the study and supervised the sampling. AH, FD, and GG carried out the DNA extractions, library prep, and sequencing efforts. KW, AH, and ET-R performed the sequence processing and bioinformatic analyses. KF performed the processing of water samples and ion chromatography (IC). KW and ET-R conducted the in-depth statistical analyses of geochemical and microbiomic data, including NMDS, and the co-occurrence network analysis. KW and ET-R wrote the manuscript with input from all authors. All authors contributed to the article and approved the submitted version.
Funding
This research was supported in part by the NASA Exobiology grant #NNX16AJ61G.
Organization of hot spring sites, separated by black bars, follows the order of increasing pH as in Supplementary Table 1. Within each site, samples are organized down the outflow with above (blue diamond), at (yellow diamond), and below (green diamond) the photosynthetic fringe indicated as in Figure 1.

SUPPLEMENTARY FIGURE 5

Canonical correspondence analysis (CCA) using the 16S rRNA gene sequencing data from each of the 46 sample sites separated by relative position to the photosynthetic fringe. Sample point colors (blue, yellow, and green) refer to the position along the photosynthetic fringe (above, at, and below, respectively). Samples above the photosynthetic fringe are separated from samples at or below the photosynthetic fringe on the CCA1 axis with statistical significance (p < 0.05) as determined by an ANOVA test. Samples at and below the photosynthetic fringe are separated on the CCA2 axis but not with statistical significance.
SUPPLEMENTARY FIGURE 6
Co-occurrence network analysis of commonly occurring ASVs within the 46 hot spring samples. The ASVs are organized into 18 cliques using Louvain's membership algorithm. Edges represent a >0.7 Spearman's correlation coefficient. Each node is color-coded based on the strength of the Spearman correlation coefficient, negative 1 to positive 1 (blue to red, respectively) between the presence and abundance of the ASV with the following geochemical parameters, (A) pH, (B) temperature, (C) total ammonia, (D) DOC, (E) total dissolved sulfide, (F) nitrate, and (G) in relation to the photosynthetic fringe. Panel (H) shows all cliques differentiated by color and number of the clique.
SUPPLEMENTARY TABLE 1
Geochemical data, alpha diversity, and position with respect to the visually determined photosynthetic fringe for each sample. Samples are listed in the order in which they appear in Figure 3.
SUPPLEMENTARY TABLE 2
Extended geochemical data and position with respect to the visually determined photosynthetic fringe for each sample. All officially recognized hot spring names are listed along with citations. Samples are listed in the order in which they appear in Figure 3.
SUPPLEMENTARY TABLE 3
Sequence count data for each sample. The raw sequence count, the cleaned sequence count, and the sequence count after normalization are included for each sample.
SUPPLEMENTARY TABLE 4
Percent contribution for each geochemical variable included in this study on overall microbial community composition determined by RDA. Variables that were determined to contribute to overall community composition with statistical significance (p < 0.05) according to the NMDS analysis are bolded. Percent contributions calculated from a subset of sample data are marked with an asterisk.
SUPPLEMENTARY TABLE 5
ASV count table using the normalized sequence counts for each sample. The SILVA database was used to provide the taxonomic labels for each ASV using QIIME2 (v. 2020.2).
SUPPLEMENTARY TABLE 6
ASV node table organized by the assigned clique. All nodes present in the co-occurrence network analysis are provided in this table along with the associated ASV down to the lowest available taxonomic classification. The level of taxonomic classification for each ASV is also noted.
"year": 2023,
"sha1": "3af4456ae4f01e58606f84c0a65631f3528ada7f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "3af4456ae4f01e58606f84c0a65631f3528ada7f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Encouraging voluntary government action via a solar-friendly designation program to promote solar energy in the United States
Significance Due to market and system failures, policies and programs at the local level are needed to accelerate the renewable energy transition. A voluntary environmental program (VEP), such as SolSmart, can encourage local governments to adopt solar-friendly best practices. Unlike previous research, this study uses a national sample, more recent data, and a matched control group for difference-in-differences estimation to quantify the causal impact of a VEP in the public, rather than private, sector. We offer empirical evidence that SolSmart increased installed solar capacity and, with less statistical significance, the number of solar installations. The results inform the design of sustainability-focused VEPs and future research to understand the causal pathways between local governments’ voluntary actions and solar market development.
Table S8 presents the complete regression results that are shown in Table 1 of the main text. The dependent variables in Table S8 are the natural logarithm of installed capacity, the number of installations, and soft costs. Models 1, 3, and 5 include renewable goal as a matching and control variable, while models 2, 4, and 6 use solar goal instead. Additional control variables in the DID models include climate plan, number of installers, annual performance-based incentive, rebate, median home value, occupied housing units, median household income, the number of installations (for the soft costs models), and average efficiency of systems (for the soft costs models). Table S9 shows the absolute (rather than percent) effects of the SolSmart program on installed capacity, the number of installations, and soft costs. The dependent variables in Table S9 are installed capacity, the number of installations, and soft costs (without the log transformation). In models using renewable goal, the SolSmart designation is associated with an increased installed capacity of 83 kW/month, an increase of 6 installed systems/month (statistically insignificant), and a soft cost reduction of $0.16/W. In models using the existence of solar goal, the coefficients for the absolute effects are less robust and not significant.
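The difference-in-differences logic behind these estimates can be illustrated with a minimal sketch. The function name and input values are ours, chosen only to echo the magnitude of the reported ~83 kW/month effect; they are not taken from the study data.

```python
# Minimal difference-in-differences (DID) sketch. The causal estimate is the
# change in the treated group minus the change in the matched control group,
# i.e. what the interaction term ("SolSmart Impact") captures in the models.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Return the DID estimate of the treatment effect."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean monthly installed capacity (kW) before/after designation.
effect = did_estimate(treat_pre=120.0, treat_post=230.0,
                      ctrl_pre=115.0, ctrl_post=142.0)
print(effect)  # 83.0 -> +83 kW/month attributed to designation in this toy case
```

The regression version adds the control variables listed above, but the interaction coefficient has exactly this interpretation.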
As most of the price data and system size are self-reported by system owners or installers, it is possible that some outliers are errors. To check the robustness of our regression models, we re-ran all the models after removing potential outliers, presented in Tables S10 and S11. We define potential outliers using the interquartile range (IQR) criterion. IQR is the difference between the third and first quartile of a variable. Any observations above Q0.75+1.5*IQR or below Q0.25-1.5*IQR (Q0.25 and Q0.75 refer to first and third quartile respectively) are classified as potential outliers. After removing these potential outliers, the regression results that use the natural logarithm of dependent variables are similar to our model results in the main text, suggesting that our main regression results based on percent change are robust. However, the results that use the absolute value of dependent variables are much smaller in the models without outliers, indicating that the results for the absolute changes in installed capacity, the number of installations, and soft costs could be biased by outliers.
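The IQR screening rule described above can be sketched in standard-library Python; the price values below are invented for illustration. Note that `statistics.quantiles` defaults to the "exclusive" quartile method, which can differ slightly from other software's quartile definitions.

```python
import statistics

def iqr_outlier_mask(values):
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], the criterion used here."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # first and third quartiles
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v < lo or v > hi for v in values]

prices = [3.1, 2.9, 3.4, 3.0, 2.8, 3.2, 15.0]  # $/W; 15.0 is a likely entry error
mask = iqr_outlier_mask(prices)
cleaned = [v for v, is_out in zip(prices, mask) if not is_out]
```

Re-running the models on `cleaned`-style data is what Tables S10 and S11 report.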
Table S12 tests the robustness to pair-specific linear time trends. We include an interaction between the linear time trend and each matched pair to control for varying time trends across matched pairs. In most of the models, the treatment effects are smaller and less significant (although still significant at the 10% level). Kearney and Levine (2018) suggest that this approach may overfit the data, leaving little remaining variation for observing the effect of a treatment [1]. In addition, Wolfers (2006) suggests that this approach may absorb part of the treatment effect [2]. Tables S13 and S14 test the robustness to excluding specific outlier months after designation. Figure 1 shows that month 3 and month 18 have a large increase in installed capacity and number of installations, so these months might be outliers. Tables S15 and S16 include both solar goal and renewable goal as matching variables in PSM and control variables in DID. Although the effect of SolSmart is significant and in the hypothesized direction, the matching quality is less than ideal and Treatment is significant. Overall, these checks show that our results are generally robust to different model specifications.
To further examine why the effect of SolSmart on the number of installations is less robust than that on installed capacity, we explored two potential mechanisms: (1) net metering policies and (2) percent of ground-mounted installations. For net metering rates, we collected data from DSIRE to categorize the level of net metering for each community (3,142 total SolSmart and control communities) at three levels: (1) at the full retail rate, (2) at the avoided cost rate, and (3) at the avoided cost rate with credit limits. Past studies have found that the rate structure of NEM tariffs affects consumers' electric bill savings from solar PV [3], which is assumed to affect residential solar adoption decision-making. Thus, rather than collect NEM rates, which cannot account for the differences in NEM policy structures, we created a categorical variable that distinguishes between the different types of NEM policies based on the potential benefits they offer consumers. Although these data are aggregated by DSIRE, they are not recorded in a machine-readable format. Therefore, we manually coded the net-metering policy for each community. For 1,523 communities, we used the state-level net metering policy because it applied to all utilities and did not change between 2013 and 2018. For the remaining 1,889 communities, we used GIS utility territory maps to identify all electric utilities serving each community. Then, using archived utility reports and the Wayback Machine, we identified all NEM policies in place for each utility between 2013 and 2018. For communities served by multiple utilities, the utility with the most "solar-friendly" NEM policy was identified and used to represent the community. As most communities served by multiple utilities were in deregulated states that allow customers to select their service provider, it was assumed that solar customers would select the service provider with the greatest incentives for solar owners.
Our analysis suggests that both net metering and percent of ground-mounted systems may be confounders, but the original conclusions are robust. Tables S17-S22 show the effect of adding net-metering policies with different model specifications. Including net metering as a matching variable ensures that we compare communities with the same type of net metering policy. Including net metering as a control variable in the DID model ensures that we separate the effect of net metering policies from the effect of SolSmart when estimating across communities. When net metering is included as a matching variable and a control variable in the DID model, the results are consistent for the number of installations, but the estimates for installed capacity are higher. Across the models, lower net metering rates (i.e., at the avoided cost rate) are associated with fewer installations. As a result, the results for the number of installations may be less robust because they are confounded with net metering policies. The impact of SolSmart on the number of installations is statistically significant in the models with net metering policies. For the insignificant model in the main text (the model with solar goals), the effect is larger and significant at the 10% level when net metering is included. However, the quality of the matching may not be ideal because the coefficient of Treatment is significant. Despite having 2,215 communities available as control communities, the quality of the matching does not always increase when adding more variables. This is likely due to constraints in finding a good match across all the variables, particularly for categorical variables.
Tables S23-S28 show the effect of adding percent of ground-mounted systems based on different model specifications. It is possible that some communities have a higher rate of ground-mounted systems, which tend to have higher capacity than rooftop systems. For the same level of installed capacity, communities with a higher percentage of ground-mounted systems tend to have fewer systems. Therefore, the difference in number of systems between treated and control communities might be confounded by the percent of ground-mounted systems. Unfortunately, data availability is an issue for ground-mounted systems, so including this variable reduces the sample size to 120 communities (60 treated and 60 control communities). When ground-mounted systems are included as a matching variable and a control variable in the DID model, the estimated effects of SolSmart on solar capacity and the number of installations are more similar. Adding ground-mounted systems reduces the variability in the data, making the effects larger in most cases.
Notes: SolSmart Impact is the interaction term between SolSmart communities and pre-post designations, which captures the causal effect of SolSmart designation on installed capacity, number of installations, and soft costs. Treatment is a dummy variable indicating whether or not a community is SolSmart designated. Pre-post is a dummy variable representing the time period after designations. ***, **, and * represent the significance levels of 1%, 5%, and 10%, respectively. Robust standard errors that are clustered at community level are reported in parentheses. These notes apply to all of the regression models reported below.
"year": 2022,
"sha1": "d5a44f3d75613e714588e36d5f8fb24cf277ba3f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1073/pnas.2106201119",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3054acb2e4c18ed8e0af69945c5617836f011e0",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Myxozoan Adhesion and Virulence: Ceratonova shasta on the Move
Motility factors are fundamental for parasite invasion, migration, proliferation and immune evasion and thus can influence parasitic disease pathogenesis and virulence. Salmonid enteronecrosis is caused by a myxozoan (phylum Cnidaria) parasite, Ceratonova shasta. Three parasite genotypes (0, I, II) occur, with varying degrees of virulence in the host, making it a good model for examining the role of motility in virulence. We compare C. shasta cell motility between genotypes and describe how the cellular protrusions interact with the host. We support these observations with motility gene expression analyses. C. shasta stages can move by single or combined use of filopodia, lamellipodia and blebs, with different behaviors such as static adhesion, crawling or blebbing, some previously unobserved in myxozoans. C. shasta stages showed high flexibility of switching between different morphotypes, suggesting a high capacity to adapt to their microenvironment. Exposure to fibronectin showed that C. shasta stages have extraordinary adhesive affinities to glycoprotein components of the extracellular matrix (ECM). When comparing C. shasta genotypes 0 (low virulence, no mortality) and IIR (high virulence, high mortality) infections in rainbow trout, major differences were observed with regard to their migration to the target organ, gene expression patterns and proliferation rate in the host. IIR is characterized by rapid multiplication and fast amoeboid bleb-based migration to the gut, where adhesion (mediated by integrin-β and talin), ECM disruption and virulent systemic dispersion of the parasite cause massive pathology. Genotype 0 is characterized by low proliferation rates, slow directional and early adhesive migration and localized, non-destructive development in the gut. We conclude that parasite adhesion drives virulence in C. shasta and that effectors, such as integrins, reveal themselves as attractive therapeutic targets in a group of parasites for which no effective treatments are known.
Introduction
The capacity of movement is fundamental for cells, and most of them rely on a functionally conserved actomyosin cytoskeleton system that allows spatial displacement. The ability to switch plastically between different motility modes and cell protrusions depending on the environment optimizes cell migration. Single cell motility depends on the physical properties of the extracellular matrix (ECM), extracellular proteolysis and signaling factors. Two main modes of migration, i.e., mesenchymal vs. amoeboid, can be distinguished by their leading edge structure, cell shape, and the degree of cell adhesion to the ECM. Mesenchymal migration is characterized by polarized and elongated cells that display actin-rich cell protrusions (lamellipodia/sheet-like protrusions and lobopodia) and strong adhesion to the ECM, whereas amoeboid migration relies on weakly adhesive, often bleb-based movement. In this study, we quantified the parasite in gills, blood and intestine of rainbow trout infected with the most virulent genotype (IIR) and the low-virulence genotype (0), and then quantified the relative expression of 8 selected genes composing the actomyosin machinery, adhesion complexes and mesenchymal vs. amoeboid motility modes. We demonstrate for the first time the importance of parasite motility and adhesion for virulence in myxozoans.
Collection of C. shasta Genotypes IIR and I for Motility Studies
Between 2013 and 2015, developmental stages of C. shasta genotype IIR were collected from ascites, bile and intestine from heavily infected rainbow trout (Roaring River Hatchery strain; n = 30; 8.3-31 cm total length) held at the Aquatic Animal Health Laboratory at Oregon State University (AAHL, OSU). Additionally, caeca, liver and testes were collected from three rainbow trout showing gross systemic infection. In June 2015, extremely rare samples of ascites stages of genotype I were collected from five Chinook salmon (Iron Gate Hatchery strain; 7-9 cm total length) after field exposure in the Klamath River (Beaver Creek site), California, USA (these fish do not typically develop ascites during infection). All fish were euthanized by an overdose of buffered MS-222 (tricaine methanesulfonate; Argent Chemical Laboratories, Redmond, WA, USA). Tissues and fluids were collected and processed for different microscopies and molecular analyses.
Light Microscopy and Time Lapse Series
Measurements of stages and cellular processes of I and IIR C. shasta were taken as described in [12]. Motile/protruding stages were quantified from 10 µL aliquots of ascites using a counting chamber. Time-lapse images/movies were generated using a Leica DMR microscope (Leica, Wetzlar, Germany) with a Spot RT3 camera, Spot software 5.0 (Spot software, Amsterdam, Netherlands) and VideoPad Video Editor (NCH Software, Canberra, Australia). Recordings ranged from 4 to 18 min, with images captured every 3, 7 or 10 sec.
Electron Microscopy (SEM & TEM)
For scanning electron microscopy (SEM), IIR intestinal stages were washed off fish tissue with PBS, collected and fixed with 2.5% glutaraldehyde in PBS. Ascites (I and IIR) and bile (IIR) stages were directly fixed by adding concentrated fixative to the fluid they were in to reach 2.5%. On the day of processing, the parasites were washed in PBS and centrifuged (800 g, 5 min), and prepared as described in [12]. Imaging was performed with a JEOL JSM-7401F (JEOL Ltd., Tokyo, Japan) at the Institute of Parasitology, Czech Academy of Sciences (PARU, CAS) and a FEI QUANTA 600F environmental SEM (FEI, Hillsboro, OR, USA) at OSU Microscopy service. For transmission electron microscopy (TEM), infected intestine, caeca, liver and testes samples were fixed in 2.5% glutaraldehyde in 0.1 M PBS for several days. The tissues were then washed for 1 hr in PBS, post-fixed in 1% osmium tetroxide in PBS for 3 hr and dehydrated in an acetone series, before embedding in Epon resin (Polybed 812, Polysciences Inc., Warrington, PA, USA). Ultrathin sections were cut with diamond knives, stained with 5% uranyl acetate and lead citrate. Stained sections were examined using a JEOL 1010 TEM at PARU, CAS.
Surface Adhesion Experiment of Genotype IIR: 2D Environment
10 µL aliquots of live stages in ascites collected from three different rainbow trout were left to settle onto ethanol-washed 10 µL/mL fibronectin (ThermoFisher Scientific Inc., Waltham, MA, USA) coated slides. Fibronectin is a cell adhesion glycoprotein present in the animal ECM. Control stages were left to settle on uncoated microscopic slides. Both groups were recorded immediately after settling and again after 20 min using a Canon Eos Rebel T1i camera on a Zeiss 47 30 28 light microscope at AAHL, OSU. Videos were analyzed by eye and in vivo behavior of the stages was classified as stages showing mostly (1) filopodia-lamellipodia or (2) blebs. For SEM analysis, 200 µL aliquots of ascites containing live stages were left to settle onto fibronectin coated coverslips for 20 min and fixed in situ with 2.5% glutaraldehyde in PBS. An aliquot of ascites fluid was fixed as a control group. Both fixed stages on fibronectin surface and control fixed ascites stages were processed and visualized as specified above for SEM analysis.
C. shasta Genotype 0 and IIR Infections for Transcriptomic Analysis
In May 2016, SPF rainbow trout from Roaring River Hatchery (Scio, OR, Oregon Department of Fish and Wildlife, USA) (length 5.5-7.5 cm; weight 1.6-3.8 g) were exposed in two different locations within Klamath county: Keno Eddy (n = 60) and Williamson River (n = 64). Previous monitoring studies allowed selecting these locations as the most probable sources of genotype 0 and genotype IIR infections, respectively [22]. Fish were held in mesh cages for 72 h. After exposure, fish were prophylactically treated for the bacterial pathogen Flavobacterium columnare during transportation to the AAHL (OSU) and for external parasites within 1 week post exposure [26]. Both groups of fish were held in 100 L tanks in heated 18 °C well water. Fish were monitored daily and mortality was reported as cumulative mortality. Fish were fed regularly with a fasting period of 48 hr before sampling. Five fish per group were sampled at 1, 7, 15, 22 and 29 days post exposure (dpe). An additional time point was collected for genotype 0 infection at 60 dpe. 4-16 µL of blood, all gill arches from one side and the anterior portion of the intestine were frozen at −20 °C for DNA analyses. Gill arches from the other side and the posterior portion of the intestine were stored in RNAlater (Ambion, Austin, TX, USA) and kept at −20 °C. A wet mount of a small portion of distal intestine was examined using bright-field microscopy, and the type of parasite stages present (developmental stages and/or spores) was reported. Naïve fish (n = 5) from the same stock were sampled as negative uninfected controls.
To characterize parasite exposure, river water samples were taken at the beginning and at the end of the 72-h exposure for spore dose and genotyping. Three replicates of 1 L water samples were filtered and spore numbers were quantified using a C. shasta SSU rDNA-based absolute quantification qPCR assay [27,28] and genotype was confirmed using the ITS-1 rDNA region [29].
C. shasta Quantification
In order to compare quantities of parasite in genotypes 0 and IIR infections, an SSU rDNA qPCR assay was performed. First, the DNA content of blood, gills and intestine samples was quantified using the Quant-iT™ dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA) and a Biotek Synergy HT microplate reader (Biotek, Winooski, VT, USA). Parasite quantities were estimated in 19 ng of DNA from blood (n = 15 fish per genotype, 1, 15, 29 dpe), 134 ng from gills (n = 15 fish per genotype, 1, 7, 29 dpe) and 50 ng from intestine per reaction (n = 25 fish per genotype from all sampling points; n = 5 fish per genotype 0 at 60 dpe). These DNA quantities were chosen after quantification of all samples, and taking a value such as all samples could be quantified. C. shasta SSU rDNA-based absolute quantification qPCR assay for water samples (Taqman-probe assay) [27,28] was used with modified standards and no IPC test, as no inhibition was observed. A four point 10-fold dilution standard curve of a purified PCR product of a IIR ascites DNA sample was used to calculate SSU rDNA copy numbers of the parasite in the samples analyzed. All samples were run in triplicate, with a positive C. shasta sample as an interplate calibrator and a no template control.
Motility Gene Mining from Reference Transcriptome
Motility genes were mined from the C. shasta reference transcriptome (Alama-Bermejo et al. in preparation; NCBI SRA Acc. number SRR6782113). Gene annotations were confirmed by BLASTX searches against three databases: UniProt, the Cell Migration Knowledge Database (http://www.cellmigration.org/index.shtml), and CDD (NCBI). Based on the cell motility literature, the following genes were selected because of their involvement in (1) the actomyosin machinery: β-actin (a non-muscle cytoskeletal actin), coactosin, coronin (two actin binding proteins) and myosin-10 (a non-muscle myosin II); (2) cell adhesion: integrin-β and talin; and (3) mesenchymal vs. amoeboid motility regulation: Rac1 (Ras-related C3 botulinum toxin substrate 1) and RhoA (Ras homolog gene family, member A). Primers were designed using NCBI/Primer-BLAST [30] (Table S1, Suppl. Fasta file) and their specificity and parasite origin were confirmed by PCR on DNA and cDNA samples of fish infected with genotypes IIR and 0, and on negative control samples of non-infected rainbow trout. PCR products were purified and sequenced as described above.
As reference genes, a pool of eight C. shasta genes were tested by PCR and qPCR: the SSU and LSU ribosomal gene regions, EF2 (elongation factor 2), GAPDH (glyceraldehyde 3-phosphate dehydrogenase), NADH dehydrogenase [ubiquinone] iron-sulfur protein 2, HPRT (hypoxanthine-guanine phosphoribosyltransferase), ornithine aminotransferase and DNA-directed RNA polymerase II (Table S1, Suppl. Fasta file). GAPDH, NADH and HPRT were selected due to their consistently low coefficient of variation (0.8-1.1%). Primer efficiencies were obtained using a set of 2-fold serial dilutions between 5 ng/µL and 0.625 ng/µL and run using the qPCR assay described below. The efficiencies were calculated based on the slope of the standard curve in the StepOne™ software, with restriction to ±10% variation (Table S1).
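The efficiency calculation from a standard-curve slope follows the standard relation E = 10^(−1/slope) − 1. The helper below and the ±10% acceptance band are our sketch of the check performed in the StepOne software (the paper does not spell out whether the ±10% applies to efficiency or slope); the slopes are illustrative.

```python
def primer_efficiency(slope):
    """Amplification efficiency from a standard-curve slope: E = 10**(-1/slope) - 1.
    A slope near -3.32 corresponds to the ideal ~100% (doubling) efficiency."""
    return 10 ** (-1.0 / slope) - 1.0

def within_tolerance(efficiency, target=1.0, tol=0.10):
    """Accept efficiencies within +/-10% of the 100% target (assumed criterion)."""
    return abs(efficiency - target) <= tol

ok = within_tolerance(primer_efficiency(-3.32))   # near-ideal slope passes
bad = within_tolerance(primer_efficiency(-3.6))   # too-shallow slope fails
```

In practice the slope comes from the 2-fold dilution series fit described above.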
Motility Genes Expression Assays
Intestine samples in RNA later were extracted using a column-based RNA extraction method, High Pure RNA tissue kit (Roche, Basel, Switzerland), including an on-column DNAse step. RNA was quantified using NanoDrop at the CGRB (OSU). Extracted RNA samples were stored at −80 • C. Detection of genomic DNA contamination and quality of RNA was determined by running 100-200 ng of RNA in a 1% agarose gel and using a minus reverse transcriptase control in a subset of samples. 500 ng of RNA was used to synthesize cDNA using the Transcriptor High Fidelity cDNA Synthesis Kit (Roche) and anchored-oligo (dT) 18 primers. Newly synthesized cDNA was stored at −20 • C.
Five fish intestines per genotype and per sampling time point (7, 15, 22 & 29 dpe) were analyzed for the selected set of motility genes. Three non-infected rainbow trout were analyzed as negative controls. The qPCR reaction mix consisted of 5 µL TaqMan ® Universal PCR Master Mix (Applied Biosystems, Foster City, CA, USA), 1:100 dilution SYTO ® 9 Green Fluorescent Nucleic Acid Stain (Molecular Probes) in 1× TAE buffer, 10 µM of each primer, 25 ng/µL BSA, 5 ng of total cDNA and up to 10 µL of PCR grade water. 96 well plates were run and read using a StepOnePlus qPCR machine (Applied Biosystems) with the following cycling conditions: polymerase activation at 50 • C for 2 min, denaturation at 95 • C for 10 min, 44 cycles of denaturation at 95 • C for 15 s and annealing at 60 • C for 1 min, and a final melt curve stage of 95 • C for 15 s, 64 • C for 1 min and 88 • C for 15 s, in order to detect any unspecific PCR product. All qPCR reactions were simplex and run in triplicate. A positive C. shasta sample was run in all plates with NADH gene assay as an interplate calibrator, to compensate for the variation in qPCR runs (Cq +/-0.5). All plates were run with no template control and ROX was used as passive reference.
Microorganisms 2019, 7, 397
A Cq mean was calculated for each sample. Relative gene expression is shown as fold change using the 2^−ΔΔCq method [31], with the low-virulence genotype 0 as the calibrator and the highly virulent genotype IIR as the treated group: ΔΔCq = [(Cq gene of interest − Cq average of three reference genes) genotype IIR − (Cq gene of interest − Cq average of three reference genes) genotype 0]. Additionally, intragenotype temporal changes were calculated as relative change (2^−ΔCq) to the reference genes. Differences between mean fold changes and relative changes were tested for statistical significance using Tukey's method for multiple comparisons after one-way ANOVA, or a t-test, for normally distributed data, or Kruskal-Wallis with Dunn's multiple comparison for non-normally distributed data (Tables S3 and S4). All statistical analyses and graphs were done using SigmaPlot 13.0 (Systat Software Inc., Chicago, IL, USA).
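The 2^−ΔΔCq computation reduces to a few lines. The Cq values below are invented for illustration; as in the study, the reference Cq passed in would be the mean of the three reference genes.

```python
def fold_change(cq_gene_treated, cq_ref_treated, cq_gene_calibrator, cq_ref_calibrator):
    """Livak 2**(-ddCq) fold change of the treated group (genotype IIR)
    relative to the calibrator (genotype 0). Reference Cqs are the mean
    Cq of the reference genes for each group."""
    ddcq = ((cq_gene_treated - cq_ref_treated)
            - (cq_gene_calibrator - cq_ref_calibrator))
    return 2 ** (-ddcq)

# A gene detected one cycle earlier (relative to the references) in IIR
# than in genotype 0 corresponds to a two-fold up-regulation.
fc = fold_change(20.0, 18.0, 23.0, 20.0)  # 2.0
```

Intragenotype temporal change (2^−ΔCq) drops the calibrator terms and normalizes each sample to its own reference-gene mean only.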
Motility Modes in C. shasta: Blebbing, Adhesion and Crawling
C. shasta showed asynchronous development in its fish hosts. Early and late sporogonic stages as well as mature myxospores were observed in intestine, ascites, bile, liver, caeca and testes. Proliferating stages of C. shasta measured 9.9-92.8 µm in length (n = 88) and showed high morphological plasticity, with round or ellipsoidal shape, occasionally pyriform, and a variety of cell protrusions and active motility. Three main types of cell protrusions were observed on the outer, or primary, cell of IIR C. shasta stages: blebs, filopodia and lamellipodia. These cell protrusions were associated with different behaviors: (a) blebbing-driven movement with little to moderate displacement and low adhesion; and (b) lamellipodia-driven movement with high adhesion. The latter type of movement may be characterized either by absence of/little displacement and abundant filopodia, or by directional (crawling) movement and few filopodia in the uropod. Stages could switch between protrusions and migration modes.
Myxozoan Blebbing Promotes Fast Amoeboid Motility and Spore Release
This is the first time blebbing has been reported in myxozoans, although this type of amoeboid migration plays a prominent role in other cell types, e.g., amoebae, embryonic cells and tumor cells [32]. Blebs (Figure 1A-I) were hemispherical hyaline cell protrusions of highly variable size (2.4-52.4 µm width; 2.3-45.2 µm height; n = 62) and short life span (mean 20 s, range 6-36 s). Blebbing stages were active but showed little physical displacement (Movie 1). Blebs consisted of semicircular membrane structures with little cytoplasmic content (Figure 1J-L). Scattered F-actin was concentrated on the primary cell membrane, at the base and/or on the surface of blebs (Figure 1G-I), probably indicating blebs at different points in their life cycle, i.e., initiation, growth and retraction [33]. Unlike filopodia and lamellipodia, blebs are produced by hydrostatic pressure created by the actomyosin cortex [32,34]. Blebs protruded between host cells in intact and degraded ECM (Figure 1J,L), probably aiding parasite migration through the gaps of the matrix by using blebs to push and squeeze their way [35]. In some cases, host cells surrounding the parasite showed abundant intracellular vesicles (Figure 1J,K). Actin microfilaments were visible in ultrathin sections at the base of blebs (Figure 1M). Blebs expanded rapidly and then retracted more slowly until they completely disappeared. Blebs were observed to occur randomly in location and time on the membrane. Blebbing is considered an alternative to lamellipodial amoeboid migration but, in contrast to filopodia and lamellipodia, involves little adhesion, displacement or directionality [33]. Two blebbing patterns were observed: polarized-directional and non-polarized. Polarized blebbing stages predominantly protruded blebs on one pole of the parasite (Figure 2A). Blebs would continuously expand and retract, often with physical overlap of several blebs simultaneously at the same location (Figure 1D-I, Movie 1).
Sometimes the blebbing pole was found to shift to another side of the parasite, or blebbing completely ceased. Non-polarized blebbing, or circus movement, was less common in C. shasta and it consisted of a massive cell membrane detachment that was initiated as a regular hemispherical bleb, which propagated rapidly and progressively around the surface of the stage, returning to the initiation point ( Figure 2B, Movie 2). Circus movement has been reported in embryonic blastomeres of lower vertebrates [36], probably involved in embryonic cell migration [37]. Full circumnavigation lasted 19-27 s and the circular surface expansion could repeat several times (up to 3 observed). While polarized blebbing stages were observed to create some degree of physical displacement, no displacement was observed for the stages with circus blebs and its functionality for C. shasta stages is unclear.
Blebbing was long considered a hallmark of cell apoptosis [32] and it may play an apoptotic role in C. shasta sporogonic stages, at a later stage of development. In some mature plasmodia containing myxospores, blebbing stages expelled single mature spores (Movie 3). In others, active blebbing happened before bursting and releasing the spores from the primary cell (Movie 4).
Filopodia and Lamellipodia Promote Myxozoan Adhesion While Adhesive Surfaces Promote the Formation of These Cell Protrusions
Filopodia and lamellipodia have important roles in cell migration and adhesion, including substrate tethering and environment probing [38]. They are typical for cells migrating in a mesenchymal mode, i.e., irregularly shaped cells with strong adhesion by integrin and degradation of the ECM [39]. These cell protrusions have previously been reported for other myxozoans e.g., at the anterior pole of freely swimming bile stages of C. puntazzi [12]. Filopodia and lamellipodia played a major role in C. shasta adhesion. Adhesive stages were motionless, with filopodia and lamellipodia as abundant cell protrusions that increase surface contact of otherwise static cells to the host ECM (Figures 3 and 4). Filopodia were F-actin rich, 1.3-24.1 µm in length and 0.3-1.8 µm (n = 68) in thickness and were often ramified and distributed in a 3D radiating pattern or star-like arrangement ( Figure 3A,B,D and Figure 4A). Lamellipodia ( Figure 3C,G-I) were flat F-actin rich sheet-like protrusions ( Figure 3J-L), observed frequently with small filopodia projecting from the margin of the cell ( Figure 3G-I,L). Usually, these lamellipodia were located on one side, with single and ramified filopodia present on the remainder of the parasite body ( Figure 3C,G,H). Some stages showed small F-actin-rich crests on the surface ( Figure 3F,K). Intraepithelial stages in the intestine, caeca and testes showed sheet-like lamellipodia and filopodia that extended between host cells in a degraded ECM ( Figure 4B,C) probably probing the environment, guiding cell migration by connecting the parasite cytoskeleton with the host ECM [40] while also feeding on host proteins. Some of the stages had thin and long single filopodia ( Figure 4D) whereas others had groups of filopodia extending between host cells ( Figure 4E). In testes, these cell protrusions deeply anchored the parasite into host cells, resulting in parasites completely embedded in otherwise intact tissue ( Figure 4F-K). 
These protrusions were supported by a mesh of actin filaments ( Figure 4G,H). Cell protrusions were mainly exhibited by the primary cell, however, small filopodia were observed on secondary and tertiary cells protruding into the primary and secondary cells respectively ( Figure 4L).
Parasite stages exposed to an adhesive 2D substrate (i.e., the ECM-binding protein fibronectin) showed extraordinarily high attraction and binding activity to the adhesive surface. An increased occurrence of filopodia and lamellipodia was observed in 60.4% (136/225) of the parasites on fibronectin, whereas only 21.6% (53/245) showed these protrusions on uncoated slides ( Figure 5A). Formation of blebs was discontinued in parasites exposed to an adhesive surface, with only 0.8% (2/225) of them blebbing, in contrast to 18.6% (46/245) in the control group, supporting their role in an amoeboid motility mode that requires little adhesion. Using SEM, the distribution pattern of filopodia-lamellipodia changed from a 3D distribution in the control stages ( Figure 5B) to a 2D distribution in stages on fibronectin ( Figure 5C-H), with a strong affinity for the coated surface. The protrusions of stages on fibronectin extended radially 1.9 to 14.9 µm from the body surface (similar to the filopodia length of non-treated stages). Parasite adhesion was complete ( Figure 5D) or partial ( Figure 5E). Radially distributed lamellipodia showed further filopodia projected from the external margin ( Figure 5D). Polarized stages showed a large sheet-like lamellipodium on one pole and filopodia on the other side ( Figure 5F,G). In some cases, stages showed long ramified filopodia extending over large areas ( Figure 5F). Some filopodia showed slightly thickened tips ( Figure 5H).
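The contrast in protrusion frequency between fibronectin-coated (60.4%, 136/225) and uncoated (21.6%, 53/245) slides can be checked with a standard two-proportion z-test. This is an illustrative sketch using only the counts reported above; the choice of test and the helper function `two_proportion_z` are ours, not a method stated in the text.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test using the pooled estimate under H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1, p2, (p1 - p2) / se

# Counts reported in the text: stages with protrusions on fibronectin vs. uncoated slides.
p_fib, p_ctrl, z = two_proportion_z(136, 225, 53, 245)
print(f"fibronectin: {p_fib:.1%}, control: {p_ctrl:.1%}, z = {z:.1f}")
```

With these counts z ≈ 8.6, far above the ~1.96 threshold for p < 0.05, so the enrichment of protrusions on the adhesive surface is highly significant.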
Crawling Stages: Fast and Directional Motility
Crawling stages showed active displacement (Movie 5) but were the least frequent form of motility (5.3%; 10/188). When performing this motility, the usually round stages would stretch and acquire a pyriform shape, with an active round anterior edge. The posterior end, or uropod, was static and dragged along by the anterior end. The uropod possessed several short static filopodia with a root-like appearance. The leading edge position varied, dragging the parasite in different directions. Strong cytoplasmic streaming was observed in the primary cell of the stage, with secondary cells and/or sporoblasts sometimes containing mature spores that were moved or pushed within the primary cell to the anterior pole. The speed of displacement was between 5.4 and 19.4 µm/min. We did not determine the F-actin distribution in crawling stages.
Motility Mode Switching Optimizes C. shasta Migration in Complex Environments
C. shasta IIR stages showed the ability to switch between different cell morphologies, protrusions and motility types. Rapid transitions were observed between blebs, lamellipodia and filopodia, in different combinations. Alternation of blebs and actin-rich protrusions is common in 3D environments and some cells use this ability for a directed, more precise migration [41]. Parasites could switch between blebbing and lamellipodia with filopodia (Movie 6), or change from polarized to circus blebbing (Movie 1). The reversible nature of blebbing and the long-term survival of blebbing cells are signs of blebs involved in cell motility [42]. The reversible nature and long-term survival of blebs observed in C. shasta support the non-apoptotic role of this form of motility. In some cases, C. shasta stages had both blebs and actin-rich protrusions (static filopodia) simultaneously (Movie 6) or could transition to motionless stages with filopodia, and stop all visible displacement. Plasticity in cell protrusion formation is thought to optimize cell migration in complex environments (e.g., during embryonic development, cell chemotaxis) and to promote cancer dissemination [32].
Physical and Morphological Differences in Motility-Related Structures Exist Between C. shasta Genotypes
Motility of type I C. shasta stages from Chinook salmon differed from type IIR stages from rainbow trout. While possessing the same type of cell protrusions, i.e., lamellipodia, filopodia and blebs, type I stages were strongly directional with all protrusions simultaneously produced at the anterior pole. We did not observe type I stages that displayed only blebbing or crawling behavior, nor represented adhesive stages exclusively, as observed for IIR. Type I migrating stages were pyriform to round ( Figure 6A-C), with two well-defined ends: (1) a leading edge, with large and profuse blebbing and lamellipodia/filopodia and (2) a posterior end with long and extensible filaments that acted as a root or uropod, anchoring the stage to host cells or other parasites (Movie 7). The posterior filaments were unique to type I and were observed using SEM ( Figure 6D,E). Parasites were able to migrate, pushing and moving forward between host cells using this configuration (Movie 7) at a speed of 3.6-5.8 µm/min and hence slower than type IIR crawling stages (see above). With this combination of cell protrusions, these stages showed an exploratory behavior in the ascites, rather than targeted migration (Movie 8). We previously observed that a large genetic divergence of cell migration genes exists between genotypes I and IIR (Alama-Bermejo et al., in preparation) and these differences may be reflected in considerable differences in their migration phenotypes.
Low Proliferation and Delayed Spore Production Characterize Low Virulent Genotype Infections; Fast and Massive Proliferation Characterizes Virulent Genotypes
The disease dynamics were markedly different between genotypes. Low virulent genotype 0 fish showed no clinical signs, no parasite stages were observed microscopically during the first month and there were no mortalities. We detected mature spores in the feces of otherwise healthy fish three months following exposure. Genotype 0 infection was confirmed by genotyping. Mortality in genotype IIR infected fish first occurred on 19 dpe, peaked on 22 dpe (6 fish) ( Figure 7A), and reached 100% on 28 dpe. The first clinical signs in these fish occurred on 15 dpe, with enlarged intestine and whitish liver. Early parasite stages were observed microscopically in 1/5 fish on 15 dpe. On 22 dpe, all fish were heavily infected, with swollen abdomen, ascites, hemorrhagic liver with white nodules, and whitish and enlarged intestine. By 29 dpe, fish became lethargic and emaciated. Internal organs looked similar to the 22 dpe infection, except for the kidney, which was enlarged and had white nodules. Microscopically, sporogonic stages and mature spores were observed between 22 and 29 dpe. On 29 dpe, mature spores were predominant in the gut.
Parasite dose was approximately 18 spores/L for genotype 0 at both sampling points, while genotype IIR was undetected at the beginning but measured 16 spores/L at the end of the exposure. Gills were PCR positive on all sampling days for both genotypes. Gill samples from 1, 7 and 29 dpe were quantified for parasite copy numbers ( Figure 7B, Table S2), revealing similar copy numbers on 1 and 7 dpe for both genotypes. On 29 dpe, genotype 0 levels in the gill remained unchanged while genotype IIR increased nearly 400-fold.
Detection of the parasite in blood (samples quantified on 1, 15 and 29 dpe) was less than 1 copy for both genotypes on 1 and 15 dpe ( Figure 7B). On 29 dpe, genotype 0 remained low in the blood while IIR was detected at a higher copy number, but with high variability (Table S2). The intestine was PCR positive for both genotypes except on 1 dpe. Parasite quantities in the intestine ( Figure 7B, Table S2) followed a similar trend for both types: numbers increased over time, peaking on 22 dpe. However, intestinal parasite copy numbers of IIR infected fish were 21- to 152-fold higher than in genotype 0 fish. Type 0 copy numbers decreased 9-fold between 29 and 60 dpe.
β-actin, Integrin-β, Talin and RhoA Are Upregulated in C. shasta Virulent Genotypes

Comparison of motility gene expression in the intestines of rainbow trout infected with virulent genotype IIR relative to low virulent genotype 0 revealed that four genes were upregulated: β-actin, integrin-β, talin and RhoA, with the highest fold changes observed for integrin-β (up to 54-fold change) and β-actin (up to 21-fold change) ( Figure 7C, Table S3).
β-actin was the only actomyosin machinery-related gene showing significant upregulation at all time points in IIR infections, ranging from 4- to 21-fold change. Coronin, coactosin and myosin-10 were downregulated throughout the infection. The cell adhesion gene integrin-β showed the highest fold changes in this study, with 31- and 54-fold increases late in the infection (22 and 29 dpe, respectively). Talin showed a similar pattern, with a 7-fold change on 29 dpe. RhoA was the only motility regulator gene upregulated, with up to 2-fold change (15 and 29 dpe). Rac1 was downregulated throughout the infection.
The comparison of gene expression over time between genotypes revealed further differences and opposite trends ( Figure 7C, Table S4). β-actin expression was extremely high at the beginning of the infection with virulent type IIR (7 and 15 dpe), while it was highest on 15 and 22 dpe in the low virulent type 0. Myosin-10 decreased over time in type 0 but showed no clear trend in IIR. In type 0 infections, the genes involved in cell adhesion, integrin-β and talin, showed a significant change in expression levels over time, with the highest expression on 7 and 15 dpe, followed by a decrease on 22 and 29 dpe. In contrast, a strong increase of integrin-β and a moderate increase of talin expression at these latter timepoints was observed in IIR infections. This indicates a high recruitment of adhesion-related genes for the virulent type IIR during later stages of infection. Amongst the genes involved in cell motility regulation, RhoA showed no significant differences over time for either genotype, and expression of the gene involved in mesenchymal motility regulation, Rac1, increased over time for IIR with a significant difference between early and late time points.
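Fold changes like those reported above are conventionally derived from qPCR cycle-threshold (Ct) values with the 2^(−ΔΔCt) method. The sketch below is a generic illustration of that calculation with hypothetical Ct values; the paper's actual Ct data and normalization genes are not given here, so none of these numbers are the authors'.

```python
def fold_change_ddct(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method."""
    d_ct_test = ct_target_test - ct_ref_test  # normalize test sample to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize control sample to reference gene
    dd_ct = d_ct_test - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 3 cycles earlier (relative to the
# reference gene) in the test condition, i.e. a 2^3 = 8-fold upregulation.
fc = fold_change_ddct(22.0, 18.0, 25.0, 18.0)
print(fc)  # 8.0
```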
Discussion
The parasitic cnidarian C. shasta develops and proliferates intercellularly in all layers of the intestine [19] resulting in a high level of contact and interaction with the host ECM. The present observations of C. shasta motility/protrusion modes and differential motility gene expression in different genotypes provide a first comprehensive characterization of the toolbox enabling migration and demonstrate morphological, cell multiplication and behavioral differences between virulent and avirulent genotypes of these parasites (Figure 8, Table S5).
Fast Proliferation and Rapid Bleb-Based Migration Characterize Virulent C. shasta Strain Invasion
We demonstrate that both migration strategies and rates of parasite proliferation differ between virulent and less virulent genotypes of C. shasta, as reported for other pathogens [4,43]. While both type IIR and 0 managed to establish and multiply in their fish host, IIR proliferated more rapidly during early infection in blood, gills and intestine. This difference could be an adaptation of virulent genotypes that allow the parasite to complete its development, and hence its transmission, before the host can respond immunologically [44]. The higher and early expression of β-actin in the virulent type suggests more motile and/or dividing stages during early infection, which could contribute to initial fast growth, multiplication and spread of the parasite [45].
In the virulent IIR genotype, rapid blebbing appears to be the chosen migration mode in an intact intestinal ECM. Downregulation of coronin (controller of actin subunits flux) [46] and coactosin (actin polymerization) [47] suggests that this genotype favors non F-actin rich amoeboid migration, such as blebbing motility. Furthermore, RhoA upregulation in IIR supports non-lamellipodial amoeboid motility as the preferred mode in the virulent genotype, as high levels of RhoA in the cells inhibit lamellipodial-based migration and induce a switch to bleb-based cell migration [48]. As this represents a faster migration mode than mesenchymal migration [39], it may further explain why the parasite is able to reach and spread quickly in the intestine, using blebs to push and squeeze their way through the gaps.
Virulent Genotypes Interact with and Disrupt ECM at Late Stage Infection
Virulent IIR C. shasta infections are associated with massive destruction and loss of intestinal epithelium structure, with associated host mortality. The disease outcome is probably caused by a simultaneous effect of different factors affecting the ECM structure: parasite characteristics (proteolysis, feeding, adhesion) and host immune responses (inflammatory response, remodeling of ECM), as reported for other pathogens [49]. C. shasta adhesive structures likely play a very important role in shaping the virulence of the parasite at this stage of the infection. We demonstrate that C. shasta filopodia and lamellipodia have very strong affinity for glycoprotein components of the ECM, such as fibronectin. Parasite adhesion factors integrin-β and talin are upregulated late in the infection, and appear to be important in inducing changes in the ECM. Changes in adhesive substrates can induce cell haptotaxis, which is mediated by integrins. This cell movement plays important roles in tumor cell dissemination [50] and may be equally important in parasite dissemination. Interestingly, upregulation of these genes in IIR coincides with the change from an intestine localized infection to dispersion into and proliferation in other organs, i.e., liver, kidney, testes.
Active lamellipodia-based migration requires Rac1-mediated actin polymerization [51]. Rac1 was downregulated in genotype IIR, especially during initial infection, suggesting that lamellipodial-based migration is not the preferred motility mode for the virulent genotype. However, this GTPase showed a significant increase over time for IIR which suggests an increased use of lamellipodia and potentially a more proteolytic mesenchymal migration mode for the virulent genotype, during late infection. Together, these findings suggest that pathogenesis of IIR stages is likely related to a high level of interaction with the ECM (upregulated adhesion factors). Modulation and destruction of the ECM by means of adhesion is probably facilitating feeding and proliferation of genotype IIR, potentially promoting its haptotactic motility and consequent dispersion (systemic infection) and is the reason for its differential pathogenic capacity.
Early Direction-Driven Invasion Followed by Low Proliferation and Slow Mesenchymal Migration to Target Site Characterizes Low Virulent C. shasta
While the virulent C. shasta genotype IIR has been intensively studied due to its effects on rainbow trout health, this is the first attempt to unravel the biology of the less virulent type 0 in a comparative approach. The infection strategy of type 0 is characterized by low proliferation rates, less active stages and a delayed parasite development. This is revealed by low parasite copy numbers, downregulation of β-actin expression and long-term spore release (first spores observed after three months and up to 2 years pe, Atkinson & Bartholomew, personal communication).
Despite the low proliferation rate and less active stages of the low virulent genotype, these stages seem to perform strong directional and adhesive migration in the first stages of infection in the intestine. Upregulation of adhesive factors integrin-β and talin early in the infection suggests that mesenchymal migration may have a relevant role during invasion. Mesenchymally migrating cells acquire an elongated shape with a leading edge [39]. Type 0 stages showed early increase in expression of myosin-10, a gene involved in cell migration direction (front-to-back), suggesting a strong targeted migration during invasion, coordinating protrusion and stabilizing cell polarity [52,53]. In contrast, the frequently undefined polarity (cell protrusions projected in different directions) of IIR stages and the overall downregulation of its myosin-10 expression suggests cell polarity is not of major importance to virulence in C. shasta.
Moderate Exploitation of Target Tissues by Less Virulent Genotype
The initial mesenchymal migration strategy is abandoned later in the infection of type 0, with decreased expression of both adhesion factors and myosin-10. After initial invasion of the intestine, type 0 stages appear to proliferate slowly and form spores in the gaps of the ECM, while IIR proliferates rapidly and spreads widely. These differences suggest that type IIR stages are more active in the target organ than type 0, which may be related to the ability to respond to nutrient depletion and outgrowth of their own metabolism. Parasites like Entamoeba histolytica Schaudinn, 1903 show increased motility as a response to nutrient depletion and repellence by their glycolysis by-products [54]. In a cycle of cause and effect, increased proliferation requires more resources, forcing stages to migrate to other areas and organs where they can continue to feed and reproduce, thereby increasing pathogenesis and virulence.
Recent findings show that genotype 0 does not elicit an evident host immune response (Taggart-Murphy et al. in preparation) which together with phenotype, migration behavior and proliferation rate of this genotype likely indicates a high level of host-parasite mutual adaptation. This strengthens the hypothesis that the relationship between rainbow trout and C. shasta genotype IIR is out of balance due to the relatively recent encounter of the parasite and this new naïve host in which the parasite multiplies in an uncontrolled manner.
Conclusions
This study revealed the great diversity of morphologies, motility types and protrusions of the parasitic cnidarian C. shasta in its salmonid hosts. The phenotypic plasticity and the parasites' ability to switch between motility modes suggests a high capacity for adaptation to a changing microenvironment. Differential morphology and gene expression patterns in C. shasta genotypes characterized by different degrees of virulence revealed that parasite adhesion and increased spread represents an important pathogenic mechanism that shapes myxozoan virulence. Virulent genotype IIR is characterized by fast initial proliferation and rapid bleb-based migration, followed by increased parasite adhesiveness with massive interaction and disruption of the host intestinal ECM at late stage infection. The less virulent genotype 0 is characterized by low proliferation rates and slow direction-driven mesenchymal migration, without massive exploitation of target tissues. Myxozoan integrins are spotlighted as attractive chemotherapeutic intervention targets, due to their essential role in virulent interactions, as well as their known function in leukocyte homing, inflammation and cancer. Anti-integrin therapies have been successful in gut-related diseases such as inflammatory bowel disease [55]. As a first step toward controlling the enteronecrosis disease in salmonids, we need to obtain a better understanding of the reciprocal feedback between C. shasta parasite cells, the host ECM and the immune system.

Funding: This work was supported by the following funding agencies: Czech Science Foundation (project 14-28784P-Gema Alama-Bermejo, 19-28399X AQUAPARA-OMICS-Astrid Holzer) and Consellería de Educación, Investigación, Cultura y Deporte, Valencia, Spain (APOSTD/2013/087-Gema Alama-Bermejo).
Absence of Localization in Certain Field Effect Transistors
We review some experimental and theoretical results on the metal-to-insulator transition (MIT) observed at zero magnetic field (B=0) in several two-dimensional electron systems (2DES). Scaling of the conductance and magnetic field dependence of the conductance provide convincing evidence that the MIT is driven by Coulomb interactions among the carriers and is dramatically sensitive to spin polarization of the carriers.
I. INTRODUCTION
Landauer's early work on scattering of particles and waves in random media [1] is still important today, because after decades of work, the physics of transport in disordered systems is still keeping us entertained with unexpected results. The generic "conductivity" that arises from ham-handedly averaging microscopic details and sweeping Coulomb interactions under the rug of effective mass is a poor description of transport in many circumstances. In the past two decades there have been several discoveries in experiments (much of it spurred or explained theoretically by Landauer and his co-workers) that have flown dramatically in the face of the classical conductivity. Response from quantum mechanically phase-coherent carrier excitations has appeared in studies of metals and semiconductors in both the relatively clean and very dirty limits of impurity scattering. In high magnetic fields the re-writing of Ohm's law has been especially dramatic. Noise from fluctuations in the transport parameters has behaved in ways that contradict the simple models that lead to the generic resistivity [2]. Finally, Coulomb interactions are appearing as large perturbations of the non-interacting carrier models that have dominated thinking for two decades.
Since the appearance of the scaling theory for noninteracting (Fermi-liquid) particles [3], most have accepted the result that there is no minimum metallic conductivity in three dimensions (3D) and no metallic behavior at all in two dimensions (2D). The non-interacting models were developed in terms of a scaling function that related dimensionless conductance g = G/(e^2/h) (G is the conductance of the sample in 1/Ω) to the size L of the (hyper-cube) sample. For very low conductance (g ≪ 1), the transport is exponentially reduced as L increases because the carriers are localized and must hop or tunnel from site to site. For g ≫ 1, Ohm's law is assumed to hold: g ∝ L^(d−2) for cubes of dimension d. In weak disorder, perturbative calculations from non-interacting models of carrier transport lead to reduction of β, indicating a (perfectly plausible) trend toward lower conductance and eventually to an insulator as the amount of disorder increases (illustrated in figure 1). According to this picture, a 3D system is a conductor if the Fermi energy E_F is larger than the characteristic amplitude of the disorder potential W. This criterion is equivalent to the Ioffe-Regel criterion that k_F l > 1, where k_F is the Fermi wave-number and l is the mean free path. In 2D, there is no metallic behavior for any value of W ≠ 0. That is, the proposed 2D scaling function β < 0. Both proclamations are reasonable at first glance, but in both cases the Coulomb interactions among the carriers have largely been swept beneath the rug of effective mass and (often ignored) Fermi liquid parameters. Subsequent work, however, has brought into question the validity of these simple ideas. In particular, the absence of a MIT in 2D has been disputed by Finkelstein [4], who has emphasized that corrections from interacting particles become important as T = 0 is approached.
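The limiting forms of the scaling function follow directly from the two regimes just described. Writing β(g) ≡ d ln g / d ln L, a short standard derivation (not specific to this paper) is:

```latex
\beta(g) \equiv \frac{d\ln g}{d\ln L},\qquad
\begin{cases}
g = \sigma\, L^{\,d-2} & (g \gg 1,\ \text{Ohm's law}) \;\Rightarrow\; \beta \to d-2,\\[4pt]
g \propto e^{-L/\xi} & (g \ll 1,\ \text{localized}) \;\Rightarrow\; \beta = -\,L/\xi \sim \ln(g/g_0) \to -\infty .
\end{cases}
```

In d = 2 the ohmic limit gives β → 0, and the perturbative weak-localization corrections push β below zero for all finite g, which is why the non-interacting theory predicts an insulator for any nonzero disorder in 2D.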
These and more recent [5] considerations of interacting systems have suggested that β can change sign for d = 2, i.e. that there can exist a 2D MIT.

[Figure 2: Illustration that the effects of growth of L_ϕ ∝ 1/T^p can be cut off by a finite sample length L.]
In fact, a wide variety of two-dimensional systems exhibit transitions like the MIT. The Kosterlitz-Thouless-Berezinskii transition [6] in 2D superfluids is an old example. There are more recent examples of proper conductor-insulator transitions in two-dimensional electrical transport problems. Granular films of superconducting materials exhibit a transition from insulator to superconductor as the film thickness increases [7]. The transition is driven by the superconducting fluctuations in competition with the Coulomb (charging) interactions between the grains of material [8]. In oxide superconductors, the 2D sheets of oxygen atoms lead to a highly directional order parameter and, depending on doping level, exhibit a superconductor to insulator transition somewhat like that in granular films [9]. Between plateaux in the quantized Hall effect, which demands very high magnetic field B such that the Landau energy ℏω_c ≫ E_F, there is another 2D MIT, manifest as a reversal of the temperature coefficient of the resistivity ρ [10,11]. All of these examples appear in rather exotic circumstances (Landau quantization or Cooper pairing), but there is a manifestation of a 2D MIT in garden-variety conductors, and that transition is the subject of the present work.
In generic silicon metal-oxide-semiconductor field-effect transistors (MOSFETs), which comprise strictly 2D electrons (or holes) between reservoirs of 3D carriers [12], there is a MIT [13] for MOSFETs with sufficiently low disorder and high carrier mobility [15]. In the MOSFETs, the transition occurs among normal (non-superconducting) carriers at zero magnetic field. The transition occurs at relatively low electron densities n_s of the order of 1 × 10^15 /m^2, where the Coulomb energy (U ∼ √n_s) is larger than E_F.
Clear signatures of the 2D MIT have been observed at B = 0 in generic two-dimensional electron systems (2DES) as a reversal of the sign of the temperature coefficient of conductance as the carrier density crosses through some critical value n_c: dG(T)/dT changes from positive to negative. The same signature is found in a variety of materials, including electrons in Si-MOSFETs [13,15], holes in GaAs/AlGaAs heterostructures [17,18], holes in SiGe/Si heterostructures [19], and electrons in Si/SiGe [20].
For all versions of the 2D MIT, a scaling of the resistivity data against a control parameter can be accomplished in a form that follows directly from standard scaling considerations. In earlier work, this scaling form was employed to explain the superconductor-insulator transition in granular metal films [8]. Written in the form appropriate to describe the MOSFET experiment (where the control parameter is n_s), the scaling equation is [5]

σ(δ_n, T) = σ_c f(δ_n/T^{1/zν}),

where δ_n = (n_s − n_c)/n_c is the distance from the transition and z and ν are exponents associated with the scaling of a microscopic correlation length ξ, which measures the regions over which the quantum interactions are effective, i.e. it is related to the phase coherence length L_ϕ. In the vicinity of the MIT, the correlation length is assumed to diverge as ξ = 1/|δ_n|^ν. Note that standard scaling arguments are typically formulated in terms of the scale dependence of the conductance at T = 0. It is clear that the T = 0 limit of the theoretical models is a poor approximation to any experiment, since it presumes that the coherence length for the carrier excitations is infinite. As first emphasized by Thouless [21], this assumption is false at any finite temperature; more recent arguments [22] question its validity even at T = 0. The predicted β can be related to more useful observables by noting that the effective sample size for quantum interactions and interferences is not the patterned sample size but the length scale L_ϕ on which carriers retain phase coherence or phase memory. For most systems, this characteristic thermal length scales as a power of the temperature, viz. L_ϕ ∝ 1/T^{1/z}, where z is the so-called dynamical exponent (z = 1/p in "weak localization" parlance). In this way, the scaling function in the length "domain" can be converted to β_T = −d(ln g)/d(ln T) in the T "domain" [5].
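The data collapse that this scaling form implies can be illustrated numerically. The sketch below generates synthetic conductivities from the exponential quantum-critical form quoted later in the text, σ = σ_c exp(Aδ_n/T^{1/zν}); that functional form, and all parameter values except n_c and zν (which the text quotes for the large MOSFETs), are assumptions used only to demonstrate the collapse:

```python
# Sketch: two-branch scaling collapse of sigma(n_s, T).
# Synthetic data from sigma = sigma_c * exp(A * delta_n / T**(1/(z*nu))),
# an assumed critical form; sigma_c and A are hypothetical.
import math

sigma_c, A, znu = 0.5, 2.0, 1.6   # znu = 1.6 as quoted for large MOSFETs
n_c = 1.65e15                     # critical density from the text, /m^2

def sigma(n_s, T):
    delta = (n_s - n_c) / n_c
    return sigma_c * math.exp(A * delta / T**(1.0 / znu))

def scaled(n_s, T):
    """Return (x, sigma) with the single scaled variable x = delta_n / T**(1/znu)."""
    x = ((n_s - n_c) / n_c) / T**(1.0 / znu)
    return x, sigma(n_s, T)

# Two different (n_s, T) pairs chosen to share the same scaled variable x...
x1, s1 = scaled(1.8e15, 1.0)
x2, s2 = scaled(n_c * (1 + (1.8e15 / n_c - 1) * 2**(1 / znu)), 2.0)
# ...land on the same point of the scaling function: curves taken at
# different T collapse onto one two-branched curve when plotted vs x.
```

Any pair of densities and temperatures with equal x gives equal σ, which is exactly what a plot like Fig. 3(b) displays.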
We emphasize that if the sample size L < L_ϕ, then the temperature dependence may be "short-circuited" by finite-size effects, as illustrated in figure 2.
The use of quantum phase transition [23] models to explain the 2D MIT carries a certain baggage with it. Such models are generally accepted for a superfluid (boson) ground state and a transition only at T = 0 to an insulating phase. For the quantized Hall effect or the granular superconductor, there is an obvious choice of ground state in the dissipationless (ρ = 0) behavior characteristic of these two physical systems. This raises the question, however, of the ground state in the high-mobility MOSFETs: what is going on at T = 0? The appearance of the 2D MIT in experiments has led to something of an avalanche of theoretical ideas, with "explanations" of the metallic behavior ranging from valley crossings [24], to triplet superconductivity [25], to anyon superconductivity [26], ... There have also been suggestions that non-interacting models with spin-orbit scattering can explain the data [27]. These efforts notwithstanding, the nature of the 2D metallic ground state and the 2D MIT remain some of the most challenging open problems of contemporary condensed matter science.
II. EXPERIMENT
We have studied the conductance G of two different sets of high-mobility n-channel Si-MOSFETs. A set of large-area (≃ 1 mm^2) FETs was fabricated on the (100) surface of silicon wafers doped with N_a ≈ 8.3 × 10^20 /m^3 acceptors. Corbino channels of length L = 0.4 mm and width w = 8 mm were formed with a poly-silicon gate above a 44 nm oxide layer. The measured residual oxide charge for these devices was ≈ 3 × 10^14 /m^2. These samples have been studied only at relatively high temperatures 1.2 < T < 4.2 K. Another set of samples, made in a different process run on very similar starting material, had a residual oxide charge of < 10^14 /m^2. This run comprised samples of various lengths 1 µm < L < 256 µm and widths 11 µm < w < 500 µm. For the shorter samples w ≫ L, so that even though L is small enough to compete with important microscopic length scales in the material, there is some hope of inferring averaged properties as a result of having many "mesoscopic" elements in parallel [28]. This latter batch of samples has been studied at much lower temperatures 0.01 < T < 4.2 K. All sets of FETs had rather high peak mobilities ≃ 1 m^2/V·s, or could achieve this regime with substrate bias [15]. All measurements of G were conducted in an electrically shielded enclosure with standard lock-in techniques. A source-drain voltage V_sd was applied and the resulting current I_sd was recorded as a function of the temperature T, the magnetic field B, or the gate voltage V_g, which controls the carrier density n_s = C_ox(V_g − V_th). V_th is the threshold for populating the channel, which can be inferred from higher-temperature transconductance or from Shubnikov-de Haas measurements at low temperatures, and C_ox is the capacitance per unit area of the gate oxide. Our experiments demonstrate, first, that the disorder in the devices must be low relative to other important energy scales.
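The relation n_s = C_ox(V_g − V_th) is easy to evaluate for the 44 nm oxide quoted above. In the sketch below, the oxide thickness comes from the text, while the SiO2 dielectric constant (3.9) and the chosen gate overdrive are illustrative assumptions:

```python
# Sketch: carrier density from gate voltage, n_s = C_ox * (V_g - V_th) / e.
# Oxide thickness (44 nm) is from the text; eps_r(SiO2) = 3.9 and the
# 0.34 V overdrive below are illustrative assumptions.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C

def oxide_capacitance(d_ox, eps_r=3.9):
    """Gate-oxide capacitance per unit area (F/m^2)."""
    return EPS0 * eps_r / d_ox

def carrier_density(V_overdrive, d_ox=44e-9):
    """n_s in 1/m^2 for a gate voltage V_overdrive = V_g - V_th above threshold."""
    return oxide_capacitance(d_ox) * V_overdrive / E

C_ox = oxide_capacitance(44e-9)   # ~7.8e-4 F/m^2
n_s = carrier_density(0.34)       # ~1.7e15 /m^2
```

With these numbers, a gate overdrive of only a few tenths of a volt places the channel near the critical density n_c ≃ 1.65 × 10^15 /m^2 discussed below.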
For a moderately disordered sample, we find that the slope of G(T) is consistently positive: all values of n_s scale as insulators and the conductivity vanishes as T → 0. For a substantial back-gate bias (e.g. V_sub = −9 V for the sample shown in Fig. 3), however, the mobility increases enough to permit a MIT at n_s = n_c ≃ 1.65 × 10^15 /m^2. The MIT appears as a change of sign of the slope of σ(T) = (L/w)G(T) at n_c [see Fig. 3(a)]. We infer that the peak mobility has increased by about 20%, but this is clearly sufficient to change the behavior of the 2DES dramatically, even though the value of σ at n_s = n_c has not changed much.
More importantly, all of the conductance curves G(V_g, T) = G(n_s, T) can be scaled against δ_n = (n_s − n_c)/n_c to form a single two-branched function [Fig. 3(b)], as expected from the scaling arguments mentioned above. We take this as a signature of a quantum phase transition in the 2DES. In figure 3(c) we plot σ(δ_n, T) = σ_c exp(Aδ_n/T^{1/zν}) for all data from figure 3(b), and find agreement with the theoretical prediction [5] for the temperature dependence in the quantum critical region T > T_0, where the crossover temperature T_0 ∝ |δ_n|^{zν} is shown by the dashed line in Fig. 3(c). Very similar scaling of σ has been obtained in other 2DES samples in a variety of physical circumstances [7,10,11,13,15]. A related scaling of σ(E, T, n_s), where E is the electric field applied between the source and drain, has been observed in some 2DES [14]. This collection of experimental results is overwhelming evidence that the MIT is a quantum phase transition, and it exhibits similarities with transitions in granular superconducting thin films, oxide superconductors, and plateaux transitions in the quantized Hall effect.
In shorter MOSFETs we find a similar scaling of σ for moderate temperatures 1 < T < 4.2 K. Our results are illustrated in figure 4, which contains measurements from an L = 1.25 µm, w = 11.5 µm MOSFET. The reversal of sign of the slope dG/dT occurs at essentially the same value as for the larger MOSFETs, and the conductivity σ_c at the MIT is ∼ e^2/2h, in agreement with many other experiments on MOSFETs. A similar value of σ_c obtains in completely different physical circumstances such as the granular films or the quantized Hall systems, but there the amplitude of σ is governed by physics (charging energy and phase fluctuations, or the dominance of a particular momentum mode) that is not directly germane to the MIT. Yet a third set of samples, with electrons confined at the interface between Si and SiGe, has exhibited very similar response [20]. This result (see figure 5) is especially intriguing, because the mobility is very high compared with the MOSFET experiments [29], in the same range as for electrons in GaAs heterostructures, which have not exhibited the MIT to date. Again the slope dG/dT changes sign at a concentration n_c = 1.8 × 10^15 /m^2, but owing to the higher mobility, the value of σ_c ≃ 80 e^2/h is very large compared to the MOSFET experiments. This result proves the existence of the MIT in yet another materials system. More interestingly, it lays to rest once and for all the suggestion (quoted widely and believed even more widely) that the value σ_c is a "universal" number. Indeed, subsequent studies of the MIT in 2D hole systems in SiGe/Si [19] and GaAs/AlGaAs heterostructures [18] have found σ_c ∼ 2e^2/h, almost a factor of seven larger than in some Si MOSFETs.
A much more plausible expectation is that r_s = U/E_F will be a universal number. This, in fact, is not borne out across all samples: r_s ∼ 12-19 at the transition for electrons in Si MOSFETs and Si/SiGe quantum wells, r_s ∼ 13 for holes in SiGe/Si heterostructures, and r_s ∼ 11-23 for holes in GaAs/AlGaAs heterostructures. These numbers, however, are much closer to each other (within a factor of two) than the values of σ_c, which are distributed across more than two orders of magnitude.
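An order-of-magnitude estimate of r_s = U/E_F at these densities is straightforward. The sketch below uses the textbook Si(100) inversion-layer values (m* ≈ 0.19 m_e, spin and valley degeneracy 2, average Si/SiO2 dielectric constant ≈ 7.7), which are assumptions not taken from this paper; note also that published r_s conventions differ by O(1) factors (e.g. whether valley degeneracy is included), so only the magnitude is meaningful:

```python
# Sketch: Coulomb-to-kinetic energy ratio r_s = U / E_F for a Si(100) 2DES.
# U ~ e^2 sqrt(n_s) / (4 pi eps0 eps_r) at the mean carrier spacing;
# E_F = pi hbar^2 n_s / (2 m*) for spin degeneracy 2 and two valleys.
# eps_r = 7.7 and m* = 0.19 m_e are textbook values, not from this paper.
import math

E = 1.602176634e-19
EPS0 = 8.8541878128e-12
HBAR = 1.054571817e-34
M_E = 9.1093837015e-31

def r_s(n_s, eps_r=7.7, m_eff=0.19 * M_E):
    U = E**2 * math.sqrt(n_s) / (4 * math.pi * EPS0 * eps_r)   # Coulomb energy
    E_F = math.pi * HBAR**2 * n_s / (2 * m_eff)                # Fermi energy
    return U / E_F

ratio = r_s(1e15)   # of order 10 at the densities where the MIT is observed
```

Since U ∝ √n_s while E_F ∝ n_s, r_s grows as the density is lowered, which is why the strongly interacting regime is reached near n_s ∼ 10^15 /m^2.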
In general, one expects the scaling exponents, such as z and ν, to reflect only the symmetry of the problem in question, and thus to be universal numbers. In the experiments, however, a large range of exponents is found. In the Si-MOSFETs at B = 0, zν = 1.6 ± 0.2 has appeared in two different sets of samples [13,15]. The same exponent has been reported for holes in SiGe/Si quantum wells [19]. For holes in GaAs/AlGaAs heterostructures [18] or electrons in Si/SiGe heterostructures [20], however, zν > 4. It is worth noting that these two experiments were on samples with very low disorder (k_F l ≫ 1) by comparison with the MOSFET and p-SiGe samples, where the relative effect of disorder is much stronger (k_F l ≃ 1). In the very short MOSFETs [30,31], zν is even larger: zν = 16 ± 4 for L ≈ 1 µm. Furthermore, the range of "scalable" temperature dependence is smaller than in the larger samples. In fact, for |δ_n| > 0.15 on the insulating side, the lowest temperature curve (△) already fails to scale with the higher temperature data. In the sample of figure 3, the data obeyed the scaling form well for 1.2 < T < 3.6 K (T = 1.2 K was the lowest temperature studied in that experiment), and for other large MOSFETs [13] the range of temperature extended down to 0.05 K. We attribute this saturation of the T dependence to the cut-off of L_ϕ, and hence ξ, by the sample length L.
IV. MAGNETOCONDUCTANCE
An applied magnetic field B alters the behavior of the carriers in the 2DES in different ways depending on the angle between the field and the plane of the 2DES. The perpendicular component of the magnetic field tends to bend the path of the carrier (the Lorentz force tends to make the carriers form circular orbits), as well as to split the energies of different spin orientations and consequently to polarize the spins of the carriers out of the plane of the 2DES. A parallel component of the magnetic field probably has little effect on the orbital motion of the carriers, but it still splits spin energies and polarizes the spins into the plane. Studies of G(B) have been employed fruitfully for decades as probes of the transport mechanisms of disordered systems [32]. There are detailed calculations of the functional form of G(B) for various conditions among non-interacting carriers [32]. The contributions from Zeeman interactions (triplet terms in the Hartree approximation) have been calculated, and for µgB ≲ k_B T (µg is the carrier magnetic moment and k_B is Boltzmann's constant) they lead to a negative parabolic magnetoconductance in a perpendicular magnetic field. In contrast, quantum interference of carriers (orbital effects) can lead to positive slopes of G(B) near B = 0.
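The crossover field separating the regime µgB ≲ k_B T from full Zeeman dominance is simple to estimate. The sketch below assumes the free-electron g-factor g = 2 and an illustrative temperature of 1 K (both assumptions, not values quoted by the paper), which lands near the ∼0.8 T scale cited further below:

```python
# Sketch: the field below which the Zeeman energy stays under k_B*T,
# B* = k_B * T / (g * mu_B). g = 2 and T = 1 K are illustrative assumptions.
KB = 1.380649e-23        # Boltzmann constant, J/K
MU_B = 9.2740100783e-24  # Bohr magneton, J/T

def crossover_field(T, g=2.0):
    """Field at which g * mu_B * B = k_B * T (tesla)."""
    return KB * T / (g * MU_B)

B_star = crossover_field(1.0)   # ~0.74 T at 1 K
```

Below B*, the negative parabolic Zeeman contribution is a small perturbation; above it, spin polarization effects take over.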
The MIT in MOSFETs is quenched by a large parallel magnetic field (B ≳ 1 T) [33,34], which in turn implies that polarizing the electron spins in the plane of the carrier motion has a dramatic effect on the correlations of the carriers. The conductance of nominally metallic 2DES (i.e. n_s > n_c) decreases by orders of magnitude and the slope of G(T) reverses from negative to positive, indicative of insulating samples. The application of B perpendicular to the plane of the 2DES also quenches the metallic phase [35,36]. Moreover, careful measurements of magnetoconductance in the quantum critical region in the presence of a perpendicular B provide quite a bit of insight into the relative importance of spin interactions and orbital motion at the MIT [15]. Figure 6 contains representative plots of magnetoconductance (MC) ∆σ/σ(B = 0), where ∆σ = σ(B) − σ(B = 0). Clearly, there is a positive and a negative magnetoconductance contribution to each curve, and the positive contribution is more accentuated near n_s = n_c = 1.65 × 10^15 /m^2. As we will see below, this is because the negative contribution has dropped to a minimum value at this point.
Each curve in figure 6 can be written as the sum of a positive contribution (with coefficient p) and a negative parabolic contribution −qB^2 (with coefficient q), where p, q > 0. The assumption of a parabolic form for the negative term is justified because (1) it fits the data and (2) it is the form expected for the electron-electron interaction contribution for B ≲ k_B T/µg ≃ 0.8 T [37]. We have decomposed the data following this form; the coefficient p is roughly independent of n_s and therefore a constant, while the coefficient q, i.e. the electron-electron interaction contribution to the MC, exhibits a minimum near n_s = n_c, as is obvious in figure 7(b). We have found exactly the same behavior of the MC in short MOSFETs [30,38] at temperatures down to 0.04 K, in spite of the presence of sizable conductance fluctuations.
Both positive and negative MC have been observed in other MOSFETs as well [34]. Since orbital effects (positive magnetoconductance) are absent in an applied parallel field, our results show clearly that MC of a 2DES does depend on the angle between the magnetic field and the plane of a 2DES, at least in the quantum critical region. Away from the critical region, the orbital effects at low fields become small compared to spin effects [see Fig. 7(b)], and only negative MC is observed. Therefore, a recent claim [36] about the absence of any angular dependence of MC is valid only far from the critical regime, in the metallic phase where the experiment [36] was carried out.
V. SCALING OF THE CONDUCTANCE
We can extract β_T ∝ β from our data by using the assumption of a power-law dependence of the correlation scale (the phase coherence length) on T [32,5]. One such plot for four of our samples is given in figure 8. Anticipating the dependence of L_ϕ suggested in the introduction, we have scaled the bare β_T by multiplying it by zν. The two large MOSFETs have zν = 1.6, and the two short MOSFETs yielded zν = 16. In spite of the disparity, all four curves collapse onto the same scaled scaling function, indicating that the underlying β is the same for all samples. A noteworthy feature is that β is essentially linear as it crosses through β = 0, as expected from recent scaling arguments [5]. For non-interacting particles, β < 0, as illustrated by the dotted lines in figure 1. Our experiment (which corroborates other measurements [39]) proves that β extends into the metallic region and thus provides clear evidence of the inadequacy of non-interacting models of charge carriers.
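One way to see why multiplying β_T by zν can collapse samples with very different exponents is to note that, if g(T) follows the exponential critical form quoted earlier (an assumed form, used here only for illustration), then β_T = −d(ln g)/d(ln T) = ln(g/g_c)/(zν), so the product zν·β_T = ln(g/g_c) is the same function of g for any zν:

```python
# Sketch: z*nu * beta_T is sample-independent for the assumed critical form
# g(T) = g_c * exp(A * delta / T**(1/(z*nu))).
import math

def g(T, A, delta, znu, g_c=1.0):
    return g_c * math.exp(A * delta / T**(1.0 / znu))

def beta_T(T, A, delta, znu, h=1e-6):
    """beta_T = -d(ln g)/d(ln T), by central difference in ln T."""
    lnT = math.log(T)
    up = math.log(g(math.exp(lnT + h), A, delta, znu))
    dn = math.log(g(math.exp(lnT - h), A, delta, znu))
    return -(up - dn) / (2 * h)

# Two 'samples' with very different exponents but the same A*delta at T = 1:
b1 = 1.6 * beta_T(1.0, A=2.0, delta=0.1, znu=1.6)
b2 = 16.0 * beta_T(1.0, A=2.0, delta=0.1, znu=16.0)
# Both equal ln(g/g_c) = A*delta = 0.2, despite znu differing by a factor of 10.
```

This is consistent with, though of course not a proof of, the observed collapse of the zν = 1.6 and zν = 16 samples onto one curve.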
An even more striking feature is evident from the present data: β decreases towards zero at large g, as proposed in Ref. [5]. Although this is by no means a necessary condition for such an exotic metallic state, it seems as if the system tries to restore Ohm's law in the large conductance (weak disorder) limit.
VI. SUMMARY
Our experiments corroborate other experiments on 2DES and prove the existence of a metal-insulator transition in two dimensions for sufficiently low disorder. The conductances can be reduced to a single (two-branched) scaling function that is consistent with recent proposals for Coulomb-driven quantum phase transitions. Measurements for different sample lengths and different levels of disorder demonstrate certain universal features of the transition. The transition occurs near a certain value of n_c for a given density of states (carrier effective mass). The conductance at the transition is not a universal constant. Magnetoconductance experiments suggest that spin polarization and Coulomb interactions of the carriers have a dramatic effect on the transition. The product of the conductance scaling function β_T and the correlation-scale exponents zν shows that the underlying scaling function β is the same for all MOSFETs. The inferred β violates the predictions of non-interacting models and supports recent predictions based on interacting carriers.
Cardiovascular disease risk among breast cancer survivors: an evolutionary concept analysis
Nursing: Research and Reviews 2017:7 9–16
Introduction
Cardiovascular diseases are the leading cause of death in the US and account for more than 600,000 deaths each year. 1 The American Heart Association has set a goal to reduce cardiovascular diseases by 20% by the year 2020. 2 Cancer survivors comprise a population who are at an increased risk of cardiovascular disease. [3][4][5] As a result of this combination of factors, a new field called cardio-oncology has emerged, which is focused on the treatment of patients with cardiovascular-related events after cancer.
The largest group of cancer survivors in the US comprises breast cancer survivors. As of 2016, approximately 3.5 million breast cancer survivors are living in the US. 6 Ninety percent of women diagnosed with breast cancer will survive for at least 5 years after diagnosis, 7 shifting the cause of death from breast cancer to other age-related conditions such as cardiovascular diseases. 8 Breast cancer survivors are at an increased risk of cardiovascular diseases due to adverse effects from cancer treatment, such as cardiotoxicity, which may persist from the time of treatment to survivorship. 9 Despite studies exploring cardiovascular disease risk among breast cancer survivors, guidelines for monitoring and reducing cardiovascular disease risk do not currently exist for patients treated in the US. Several leading organizations have developed general survivorship guidelines for breast cancer survivors, including the American Society of Clinical Oncology, the American Cancer Society, and the National Comprehensive Cancer Network. The American Society of Clinical Oncology and the American Cancer Society guidelines recommend cardiovascular disease risk monitoring as needed, similar to the general population, despite the higher risk of cardiovascular diseases in breast cancer survivors. 9 The National Comprehensive Cancer Network recommends baseline cardiac function assessment for all cancer survivors treated with anthracycline chemotherapy; 10 however, other chemotherapies and cancer treatments may also contribute to an increased cardiovascular disease risk. 9 [3][4][5] Yet, guidelines do not recommend long-term cardiac monitoring. While cardiotoxicity is addressed in these survivorship guidelines, recommendations for cardiovascular disease risk monitoring are not.
In contrast to the US guidelines, the European Society for Medical Oncology recommends that patients treated with anthracyclines have their cardiac function measured at baseline, prior to administration of anthracycline treatment, and 4 and 10 years after anthracycline treatment is completed. 11 The guidelines also take into account the effects of different cancer treatments on cardiovascular disease risk, including targeted and radiation therapies. Conforming to cardiovascular disease risk guidelines developed for the general population, as recommended by many American oncology organizations, may not be appropriate for breast cancer survivors due to their additional cancer treatment-related risk factors. Understanding the synergistic effects of cancer treatments combined with preexisting modifiable and nonmodifiable risk factors is crucial to defining the concept of cardiovascular disease risk among breast cancer survivors.
To facilitate our understanding of risk, a concept analysis using Rodgers' evolutionary concept analysis method was conducted on cardiovascular disease risk among breast cancer survivors. Cardiovascular disease risk factors in the general population are well known; however, cardiovascular disease risk among breast cancer survivors requires further clarification. The purpose of this concept analysis is to define cardiovascular disease risk for breast cancer survivors. First, the significance and background of cardiovascular disease risk among breast cancer survivors are reviewed. Next, the method of conducting Rodgers' evolutionary concept analysis on this topic is described. The results are examined following Rodgers' method. Finally, from the knowledge derived from this concept analysis, implications for nursing are provided.
Concept analysis method
Rodgers' evolutionary concept analysis method was selected for three reasons. First, it uses an inductive approach to develop the concept of cardiovascular disease risk among breast cancer survivors. Second, it applies a rigorous six-step method. Third, concepts are viewed as cyclical and continuously evolving. The six steps in Rodgers' concept analysis include the following: 1) identifying the concept and associated terms, 2) selecting an appropriate setting or sample for data collection, 3) collecting data to identify the attributes of the concept, 4) analyzing the characteristics of the concept (i.e., surrogate terms, related concepts, antecedents, and consequences), 5) identifying an exemplar of the concept, and 6) identifying hypotheses and implications for future development. 12 The identified concepts may be continually refined and synthesized as newer innovative research emerges. 12
Literature search method
A literature search was conducted on July 22, 2016, using the databases PubMed, EMBASE, and CINAHL. The key terms used to search the databases included "breast cancer OR breast neoplasm" AND "survivors" AND "cardiovascular disease risk OR cardiotoxicity". Only articles published in English were chosen for this literature review. The results yielded 357 articles; duplicates were removed, resulting in 293 articles. Articles were included if published within the past 15 years and full text was available, excluding 106 articles. The remaining 187 articles were assessed against the inclusion criteria: primary studies with breast cancer survivors among the population in which cardiovascular disease risk was explored. Twenty-one articles met the inclusion criteria. Ascendancy (searching forward to articles that have cited selected articles) and descendancy (exploring articles used in the reference lists) approaches were used to identify nine additional articles that met the inclusion criteria, resulting in a total of 30 articles for this review. Figure 1 shows a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagram describing the search strategy. Data were collected using the sample of 30 articles to conduct this concept analysis. The articles were reviewed to identify the attributes of the concept of cardiovascular disease risk among breast cancer survivors.
Results
Several themes were identified during the literature search and data analysis: conceptual definitions, operational definitions, and attributes of cardiovascular disease risk. The following sections outline findings from the 30 articles.
Conceptual definitions
Conceptual definitions include the necessary components of the concept or how it is scientifically understood. 13 During the review of literature, conceptual definitions of cardiovascular disease risk were identified as follows: 1) incidence of cardiovascular diseases, 2) presence of cardiovascular disease risk factors, and 3) changes in heart function. [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29] While one study included angina as a cardiovascular disease risk factor, 15 Darby et al indicated that angina would not be included as a risk factor due to difficulty with clinical identification of angina. 19 Other investigators conceptually defined cardiovascular disease risk as the presence of cardiovascular disease risk factors, including smoking and having an overweight or obese body mass index. 23,30,31 Changes in heart function, including decreases in left ventricular ejection fraction, presence of serum biomarkers, and changes in cardiorespiratory fitness, also comprised the conceptual definitions. [33][34][35][36][37][38][39][40] Three studies associated decreases in left ventricular ejection fraction with anthracycline chemotherapy and conceptually defined cardiovascular disease risk in breast cancer survivors as anthracycline-induced cardiotoxicity. 4,29,37

Operational definition
Operational definitions specify how the concept is measured or applied. 12 Cardiovascular disease risk was operationalized using multiple methods throughout the literature. [18][19][20][21][22][24][25][26][27],29 Authors of these studies used electronic health records, large databases, and/or linkages to death indexes as data sources. [33][34][35][36][37][38][39][40] Four studies used self-reported history of cardiovascular diseases or cardiovascular disease risk factors as an operational definition. 23,28,30,31

Attributes
According to Rodgers, attributes provide a "real definition as opposed to a nominal or dictionary definition". 12 Attributes are the features that comprise the concept as opposed to the conceptual definition of the concept. The attributes of cardiovascular disease risk in breast cancer survivors are categorized as cancer treatment, modifiable risk factors, and nonmodifiable risk factors.
Cancer treatment
Chemotherapy, targeted therapies, radiation, and endocrine therapy are common cancer treatments associated with cardiotoxicity in breast cancer survivors. Chemotherapy agents used to treat breast cancer, such as anthracyclines and taxanes, were associated with heart failure and ischemic heart disease. 15,17,24,38 The mechanism of action for anthracyclines, including epirubicin and doxorubicin, may lead to the production of free radicals. Anthracyclines were associated with the development of cardiomyopathy, arrhythmias, aortic stiffness, and heart failure. 3,4,35,39,40 Cardiotoxic symptoms from anthracycline therapy may either appear early in survivorship, such as within the first or second year, or have a late onset many years later. 3,4 In addition, the anthracycline drug class is dose dependent; therefore, an increase in dosage is associated with increased risk of heart failure. 3 Taxanes are often used as adjuvant therapy to anthracyclines in cancer treatment. When both drug classes were used concurrently, patients were at an increased risk of asymptomatic bradycardia. 39 Targeted therapies, including trastuzumab, used with anthracycline-based chemotherapy demonstrated increased cardiovascular disease risk, including the development of heart failure and cardiomyopathy. 16,20 Radiation therapy was associated with increased cardiovascular disease risk. 14,19,22,24 Radiation to the left breast increases radiation exposure to the heart, leading to cardiac ischemia or damage to the coronary arteries. 19 Endocrine therapies, including aromatase inhibitors, were also associated with an increased cardiovascular disease risk. 28 In summary, the attribute of cancer treatment is associated with cardiovascular disease development and facilitates the definition of cardiovascular disease risk among breast cancer survivors.
Modifiable factors
Modifiable factors, including obesity, physical inactivity, poor diet, and smoking, may result in increased cardiovascular disease risk among breast cancer survivors. A body mass index greater than 25 kg/m^2 is associated with increased cardiovascular disease risk. 2 After diagnosis, many breast cancer survivors gain weight and report higher body mass index as a result of cancer treatment and/or sedentary lifestyles. 33 Obese breast cancer survivors are at higher cardiovascular disease risk compared with survivors with a normal body mass index. 27 Breast cancer survivors can lose weight and decrease body mass index through exercise, and physical activity has been shown to be protective against developing increased cardiovascular disease risk. 32 Yet, in a study exploring cardiorespiratory fitness, the investigators found that breast cancer survivors had a low cardiorespiratory fitness level that may impair the ability to exercise. 32 Thus, breast cancer survivors who are unable to exercise or lack the motivation to do so may be physically inactive, resulting in increased cardiovascular disease risk. Dyslipidemia, which may result from poor diets high in cholesterol, was identified as a risk factor for cardiovascular diseases and is prevalent among cancer survivors. 18 Smoking was associated with decreases in heart function leading to an increased cardiovascular disease risk in breast cancer survivors treated with chemotherapy. 37 In combination with physical inactivity, obesity, and poor diet, smoking may lead to increased cardiovascular disease risk, particularly atherosclerosis and diminished function of blood vessels. 41 Cancer survivors were also more likely to be smokers 30,34 and less likely to be counseled on diet by health care providers. 30 Therefore, modifiable risk factors and poor lifestyle choices can ultimately contribute to an increased cardiovascular disease risk among breast cancer survivors.
Nonmodifiable factors
Nonmodifiable personal characteristics such as age and race can contribute to an increased cardiovascular disease risk among breast cancer survivors. Several studies demonstrated that as age increases, cardiovascular disease risk increases regardless of a previous breast cancer diagnosis. 21,23,25 Older age among breast cancer survivors increased the risk of early onset of cardiovascular diseases. 4,8,29,42 Additional studies indicated that African American breast cancer survivors were at an increased cardiovascular disease risk compared to other races. 23,31,36 Having a first-degree family history of heart disease also increased the cardiovascular disease risk. 43 None of the reviewed articles explored family history as a cardiovascular disease risk factor. Figure 2 demonstrates the relationship between the concept and its attributes. Collectively, these studies support that the attributes of cancer treatments and modifiable and nonmodifiable risk factors contribute to the concept of cardiovascular disease risk among breast cancer survivors. Although family history was not explored in the selected articles, it is a known risk factor for cardiovascular diseases and is included in Figure 2.
Characteristics of the concept
Characteristics that describe the concept include the following: 1) providing surrogate terms and related concepts and 2) describing antecedents and consequences of the concept. 12 The identified characteristics are outlined in turn.
Surrogate terms and related concepts
Surrogate terms are used interchangeably to express the concept. 12 The term "cardiovascular disease risk" was not used consistently throughout the reviewed literature. The terms "cardiotoxicity", "cardiovascular disease development", and "incidences of cardiovascular disease" were used instead of "cardiovascular disease risk". The term "breast cancer survivors" was used consistently, with several instances of "patient" instead of "survivors". Related concepts are similar to the concept but do not possess the same attributes. 12 Since modifiable and nonmodifiable risk factors are attributes of both cardiovascular disease risk in the general population and the breast cancer population, "cardiovascular disease risk in the general population" is a related concept that shares similar but not all of the same attributes.
Antecedents and consequences
Antecedents are events leading to the concept of interest. 12 The antecedent to cardiovascular disease risk among breast cancer survivors is breast cancer diagnosis, because patients become "survivors" at diagnosis. 44 Additionally, consequences are events that follow as a result of the concept's occurrence. 12 Breast cancer survivors may develop cardiovascular diseases as a result of increased cardiovascular disease risk. Thus, the consequence of this concept is the development of cardiovascular diseases.
Exemplar
For this concept analysis, the following exemplar is presented. Mrs. Jones is a 65-year-old African American breast cancer survivor. She was diagnosed with breast cancer 12 years ago, and prior to her diagnosis she had no family history of cardiovascular diseases. Yet, she was treated with anthracyclines and taxanes, in addition to radiation to the left breast. She does not exercise, eats a diet high in cholesterol, has a body mass index of 30 kg/m 2 , and has smoked one pack per day for the past 30 years. Her older age, race, physical inactivity, poor diet, obesity, smoking history, and cancer treatment contribute to an increased cardiovascular disease risk.
Hypotheses and implications
The final step in Rodgers' evolutionary concept analysis method is to identify hypotheses and implications. An example of a hypothesis is "breast cancer survivors have a higher cardiovascular disease risk as compared to the general population". Implications for future development include the following: 1) health care providers may need to implement cardiovascular disease risk monitoring at breast cancer diagnosis, during treatment, and regularly throughout survivorship, 2) breast cancer survivors should be informed of their individual cardiovascular disease risk and encouraged to implement healthy lifestyle choices, and 3) health care providers may consider use of validated risk assessment tools to identify patients at increased cardiovascular disease risk.
Discussion
This concept analysis identified several risk factors that contribute to an increased cardiovascular disease risk among breast cancer survivors. While several studies explored factors that increase cardiovascular disease risk, there were no reported studies exploring the cumulative effect of cancer treatment and modifiable and nonmodifiable risk factors. Many studies used an epidemiological design and did not involve direct contact with breast cancer survivors. Instead, most studies were retrospective and from secondary sources including medical records, cancer registries, death indexes, and large databases. Investigators were unable to select specific risk factors to explore. Additionally, family history is a known risk factor for cardiovascular disease risk but was not explored in any of the selected studies. A potential reason may be that family history data were not collected. Implications for future research include developing large databases for breast cancer survivors that include all cardiovascular disease risk factors or adding modifiable and nonmodifiable risk factors to existing cancer registries. Essentially, there is a need to conduct prospective studies exploring cardiovascular disease risk among breast cancer survivors. Studies suggest that cancer survivors, who were at increased cardiovascular disease risk, were not educated on healthy lifestyles to reduce the risk. 30,31 In particular, no study explored strategies to increase the knowledge of cardiovascular disease risk among breast cancer survivors. Using risk prediction models as a mode to identify and teach breast cancer survivors about cardiovascular disease risk may lead to positive changes in modifiable risk factors.
As breast cancer survivors are living longer, their risk of developing cardiovascular diseases increases. Importantly, preexisting modifiable and nonmodifiable risk factors are coupled with additional risk from breast cancer treatment. Thus, common side effects of breast cancer treatment, such as weight gain and decreased physical activity, can increase cardiovascular disease risk among these women. The majority of breast cancer survivors are over the age of 60 years. 45 As breast cancer survivors get older, they may experience higher cardiovascular disease risk. Rodgers' evolutionary concept analysis method allows for refinement of concepts. A future concept analysis may be necessary as cancer treatment evolves, survival rates continue to rise, breast cancer survivors get older, and the prevalence of cardiovascular disease increases.
Implementation of survivorship care plans is a method to increase awareness of cardiovascular disease risk among breast cancer survivors. The Institute of Medicine recommends individualized care plans for cancer survivors to summarize the effects and long-term management of cancer. 46 Using survivorship care plans to promote individualized cardiovascular disease risk follow-up (stratified by the cancer treatment received and personal modifiable and nonmodifiable risk factors) may stimulate healthy lifestyle choices and provide resources to achieve reduced cardiovascular disease risk. Additionally, survivorship care plans can incorporate recommendations for monitoring and screening of cardiovascular diseases.
Limitations
Limitations of this concept analysis include the exclusion of secondary articles and of literature not written in English. Secondary articles, such as meta-analyses, were excluded so that the analysis drew on primary data. Reviewing secondary articles may have been beneficial to increase understanding of the concept's attributes. Articles not written in English may have provided further insight on the concept. Another limitation was that many studies included multiple cancer types and were not specific to breast cancer survivors.
Conclusion
Cardiovascular disease risk among breast cancer survivors comprises three attributes: 1) cancer treatment risk factors, 2) modifiable risk factors, and 3) nonmodifiable risk factors. Survivors are likely to have complex health needs, including the potential for development of long-term side effects from cancer treatment. Without stratified screening and monitoring of cardiovascular disease risk, survivors may unknowingly live with asymptomatic cardiovascular diseases. Nurses can educate breast cancer survivors about cardiovascular disease risk and empower patients to implement healthy lifestyle changes.
Several gaps in the literature were identified. An interdisciplinary team of researchers and clinicians should consider collaborating to conduct prospective studies of cardiovascular disease risk among breast cancer survivors and address the paucity of literature in this area. Further, health care providers may consider use of survivorship care plans to tailor cardiovascular disease risk management. Lifestyle changes should be encouraged to reduce overall cardiovascular disease risk. Interdisciplinary team members must work in collaboration with breast cancer survivors to mitigate cardiovascular disease risk, which may potentially extend and improve quality of life.
Figure 2
Figure 2 Diagram of cardiovascular disease risk among breast cancer survivors.
Piezo-to-Piezo (P2P) Conversion: Simultaneous $\beta$-Phase Crystallization and Poling of Ultrathin, Transparent and Freestanding Homopolymer PVDF Films via MHz-Order Nanoelectromechanical Vibration
An unconventional yet facile low-energy method for uniquely synthesizing neat poly(vinylidene fluoride) (PVDF) films for energy harvesting applications through piezo-to-piezo (P2P) conversion is reported. In this novel concept, the nanoelectromechanical energy from a piezoelectric substrate is directly coupled into another polarizable material (i.e., PVDF) during its crystallization to produce a micron-thick film that not only exhibits strong piezoelectricity, but is also freestanding and optically transparent - properties ideal for its use for energy harvesting, but which are difficult to achieve through conventional synthesis routes. In particular, we show that the unprecedented acceleration ($\mathcal{O}$($10^{8}$ m s$^{-2}$)) associated with the nanoelectromechanical vibration in the form of surface reflected bulk waves (SRBWs) facilitates preferentially-oriented nucleation of the ferroelectric PVDF $\beta$-phase, while simultaneously aligning its dipoles to pole the material through the SRBW's intense native evanescent electric field ($\mathcal{O}$($10^{8}$ V m$^{-1}$)). The resultant neat (additive-free) homopolymer film synthesized through this low voltage method requiring only $\mathcal{O}$(10 V) - orders-of-magnitude lower than the energy-intensive conventional poling methods utilising high kV electric potentials - is shown to possess a 76% higher macroscale piezoelectric charge coefficient ($d_{33}$), together with a similar improvement in its power generation output, when compared to the gold-standard commercially-poled PVDF films of similar thicknesses.
Introduction
[6][7] Their use in practice is, however, limited by their rigidity and brittleness, and the need for solid-state sintering during their fabrication. 8,9 Organic piezoelectric polymers, such as poly(vinylidene fluoride) (PVDF), on the other hand, are thin, lightweight, flexible, durable and biocompatible. 10,11 [22] Nevertheless, despite PVDF having superior piezoelectric properties among the range of organic polymers known to date, 10,23,24 the ability to produce freestanding, neat (additive-free), transparent and ultrathin homopolymer PVDF films with high piezoelectricity remains challenging. 25,26 In general, the ability to produce highly piezoelectric PVDF relies on (1) preferential crystal orientation of its ferroelectric β-phase, and (2) subsequent β-phase dipole alignment (i.e., poling). To achieve the first, a PVDF film, which is usually composed predominantly of the non-ferroelectric α-phase in its natural state, is traditionally heated (to approximately 80 °C) and concurrently stretched, either uni- or bi-axially, often at high strain. [29][30][31][32] Given the propensity for the films to crack and tear, however, such mechanical methods for α- to β-phase conversion tend to be limited to relatively thick films (> 10 µm). 27,33 [35][36] Compared to homopolymers, however, copolymers are significantly more expensive (the cost of PVDF-TrFE, for example, is more than tenfold that of PVDF), 37 while the addition of nanofillers leads to non-uniform distribution in the β-phase crystallization, in which the β-phase is typically co-localised in the filler regions. 17 For the latter, i.e., β-phase poling, two broad strategies have been employed to pole the film: the application of a large electric field that surpasses the coercive field of the neat (additive-free) homopolymer PVDF film, or the exploitation of the self-poling effect of nanofillers added to the film.
38,39 On the one hand, the use of high electric fields ranging from 10 7 -10 8 V m −1 (through methods such as corona, electrospinning, or electrode/contact poling) 33,40,41 has long been shown to successfully result in polarized neat PVDF. Such approaches, however, not only necessitate the consumption of large amounts of energy, costly infrastructure and high kV sources, 42,43 but can also often lead to dielectric breakdown, particularly for thin films. 44,45 [48] From this perspective, a facile, one-step, low-energy method for synthesizing thin freestanding neat (additive-free) homopolymer PVDF films possessing a large fraction of molecularly-oriented and poled ferroelectric β-phase that leads to high levels in its piezoelectricity remains unrealised. In this work, we report such a method for the first time; a piezo-to-piezo (P2P) conversion mechanism in which a piezoelectric substrate (in this case, lithium niobate; LiNbO 3 ) is harnessed to facilitate in situ simultaneous crystallization and poling of a polarizable, albeit initially weakly piezoelectric, homopolymer (in this case, PVDF) to produce a highly piezoelectric ultrathin (µm-order) freestanding film. More specifically, we show that the unique nanoelectromechanical coupling 49 associated with hybrid surface and bulk acoustic waves (i.e., surface reflected bulk waves, or SRBWs) 50 generated on the LiNbO 3 substrate gives rise to an extraordinary surface acceleration (O(10 8 m s −2 )) and electric field (O(10 8 V m −1 )) 51 capable of simultaneously inducing-in a single step-both the mechanical stimulation required to facilitate formation of the β-phase together with the electric field necessary to induce alignment of its dipoles, concurrently during its crystallization. The result is the production of an ultrathin film with high β-phase fraction (F β = 74%), surpassing that for both the solution-cast control (F β = 42%) and a gold-standard commercially-poled sample (F β = 58%), and yielding a
76% increase in the macroscale piezoelectric charge coefficient (d 33 ) compared with a commercially-poled film of comparable thickness. As such, the P2P SRBW platform removes the need for the cumbersome, lengthy and energy-intensive processes associated with conventional post-synthesis phase conversion and electrical poling, constituting an efficient, novel and green alternative for the synthesis of high performance freestanding PVDF films.
Results and Discussion
The experimental setup and protocol for the synthesis and characterization of the PVDF films are illustrated in Fig. 1(a) and described in the Methods section, respectively. In particular, we compare PVDF films synthesized by drop casting precursor solutions of PVDF powder dissolved in acetone and N,N-dimethylformamide (DMF) onto the LiNbO 3 substrate (Fig. 1(a)), both in the absence of the SRBW excitation but under heating to 100 °C for 30 min as the control (Fig. 1(c)), and in the presence of the SRBW excitation (Fig. 1(b)) and hence the full electromechanical coupling (EM-SRBW; Fig. 1(d)) at a power of 15 dBm for 20 min and subsequently 30 dBm for a further 20 min. To elucidate the role of the electromechanical coupling, and more specifically the mechanical and electric fields separately, we repeated the synthesis in the presence of the SRBW excitation, but without its native electric field, such that the crystallization of the precursor solutions into the PVDF film occurred while being subjected solely to the mechanical vibration (M-SRBW; Fig. 1(e)). This was achieved by depositing a thin gold shielding layer atop the substrate on which the precursor solutions were deposited in order to screen out the evanescent electric field. In all of the cases, the precursor solutions were observed to crystallize into ultrathin films during the solvent evaporation process, whose µm-order thickness was controlled primarily by adjusting the volume of the precursor solution deposited. Considerable differences were, however, observed with regards to the molecular orientation and dipole alignment of the phases, as well as the physical properties (in particular, the piezoelectricity as well as the optical transmittance (Fig. 1(f))), between the control and SRBW-synthesized films.
Figure 1: (a) Schematic of the SRBW device employed for simultaneous synthesis and poling of ultrathin PVDF films. The electromechanical vibration in the form of SRBWs is generated by applying an RF signal in the form of an oscillating electrical signal at the resonant frequency of the device (10 MHz) to a pair of interdigitated transducers (IDTs) photolithographically patterned on the lithium niobate (LiNbO 3 ) substrate. The SRBWs that are launched from each IDT propagate along and through the substrate in opposing directions. As a consequence, they superimpose to form a standing wave beneath the region where the PVDF precursor solution is deposited and where the PVDF film forms, as indicated by (b) the laser Doppler vibrometer scan of the substrate vibration acceleration in this region. Compared to (c) the control experiment in which PVDF is conventionally synthesized in the absence of the SRBW forcing, yielding a film that mainly comprised a disordered non-polar α-phase, the electromechanical coupling associated with the SRBW (EM-SRBW) resulted in (d) a film with an oriented polar β-phase. If a gold film is patterned on the LiNbO 3 substrate to short out the electric field, the synthesis of the PVDF film in the presence of a predominantly mechanical stress field (M-SRBW) only led to (e) a primarily disordered β-phase film. Doing so allows us to isolate the crucial role of the evanescent SRBW electric field during the synthesis and poling. (f) The SRBW-synthesized PVDF film (EM-SRBW) can be seen to be more optically transparent than the control PVDF film of similar thickness.

We first compare the spectra of the films synthesized in the absence (mechanical coupling only; M-SRBW) and presence (full electromechanical coupling; EM-SRBW) of the SRBW's evanescent electric field.
It can be seen that all of the synthesized films showed the characteristic peaks associated with the α- and β-phases for PVDF at 794 and 839 cm −1 , respectively, that have been suggested to result from the rocking of their corresponding CH 2 bonds. 52,53 Nevertheless, we note the prominence of the β-phase peak and suppression of the α-phase peak, together with the appearance of the peak associated with the electroactive γ-phase at 812 cm −1 , for both SRBW-synthesized samples. More specifically, the relative intensity ratio of the β- to α-phases (I β /I α ) can be seen to increase from 0.31 for the control PVDF film to 1.86 and 1.82 for the EM-SRBW and M-SRBW films, respectively, indicating that exposure of the PVDF film to the SRBW during its synthesis leads to suppression of the non-ferroelectric α-phase and preferential crystallization of the ferroelectric β-phase. We note that these values surpass even that for the state-of-the-art 3D-printed PVDF-MoS 2 (I β /I α ≈ 0.95) 17 and are comparable to those reported for Ti 3 C 2 T x /PVDF-TrFE composites (I β /I α = 2.5). 13 Moreover, Raman mapping (Fig. S??) over a relatively large area (100 µm × 100 µm) further demonstrates the homogeneity in the distribution of the β-phase across the films subjected to the SRBW exposure, in contrast to the non-uniform distributions observed for PVDF nanocomposites that incorporate fillers to induce β-phase crystallization, in which the β-phase is typically co-localised in the filler regions. 17 We conducted a brief parametric study to obtain the optimum SRBW input power that results in the largest β-phase molecular fraction (Fig. S??), demonstrating that this is an extremely efficient process to promote β-phase formation even with the minimal powers used.
[55][56] The Beer-Lambert law (Eq. 1) then allows an estimation of the proportion of the β-phase content in the samples: 57,58

F β = A β / (1.26 A α + A β ), (1)

wherein A α and A β indicate the intensities of the FTIR peaks at 763 and 841 cm −1 , respectively; the factor of 1.26 for the α-phase compensates for the difference in the absorption coefficients, which are 6.1 × 10 4 and 7.7 × 10 4 cm 2 mol −1 for the α- and β-phases, respectively. 59 Both the EM-SRBW and M-SRBW films were found to possess F β values of 74.4% and 72.7%, respectively, which is equivalent to an approximate 76% increase in β-phase fraction over the control (F β = 42.1%), and a 28% increase over the gold-standard commercially-poled sample of comparable thickness (F β = 58%; Fig. S??).
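The Beer-Lambert estimate can be sketched numerically as follows; the absorbance values below are hypothetical, chosen only to illustrate the relation, not taken from the reported measurements:

```python
def beta_fraction(a_alpha: float, a_beta: float) -> float:
    """Beta-phase fraction F_beta from FTIR absorbances (Beer-Lambert estimate).

    The factor 1.26 is the ratio of the beta- to alpha-phase absorption
    coefficients (7.7e4 / 6.1e4 cm^2 mol^-1).
    """
    return a_beta / (1.26 * a_alpha + a_beta)

# Hypothetical absorbances at 763 cm^-1 (alpha) and 841 cm^-1 (beta)
f_beta = beta_fraction(0.20, 0.70)
print(f"{100 * f_beta:.1f}%")  # → 73.5%
```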
The effectiveness of the mechanical stresses generated along the piezoelectric substrate by the SRBW on the PVDF sample in selectively inducing the β-phase, rather than the typical α-phase, is further corroborated by the powder X-ray diffraction (XRD) spectra. From Fig. 2(c), it can be seen that the diffraction peaks of the control film, located at 17.7°, 19.9° and 26.6° and corresponding to the (100), (110) and (021) reflections of the monoclinic α-phase crystal, respectively, appear to be suppressed in both the SRBW-synthesized films. 60,61 [66] Both the M-SRBW and EM-SRBW PVDF films furthermore display broader and larger areas (∆H m = 58-58.5 J g −1 ) below the melting peak (T m = 170 °C) in their respective differential scanning calorimetry (DSC) curves when compared to the narrower area of the control film (∆H m = 42.3 J g −1 ), as seen in Fig. 2(d). This is a consequence of the β-phase having a lower-temperature state, 53,65 driven by stronger polar interactions within its all-trans (TTT) planar zigzag conformations. 67 Besides highlighting the prevalence of the β-phase in the SRBW-synthesized films, the DSC results also show an appreciable increase in the overall crystallinity ( χ c ) of these films, which can be calculated from

χ c = (∆H m / ∆H 0 m ) × 100%,

where ∆H m and ∆H 0 m denote the enthalpy of melting for the PVDF film that is synthesized and that for purely crystalline PVDF (∆H 0 m = 104.5 J g −1 ), 68,69 respectively. The calculated χ c values for the M-SRBW and EM-SRBW synthesized films were 59.9% and 60.5%, respectively, which are considerably higher than that for the control film ( χ c = 44%).
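The crystallinity relation reduces to a one-line computation; the melting enthalpy used below is a hypothetical value for illustration, not one of the reported DSC measurements:

```python
PURE_PVDF_MELT_ENTHALPY = 104.5  # J/g, enthalpy of melting of 100% crystalline PVDF

def crystallinity(dh_melt: float) -> float:
    """Degree of crystallinity (%) from the measured melting enthalpy (J/g)."""
    return 100.0 * dh_melt / PURE_PVDF_MELT_ENTHALPY

print(crystallinity(52.25))  # hypothetical DSC value → 50.0
```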
The characterization of the samples, as summarised in Table 1, collectively suggests that the SRBW-synthesized films are predominantly composed of the β-phase, compared to the α-phase dominant control films. This can further be seen from the morphology of the films, captured by the microscopy images in Fig. 3. In particular, we observe the SRBW-synthesized films (Fig. 3(b,c,e,f,h,i)) to be devoid of the typical well-defined and regular ringed spherulitic structures that appear as Maltese extinction crosses under polarized optical microscopy, characteristic of the α-phase, [70][71][72][73] which are prominent in the control films (Fig. 3(a,d,g)).
Table 1: Degree of crystallinity ( χ c ) from DSC measurements, β-phase (F β ) and α-phase (F α ) compositions from FTIR spectroscopy, and the relative β- to α-phase fractions I β /I α obtained from Raman spectroscopy for each of the PVDF films synthesized.

The ability of the SRBW forcing to suppress α-phase growth and to allow preferential nucleation of the β-phase during crystallization of the polymer can be understood from the unique nanomechanical interactions arising at the solid-liquid interface of the piezoelectric LiNbO 3 substrate, along which the SRBW propagates. 49 Despite being only several nanometers in amplitude, the high frequency of the SRBW (10 MHz) yields surface accelerations that are exceptionally large (O(10 8 m s −2 )). It is thus likely that the local dynamical stress variations-on the order of several MPa 74 -that the SRBW imparts on the PVDF film as it crystallizes act to introduce large numbers of nucleation sites throughout the film, in a manner akin to sonication-induced nucleation in the sonocrystallization of polymers (such as poly-3-hexylthiophene), 75,76 although this has never been demonstrated for PVDF.
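The quoted order of magnitude for the surface acceleration follows from treating the surface as a harmonic oscillation, a = (2πf)²ξ, with frequency f and displacement amplitude ξ. The amplitude below is an assumed value (tens of nanometers) used only to reproduce the quoted order; the actual amplitude depends on the input power:

```python
import math

f = 10e6    # SRBW frequency (Hz)
xi = 25e-9  # assumed displacement amplitude (m); not a measurement from this work

# Peak acceleration of a harmonic surface oscillation: a = (2*pi*f)^2 * xi
a = (2 * math.pi * f) ** 2 * xi
print(f"{a:.2e} m/s^2")  # ~1e8 m/s^2, consistent with the quoted O(10^8 m s^-2)
```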
These local nucleation sites, in turn, act to disrupt the stable growth of α-phase spherulites, in a manner similar to the way the electrostatic interactions arising from the presence of nanofillers hinder α-phase growth and promote proliferation of the β-phase (we note, though, that the typically non-uniform distribution of the nanofillers tends to result in β-phases that are likewise non-uniformly distributed). 36,77,78 The SRBW platform thereby effectively allows the formation of the irregular textural variations comprising considerably smaller protrusions associated with β-phase PVDF seen in Fig. 3(b,c,e,f); this transformation and hindered crystalline structure is further evidenced by the low birefringence observed in the polarized light microscopy images in Fig. 3(h,i), which is indicative of β-phase PVDF. 79 The morphological differences between the films were also observed to influence their macroscopic properties. A side-by-side visual comparison of the control and EM-SRBW films in Fig. 1(f), together with their total optical transmittance across the visible spectrum (380-700 nm; Fig. S??), shows that the EM-SRBW films possess higher transparency (approximately 90% transmittance) than the control films, and a similar transmittance to that for commercially-acquired films of similar thicknesses (6 µm). This can be attributed to the higher β-phase composition, particularly given that the spherulitic α-phase superstructure has been noted to contribute to the opacity of the film. 80
Local Polarization Effects: SRBW Evanescent Electric Field Drives Simultaneous Poling
To demonstrate the effect of the evanescent electric field associated with the SRBW nanoelectromechanical coupling on the poling of the β-phase, we now quantify the local piezoelectric response in the synthesized films using piezoelectric force microscopy (PFM). In particular, the central role of the SRBW evanescent electric field can immediately be seen from the PFM phase profile in Fig. 4(a,b), which characterises the local polarization direction, i.e., the dipole orientation of the PVDF molecules within the specified area of interest. 81,82 More specifically, we observe a broad phase distribution that characterises predominantly disordered polarization in the control film, which does not appreciably change with the SRBW synthesis in the absence of its evanescent electric field (M-SRBW), wherein the width of the corresponding phase histogram exhibits a similarly broad range. It is only when this electric field is present that there is a significant narrowing of the phase distribution to signify a highly-oriented local polarization direction, 83,84 thereby alluding to the critical role of the SRBW evanescent electric field in simultaneously poling the material concurrently during its synthesis (for comparison, a similarly narrow phase distribution was obtained for a commercially-acquired PVDF film that had been post-synthetically poled (Fig. S??(a,b))).
The PFM amplitude for the various PVDF films that were synthesized is shown in Fig. 4(c), and is directly related to the piezoelectric response in Fig. 4(d), whose slope (with respect to the AC voltage applied between the PFM tip and the local unit cell surface along the PVDF film) quantifies the piezoelectric coefficient (d 33,eff ). It can thus be seen that there is an increase in d 33,eff of approximately 65% when the PVDF is synthesized solely under the SRBW mechanical stress (M-SRBW), from 1.92 pm V −1 for the control to 3.12 pm V −1 , which can be attributed to the greater proportion of β-phase in the sample discussed above, despite it being predominantly disordered, as shown in Fig. 4(a,b). The d 33,eff value, nevertheless, further increases to 6.07 pm V −1 for the EM-SRBW film when the full electromechanical coupling associated with the SRBW is present-a significant 216% increase over the control film, which we conclude from Fig. 4(a,b) to arise due to the poling effect of the SRBW evanescent electric field in aligning the dipoles to give rise to a predominantly ordered β-phase simultaneously during its crystallization within the film. This value is superior, not only to that obtained with the commercially-poled film of comparable thickness (5.26 pm V −1 ; Fig. S??(c)), but also to the d 33,eff values which have been reported for PVDF films poled via electrospinning involving high kV potentials (5.56 pm V −1 ), 65 and that for composite Ti 3 C 2 T x /PVDF-TrFE employing considerably more costly MXene nanofillers (5.11 pm V −1 ). 53 The role of the evanescent electric field component of the SRBW in locally aligning the dipoles of the β-phase induced by its mechanical component is not unexpected in light of its intensity 51 -on the order of 10 8 V m −1 -compared with typical electric field intensities that have been reported for poling PVDF (3 × 10 7 -65 × 10 7 V m −1 ).
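The extraction of d 33,eff as the slope of the PFM amplitude versus applied AC voltage can be sketched with a simple linear fit; the data below are synthetic, generated from the reported EM-SRBW coefficient rather than taken from the actual measurements:

```python
import numpy as np

v_ac = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # applied tip AC voltage (V)
amplitude = 6.07 * v_ac                     # synthetic PFM amplitude (pm)

# d33,eff is the slope of the amplitude-voltage response
d33_eff, intercept = np.polyfit(v_ac, amplitude, 1)
print(f"d33,eff = {d33_eff:.2f} pm/V")
```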
15,33,85 Such molecular orientation and dipole rearrangement under the SRBW is not without precedent: the SRBW, for example, has previously been shown to induce similar dipole alignment during the crystallization of MOFs to result in highly-oriented structures. 86 The P2P EM-SRBW platform is unique (compared to post-synthesis poling methods utilising high electric fields) given its ability not only to pole the PVDF film simultaneously during crystallization, but to also require substantially lower applied voltages (≈ 1-10 V) compared to the kV voltages typical of electrical poling techniques. 42,43 The ability of the SRBW to pole the PVDF film at these substantially lower applied voltages is possible because of the electric field confinement over the nanometer lengthscales associated with the SRBW displacement amplitude. 51
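The plausibility of poling at such low applied voltages can be seen from the simple field estimate E ≈ V/ℓ, where ℓ is the lengthscale over which the evanescent field is confined; the voltage and nanometer-scale confinement length below are assumed values for illustration only:

```python
voltage = 10.0   # applied RF amplitude (V), upper end of the ~1-10 V range
length = 100e-9  # assumed nanometer-scale confinement lengthscale (m)

# Confining even a modest voltage over nanometer lengthscales yields a huge field
E = voltage / length
print(f"E ~ {E:.1e} V/m")  # ~1e8 V/m, matching the quoted evanescent field strength
```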
Macropiezoelectricity Measurements
To evaluate the device-scale macroscopic piezoelectric properties of the control and SRBW-synthesized films in comparison to those for commercially-poled PVDF films of similar thicknesses, we constructed piezoelectric nanogenerator (PENG) devices for each of the films by coating them with Cr/Au electrodes and encasing them with insulating polyimide tape (Fig. S??), full details for which are given in the Methods section. The power output for the EM-SRBW PENG devices for different thickness films when subjected to a 1 Hz sinusoidal cyclic in-contact compressive force (5-15 N, with 10 N preloading to minimise artefacts associated with contact electrification (i.e., contact separation, charge induction and lateral-sliding triboelectric modes) 13 that can often inflate the piezoelectric output) 87,88 is shown in Fig. 5. The optimal power output at a film thickness of 6 µm (Fig. 5(a)) is likely due to reduced dipole alignment in thinner (< 6 µm) PVDF films as a consequence of the free surface instabilities arising as a result of the acoustic radiation pressure imposed by the underlying SRBW forcing on the air-liquid interface when the initial film height of the liquid that forms following the spreading of the precursor solution as it is dispensed onto the LiNbO 3 substrate, prior to its crystallization, is below 100 µm. Weaker dipole alignment is also the reason for the reduced power output in thicker (> 6 µm) PVDF films, this instead being due to the decrease in penetration of the SRBW evanescent electric field into the initial liquid film when its thickness exceeds the SRBW wavelength (approximately 400 µm).

These synergistic effects are observed to yield freestanding neat micron-thick PVDF films with high β-phase fractions and which are simultaneously poled-requisite characteristics for a material with strong piezoelectric properties-without the need for additives (e.g., nanofillers) or energy-intensive processes. By fabricating piezoelectric nanogenerator
(PENG) devices from these films, we show the films synthesized with this method possess superior properties in terms of optical transparency and piezoelectric charge coefficient (d 33 ), the latter translating into > 70% improvement in power generation as an energy harvesting device compared to gold-standard commercially-poled films of similar thicknesses.
Altogether, this low-voltage, low-cost and green approach circumvents conventional energy-intensive processes for the preparation of highly piezoelectric PVDF films, in addition to eliminating the need for the costly nanofiller materials used in more recent approaches.
PVDF precursor solution
To prepare the PVDF precursor solution, we dissolved 3 wt.% PVDF powder (M m = 238,000, M w = 573,000 g mol
PVDF film synthesis
Control films were synthesized by drop casting 0.04-0.1 ml (depending on the requisite thickness) of the aforementioned PVDF precursor solution onto the LiNbO 3 substrate, but without SRBW excitation, and heating to 100 °C for 30 min. For the SRBW synthesis, the same volume was pipetted onto the LiNbO 3 substrate and the device actuated initially at a power of 15 dBm for 20 min and subsequently at 30 dBm for a further 20 min, yielding a film that could then be peeled off the substrate. UV-Vis spectra (Apollo; CRAIC Technologies, San Dimas, CA, USA) were obtained in the visible wavelength range (380-700 nm) at a step size of 0.8 nm.
Film characterization
Raman spectroscopy (LabRAM HR Evolution; Horiba Scientific SAS, Palaiseau, France) was conducted at 532 nm excitation (600-1000 cm −1 acquisition range) with a 100× objective and an 1800 g mm −1 grating. All spectra were calibrated against the 520 cm −1 band of a silicon wafer.
Fourier-transform infrared (FTIR) transmittance spectra (Spectrum One; PerkinElmer Inc., Waltham, MA, USA) were captured using a compression technique at room temperature across a wide range of 500-4000 cm −1 over 64 scans and at high resolution (4 cm −1 ).
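A common way to quantify the relative β-phase content from such FTIR spectra, though not necessarily the exact procedure used in this work, is the Gregorio-Cestari relation based on the absorbances of the α- and β-characteristic bands at 764 and 840 cm −1 ; the band positions and absorption coefficients below are the commonly quoted literature values:

```python
# Estimate the beta-phase fraction F(beta) of a PVDF film from FTIR
# absorbances, using the widely quoted Gregorio-Cestari relation:
#   F(beta) = A_beta / ((K_beta / K_alpha) * A_alpha + A_beta)
# with K_beta / K_alpha ~ 1.26 for the 840 and 764 cm^-1 bands.
# All numerical values here are assumed literature constants, not
# parameters reported in this work.

def beta_phase_fraction(a_alpha: float, a_beta: float) -> float:
    """Return F(beta) from the absorbances of the 764 and 840 cm^-1 bands."""
    return a_beta / (1.26 * a_alpha + a_beta)

if __name__ == "__main__":
    # Hypothetical absorbances read off a baseline-corrected spectrum.
    print(f"F(beta) = {beta_phase_fraction(0.10, 0.45):.2f}")
```

A fully amorphous-free reading of the ratio still only gives the β-fraction relative to the crystalline α + β content, which is why it is typically reported alongside the DSC crystallinity.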
Differential scanning calorimetry (DSC) (Pyris 1; PerkinElmer, Pontyclun, UK) was employed to determine the crystallinity of the PVDF films. 5 mg samples were placed in a metal pan and heated to 200 °C at a ramp rate of 10 °C min −1 .
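The degree of crystallinity is typically obtained from the DSC melting endotherm by normalizing the measured melting enthalpy to that of hypothetically 100% crystalline PVDF, commonly taken as about 104.7 J g −1 in the literature; a minimal sketch under that assumption:

```python
# Degree of crystallinity from a DSC melting endotherm, assuming the
# commonly quoted melting enthalpy of 104.7 J/g for fully crystalline PVDF.
# This reference value is a literature assumption, not taken from this work.

DELTA_H_100 = 104.7  # J/g, hypothetically 100% crystalline PVDF

def crystallinity(delta_h_m: float, reference: float = DELTA_H_100) -> float:
    """Return percent crystallinity from the measured melting enthalpy (J/g)."""
    return 100.0 * delta_h_m / reference

if __name__ == "__main__":
    # Hypothetical measured melting enthalpy of 52.4 J/g.
    print(f"{crystallinity(52.4):.1f} % crystalline")
```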
Scanning electron microscopy (SEM) (Nova NanoSEM 450, FEI, Hillsboro, OR, USA) imaging was conducted under a 30 kV electron beam with a spot size of 3.5.
Polarized optical micrographs (POM) were acquired using an optical microscope (2500; Leica Microsystems GmbH, Wetzlar, Germany) with a high-definition digital camera (DFC290; Leica Microsystems GmbH, Wetzlar, Germany).

Local polarization measurements

A scanning probe microscope (… Ltd., Preston VIC, Australia) was used for the DART-PFM measurements. The PVDF sample was adhered using carbon tape to a conductive Au/Cr substrate, which was grounded to the PFM stage prior to the measurement; scans were conducted at a frequency of 1 Hz over at least a 5 µm × 5 µm area at 256 pixels per line. An AC driving amplitude was swept from 1 to 5 V whilst the tip was in contact with the PVDF sample to calculate the quantitative vertical piezoelectric coefficient of the PVDF films, referred to as the effective piezoelectric constant (d 33,eff ). Unlike DART hysteresis PFM, which frequently suffers from electrostatic interactions during polarization switching, particularly in the ON-field state, [92][93][94][95] our approach utilised a DC voltage bias to nullify the surface potential of the PVDF and thus reduce the impact of electrostatic effects during DART scanning PFM. 96 To accurately mitigate and correct for these electrostatic effects, we conducted local Kelvin probe force microscopy scans to determine the local surface potential, and applied an opposing DC voltage prior to the PFM measurements to offset it. 96 In addition, we conducted pre-measurements of the d 33,eff of periodically-polarized LiNbO 3 (PPLN; Asylum Research; Oxford Instruments, Santa Barbara, CA, USA) to further ensure the reliability and accuracy of the measurement.

From the voltage measurements described under the macroscale piezoelectricity quantification, the power density output of the device can then be calculated from

P D = V pp 2 /(R t A e ),

wherein V pp is the peak-to-peak voltage, R the load resistance, t the film thickness, and A e the effective surface area of the cylindrical impactor in contact with the surface of the device (0.126 cm 2 ). To delineate between the individual contributions arising from triboelectric and piezoelectric effects, we also analysed the data in the frequency domain using fast Fourier transforms (FFT). 97 To demonstrate the ability of the PENG devices to convert finger-tapping vibrations into electrical energy via their piezoelectric effect, a bridge rectifier circuit was used to convert the generated AC voltage into a DC signal, enabling the charging of a capacitor, which was recorded using an oscilloscope.
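A short numerical sketch of this power-density calculation; the volumetric form P D = V pp 2 /(R t A e ) is assumed here from the symbols defined in the text, and the sample values are hypothetical:

```python
# Volumetric power density of a PENG device from its measured peak-to-peak
# voltage across a known load resistor. The functional form
#   P_D = V_pp^2 / (R * t * A_e)
# is assumed from the symbols defined in the text; the sample numbers below
# are hypothetical, not measured values from this work.

def power_density(v_pp: float, r_load: float, thickness_cm: float,
                  area_cm2: float = 0.126) -> float:
    """Return P_D in W/cm^3 for V_pp in volts, R in ohms, t in cm, A_e in cm^2."""
    return v_pp ** 2 / (r_load * thickness_cm * area_cm2)

if __name__ == "__main__":
    # e.g. 2 V peak-to-peak across a 1 MOhm load for a 6 um (6e-4 cm) film
    p = power_density(v_pp=2.0, r_load=1e6, thickness_cm=6e-4)
    print(f"P_D = {p * 1e6:.1f} uW/cm^3")
```

Dropping the thickness from the denominator would give an areal rather than volumetric density; only the form implied by the listed symbols is assumed here.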
Figure 2 :

Figure 2(a) shows the Raman spectra for the PVDF films synthesized via solution casting (control) and via SRBW synthesis, both in the absence (pure mechanical stress; M-SRBW)
Figure 4 :
Figure 4: Piezoelectric force microscopy (PFM) (a) phase, (b) phase distribution, and, (c) piezo-response amplitude (at 5 V AC ) for the control (left box) and SRBW-synthesized films, both in the absence (M-SRBW; centre box) and in the presence (EM-SRBW; right box) of the SRBW evanescent electric field.(d) Piezoelectric force response and piezoelectric coefficient (d 33,eff ) for each of the films.The scale bars in (a) denote lengths of 2 µm.
Fig. 5(a) indicates optimum power generation with PVDF films of 6 ± 1 µm thickness. Importantly, to demonstrate that the obtained voltage outputs arose solely as a consequence of the film's piezoelectricity, we conducted fast Fourier transform (FFT) analyses of the time-domain waveforms associated with the input signals. In particular, we note that a spectral bandwidth of 8 Hz, as determined through frequency-domain FFT analysis of signals obtained from 0.11 mm contact-separation measurements related to the film's triboelectric effects (see Fig. S??(a)), reduces to 1 Hz for measurements taken in contact, which are exclusively linked to piezoelectric effects (see Fig. S??(b)) in the EM-SRBW device. Moreover, the bandwidth for in-contact measurements being lower than 2f 0 indicates the absence of substantial triboelectric electrostatic interference mechanisms affecting the overall voltage output. 89
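The bandwidth comparison described above can be sketched numerically: take the FFT of a measured voltage waveform and report the width of the band whose spectral magnitude exceeds a threshold. The half-maximum criterion and the synthetic test waveform below are illustrative assumptions, not the authors' exact analysis:

```python
import numpy as np

# Sketch of a frequency-domain bandwidth estimate for a voltage waveform.
# Bandwidth is defined here (an assumption for illustration) as the span of
# frequencies whose one-sided spectral magnitude exceeds half the peak value.

def spectral_bandwidth(signal: np.ndarray, fs: float) -> float:
    """Width (Hz) of the band where the one-sided spectrum exceeds half its peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    above = freqs[spectrum >= 0.5 * spectrum.max()]
    return float(above.max() - above.min()) if above.size > 1 else 0.0

if __name__ == "__main__":
    fs = 100.0                                       # 10 ms sampling, cf. 0.1 s readout
    t = np.linspace(0.0, 60.0, 6000, endpoint=False)
    v = np.sin(2 * np.pi * 1.0 * t)                  # clean 1 Hz, piezoelectric-like
    print(f"bandwidth ~ {spectral_bandwidth(v, fs):.2f} Hz")
```

A single clean tone collapses to a near-zero bandwidth, while a waveform contaminated by additional (e.g., triboelectric) components spreads over a wider band, mirroring the 1 Hz versus 8 Hz contrast reported in the text.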
Figure 5 :
Figure 5: Macroscale evaluation of control, M-SRBW, EM-SRBW and commercially-poled PENG devices subjected to 1 Hz sinusoidal cyclic in-contact compression forces (5-15 N).Power density (P D ) output (a) for EM-SRBW PENG devices with varying PVDF film thicknesses, and, (b) as compared across different PENG devices with the same film thickness (6 µm); their corresponding open-circuit voltages being shown in (c).(d-f) Voltage and power density output as a function of resistive load for the different PENG devices.(g) Voltage output for the EM-SRBW PENG device (6 µm) over multiple compression cycles at the open-circuit voltage.
3 Conclusion

A novel one-step route is presented that exploits the nanoelectromechanical coupling from one piezoelectric substrate into another, without an intermediary material (P2P), to synthesize PVDF films that possess high levels of β-phase and are simultaneously poled during their crystallization, yielding a material with superior piezoelectricity. In particular, we harness the extraordinary acceleration, on the order of 10 million g's, together with the intense native electric field of nanoelectromechanical vibrations in the form of SRBWs, to simultaneously enable oriented crystallization and align the dipoles during crystallization of the material into an ultrathin (µm-thick) freestanding film. The large mechanical stresses associated with the O(10 8 m s −2 ) SRBW substrate acceleration are shown to hinder growth of the spherulites associated with the non-ferroelectric PVDF α-phase and to facilitate large numbers of nucleation sites for the formation of the ferroelectric β-phase, whose dipoles are concurrently aligned by the O(10 8 V m −1 ) SRBW evanescent electric field.
Powder X-ray diffraction (XRD) (D8 General Area Detector Diffraction System (GADDS); Bruker Pty. Ltd., Preston VIC, Australia) was carried out to determine the phases of the control and SRBW-synthesized PVDF films. The analysis was performed at 40 mA and 40 kV with Cu-Kα radiation (λ = 1.54 Å) over a 20°-30° 2θ range with a step size of 0.01° and a scan rate of 2.6° min −1 .
3.6 Macroscale piezoelectricity quantification

PENG devices (Fig. S??(b,c)) were first fabricated by applying electrodes onto each of the control, SRBW-synthesized and commercially-poled PVDF films via electron beam deposition (PRO Line PVD 75; The Kurt J. Lesker Company, Jefferson Hills, PA, USA). This process involved depositing a 10 nm Cr layer and a 100 nm Au layer onto the films. A shadow mask with a 0.56 cm 2 active area was first used to define the electrode placement on both sides of the material. Copper foil tape was then attached as a point of contact to establish a solid connection between the tape and the Cr/Au coating, following which wires were soldered onto the tape on both sides of the films to ensure proper contact. Finally, insulating polyimide tape (Kapton ® ; DuPont Company, Wilmington, DE, USA) was applied to both surfaces to fully enclose the films. Macroscale polarization measurements of each PENG device then involved subjecting it to an in-contact cyclic compressive force and measuring the output voltages. Briefly, a sinusoidal force at 1 Hz frequency with a preload of 10 N and minimum and maximum loads of approximately 5 and 15 N (∆F = 10 N), respectively, was applied to the active area of the PENG using a dynamic testing instrument (Electropuls E3000; Instron, Norwood, MA, USA), shown in Fig. S??(a). The electrical outputs, determined by connecting a known variable resistor (1 kΩ-10 GΩ) in parallel with the PENG device, were measured using a source meter unit (B2912A; Keysight Technologies, Mulgrave, VIC, Australia) at 0.1 s intervals.
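The variable-resistor sweep above is commonly interpreted with a simple Thevenin-source picture, in which the power delivered to the load peaks when the load resistance matches the device's internal impedance; a minimal sketch with hypothetical source parameters (not values measured in this work):

```python
# Sketch of the load-sweep analysis: for each load resistance, the power
# delivered to the load follows P = V^2 / R from the voltage measured across
# it. Modelling the PENG as a Thevenin source with open-circuit amplitude
# v_oc and internal impedance r_source (both hypothetical values below),
# the delivered power peaks where the load matches r_source.

def delivered_power(v_oc: float, r_source: float, r_load: float) -> float:
    """Power (W) dissipated in r_load by a Thevenin source (v_oc, r_source)."""
    v_load = v_oc * r_load / (r_source + r_load)
    return v_load ** 2 / r_load

if __name__ == "__main__":
    loads = [10 ** k for k in range(3, 11)]  # 1 kOhm .. 10 GOhm, as in the text
    powers = {r: delivered_power(5.0, 1e7, r) for r in loads}
    best = max(powers, key=powers.get)
    print(f"peak power at R = {best:.0e} Ohm")
```

This matched-load maximum is what the power-density-versus-resistance curves in Fig. 5(d-f) locate experimentally.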
Figure 1 :
Figure 1: Confocal Raman microscopy map of a 100 µm × 100 µm area on the surface of an EM-SRBW PVDF film, showing local distributions of the α- and β-phases, quantified by the relative intensity between the Raman β-phase peak at 839 cm −1 and the α-phase peak at 794 cm −1 (I β /I α ). The magnified inset represents an area of 10 µm × 10 µm.
Figure 2 :
Figure 2: Variation in the phase composition of EM-SRBW PVDF films synthesized at different input SRBW powers, as quantified using Raman spectroscopy.
Figure 6 :
Figure 6: (a) Setup for the macroscopic piezoelectric response measurements of the various piezoelectric energy harvesting (PENG) devices using a dynamic testing instrument (Electropuls E3000; Instron, Norwood, MA, USA).(b) Schematic of the component layout for the PENG devices.(c) Image of an example PENG device.
Figure 7 :
Figure 7: (a) Time domain waveforms and corresponding fast Fourier Transform (FFT) spectrum of signals from macroscopic piezoelectric response measurements with 0.11 mm contact separation, and, (b) under in-contact mode, for the EM-SRBW PENG device.
Figure 8 :
Figure 8: Measurement of the piezoelectric charge coefficient (d 33 ) for the 6 µm thick EM-SRBW and commercially-poled films using a piezoelectric meter with a 2 N static load.
Figure 9 :
Figure 9: (a) Charge-discharge profiles for PENG devices comprising 6 µm thick EM-SRBW PVDF films when one capacitor (1, 2.2 and 10 µF) was used, and, (b) 6 µm thick EM-SRBW and commercially-poled films when a 10 µF capacitor was utilised. The inset in (a) shows the variable capacitor being charged from voltages supplied by the PENG through the use of a bridge rectifier.

Characterization of Detergent-Insoluble Proteins in ALS Indicates a Causal Link between Nitrative Stress and Aggregation in Pathogenesis
Background

Amyotrophic lateral sclerosis (ALS) is a progressive and fatal motor neuron disease, and protein aggregation has been proposed as a possible pathogenetic mechanism. However, the aggregate protein constituents are poorly characterized, so knowledge on the role of aggregation in pathogenesis is limited.

Methodology/Principal Findings

We carried out a proteomic analysis of the protein composition of the insoluble fraction, as a model of protein aggregates, from a familial ALS (fALS) mouse model at different disease stages. We identified several proteins enriched in the detergent-insoluble fraction already at a preclinical stage, including intermediate filaments, chaperones and mitochondrial proteins. Aconitase, HSC70 and cyclophilin A were also significantly enriched in the insoluble fraction of spinal cords of ALS patients. Moreover, we found that the majority of proteins in mice, and HSP90 in patients, were tyrosine-nitrated. We therefore investigated the role of nitrative stress in aggregate formation in fALS-like murine motor neuron-neuroblastoma (NSC-34) cell lines. By inhibiting nitric oxide synthesis, the amount of insoluble proteins, particularly aconitase, HSC70, cyclophilin A and SOD1, can be substantially reduced.

Conclusions/Significance

Analysis of the insoluble fractions from cellular/mouse models and human tissues revealed novel aggregation-prone proteins and suggests that nitrative stress contributes to protein aggregate formation in ALS.
Introduction
Protein aggregation and deposits of abnormal proteins are hallmarks of several neurodegenerative diseases [1]. In familial forms the deposits frequently contain the mutant protein; in sporadic forms, post-translational modifications of proteins may be at the basis of the abnormal conformation. Aggregates are biochemically poorly characterized and what is known of the protein constituents comes essentially from immunohistochemistry studies. This is probably why their role in neurodegeneration remains poorly defined.
Amyotrophic lateral sclerosis (ALS) is a progressive and fatal motor neuron disease, and protein aggregation has been proposed as a possible pathogenetic mechanism [2]. Approximately 10% of ALS cases are familial; 20% of these are associated with mutations in the superoxide dismutase 1 (SOD1) gene. In SOD1-linked cases it is thought that the mutant protein acquires new toxic properties, such as the propensity to form aggregates [3,4]. The aggregation hypothesis has received great support because mutant SOD1 mouse models of ALS develop protein inclusions in motor neurons and in some cases in astrocytes. In addition, insoluble SOD1 complexes can start to be detected prior to disease onset [5,6]. Speculation has been offered on the mechanism of toxicity of SOD1-rich aggregates. For example, they may sequester other protein components essential for motor neuronal function, such as chaperones and anti-apoptotic molecules [7], inhibit the ubiquitin-proteasome system [8] and, by associating with motor proteins, impair axonal transport [9]. Insoluble mutant SOD1 was found associated with mitochondria and proposed as the basis of mitochondrial dysfunction [10].
In sporadic and familial ALS patients the most widely observed inclusions immunostain for ubiquitin, and other protein constituents are largely unknown [11]. Immunohistochemistry studies have detected proteins such as HSC70 [12], p38 MAP kinase [13] and TDP-43 [14] as constituents of the inclusions in ALS patients. In mutant SOD1 mice, protein inclusions are mainly immunoreactive for SOD1 and ubiquitin but also contain HSC70 and p38 MAPK [13]. We have shown that in the spinal cord of mice over-expressing hSOD1, carrying the G93A mutation (G93A SOD1 mice), there is progressive accumulation of mutant SOD1, its oligoubiquitinated forms and other unknown proteins in the Triton X-100-insoluble fraction (TIF) [5,15]. We have now used proteomic approaches to characterize the protein composition of TIF, as a model of protein aggregates, in G93A SOD1 mice at different stages of disease. We identified several proteins enriched in TIF of ALS mice, most of them nitrated. Interestingly, we already detected increased protein nitration in the spinal cord soluble fraction of the G93A SOD1 mouse [16] and in the peripheral blood monuclear cells of ALS patients [17]. We therefore investigated the role of nitrative stress in aggregate formation in a cellular model of ALS and showed that by inhibiting nitric oxide synthesis it is possible to interfere with aggregation of proteins such as aconitase, HSC70, cyclophilin A (CypA) and SOD1.
Results
In the spinal cord of G93A SOD1 mice we have observed progressive accumulation of Triton-insoluble proteins: mutant SOD1, its oligoubiquitinated forms and other unknown proteins [5,15]. TIF from spinal cords of mutant mice are also enriched in polyubiquitinated proteins (Figure S1), and therefore have the fundamental biochemical features of protein inclusions in SOD1-linked ALS. For these reasons TIF was used as our experimental model of protein aggregates. In this study we characterized TIF of the spinal cord of G93A SOD1 mice at different disease stages.
Proteomic Analysis of TIF from Spinal Cord of WT and G93A SOD1 Mice with Advanced Disease

We started by analyzing TIF from an advanced stage of disease, when protein aggregates are most abundant. TIF averaged 3.6 ± 0.7 µg (n = 5) per mg of spinal cord tissue in G93A SOD1 mice at the end stage and 2.7 ± 0.5 µg (n = 5) in age-matched wild-type (WT) SOD1 mice (p < 0.05, as assessed by Student's t test). We analyzed the same amounts of TIF from spinal cord of G93A SOD1 mice and age-matched WT SOD1 mice by two-dimensional gel electrophoresis (2DE). Figure 1 shows 2-D average maps of G93A and WT samples. Gel images were analyzed and compared. The analysis detected changes in protein composition of TIF in the two conditions. There were 42 spots uniquely present in G93A samples (unmatched) and 94 spots with different volumes in G93A in comparison with WT samples; 62 were more present in G93A samples and 32 more present in WT samples. We defined the proteins similarly present in both samples as intrinsically poorly soluble in non-ionic detergents (the background), and the ones enriched or only present in G93A samples as protein aggregate constituents. After comparison of gel patterns, 136 differentially present spots were excised from the gels and processed for protein identification.

Figure 1. 2DE proteomic analysis. Representative Sypro Ruby-stained 2DE maps of TIF of late-symptomatic G93A SOD1 mice (A) and age-matched WT SOD1 mice (B). In panel A the numbered spots correspond to proteins enriched or only present in TIF of G93A samples, and in panel B they indicate proteins enriched in TIF of WT samples. The same amount of protein was loaded in each gel (75 µg). The asterisk indicates the spot corresponding to GFAP, which is the most prominent, but equally abundant in the two conditions, and was therefore considered as background. doi:10.1371/journal.pone.0008130.g001
Identification of Differentially Present Proteins by MALDI-TOF
Peptide mass fingerprinting spectra were recorded on a MALDI-TOF mass spectrometer and proteins were identified by a database search using the MASCOT program. The proteins enriched or only present in TIF from G93A samples are reported in Table 1 and Table S1. They belong to different functional categories: cytoskeletal proteins, metabolic enzymes, mitochondrial proteins, chaperones, proteins involved in signalling, and mutant SOD1. p38 MAPK, previously found by immunohistochemistry in the inclusions in spinal motor neurons of these mice [13], was enriched in TIF of G93A SOD1 mice by Western blotting (WB) with the specific antibody (Fig. 2). The most abundant protein spot in the 2D gels (labelled with the asterisk in Fig. 1) was GFAP, which was not differentially present. Fragments (spots 37, 41, 42, 43, 55 and 56) and a high-Mw isoform (spot 5) of GFAP were instead specifically enriched in G93A samples. Some proteins were more present in TIF from WT mice and were therefore selected for protein identification (Table S2). We could identify intermediate filament proteins, which are known to be enriched in TIF and were more present in WT samples since equal amounts of total protein were loaded in the 2D gels for WT and G93A samples. Clearly, the lower level of neurofilaments in G93A samples correlates with the consistent motor neuron loss in G93A SOD1 mice with advanced disease.
Validation Analysis in Mouse and Human Spinal Cord Samples
Some of the proteins enriched in TIF of G93A samples were selected for validation by WB: HSP90, aconitase, HSC70, ERK2, 14-3-3 gamma, and CypA. Fig. 2 shows representative WB of the same amounts of TIF from spinal cord of WT and G93A SOD1 mice probed with the specific antibodies. In all cases the enrichment of the proteins analyzed by 2DE was confirmed by WB (Fig. 2, Table S3). The levels of these proteins were also measured in the soluble fraction. HSP90, aconitase, ERK1/2 and 14-3-3 gamma were similarly present in the soluble fraction of spinal cord of WT and G93A SOD1 mice, while HSC70 and CypA, abundantly expressed in neurons [18,19], were substantially lower in G93A SOD1 mice, probably because of motor neuron loss.
TIF was also extracted from spinal cord tissues of sporadic ALS patients and controls. Significantly more TIF was obtained from patients than controls, averaging 2.3 ± 0.3 µg (n = 7) in comparison with 1.7 ± 0.5 µg (n = 3) per mg of tissue analyzed (p < 0.05, as assessed by an unpaired t test with Welch's correction). The levels of HSP90, aconitase, HSC70, ERK1/2 and CypA were measured by dot blot analysis. Interestingly, CypA, aconitase, and HSC70 were significantly enriched in TIF of patients (Fig. 3). The level of the same proteins in the soluble fraction did not change (data not shown).
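The group comparison above, an unpaired t test with Welch's correction (which drops the equal-variance assumption and adjusts the degrees of freedom via the Welch-Satterthwaite formula), can be sketched as follows; the per-subject TIF yields are hypothetical values chosen only to mirror the reported group sizes:

```python
import math

# Sketch of an unpaired t test with Welch's correction for comparing TIF
# yields between two groups of unequal size and variance. The subject-level
# values below are hypothetical, not the study's raw data.

def welch_t(a: list[float], b: list[float]) -> tuple[float, float]:
    """Return (t statistic, Welch-Satterthwaite degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

if __name__ == "__main__":
    patients = [2.1, 2.6, 2.3, 2.0, 2.5, 2.4, 2.2]  # n = 7, per-mg TIF yields
    controls = [1.4, 1.9, 1.8]                       # n = 3
    t, df = welch_t(patients, controls)
    print(f"t = {t:.2f}, df = {df:.1f}")
```

The t statistic and the (non-integer) degrees of freedom are then referred to the t distribution for the p-value, which is what `scipy.stats.ttest_ind(..., equal_var=False)` does in one call.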
DIGE Analysis of TIF from Spinal Cord of G93A SOD1 Mice at Different Stages of Disease
We then measured the levels of aggregated proteins at earlier disease stages, pre-symptomatic and early symptomatic. TIF from spinal cord of G93A SOD1 mice at the three different stages was analyzed by DIGE and compared with TIF from spinal cord of WT SOD1 mice (Figure S2 and Table S4). Of the 66 protein spots analyzed, 35 were more present in the G93A samples than in WT already at 12 weeks of age, while 19 accumulated only at the end stage. For example, the neurofilament proteins L (NFL) and M (NFM) accumulated in TIF of G93A SOD1 mice at 12 weeks of age, while at end-stage disease the level of the insoluble proteins fell, in parallel with the motor neuron loss. Mitochondrial proteins such as NADH-ubiquinone oxidoreductase and aconitase accumulated at all ages, as did the chaperone proteins HSP90 and HSC70. Insoluble 14-3-3 protein gamma was not recovered in TIF of WT mice but was present at all ages in G93A SOD1 mice. Proteins involved in glycolytic pathways, fructose-bisphosphate aldolase C (aldolase C) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH), greatly accumulated, only or especially at end-stage disease, as did ERK2.
Immunohistochemistry of Aconitase
To verify the localization of proteins identified in the proteomic screening, we performed immunostaining analysis on spinal cord sections of G93A SOD1 mice at pre-symptomatic and end stages of disease. We selected aconitase, which had no known prior association with inclusions in ALS. Double labelling with anti-aconitase and anti-cytochrome oxidase in the lumbar spinal cord of control samples (Figure S3) showed that the punctate labelling of aconitase and that of the mitochondrial marker cytochrome oxidase largely overlapped in neuronal cell bodies and profiles scattered in the neuropil. Aconitase immunoreactivity in human SOD1-labelled motor neurons of WT SOD1 control mice (Fig. 4A-C) was similar to that in non-transgenic mice (data not shown) and uniformly distributed in small puncta around the nucleus. In contrast, in human SOD1-labelled motor neurons of G93A SOD1 mice aconitase immunoreactivity was found in large puncta already at the pre-symptomatic stage of disease (Fig. 4D-F), and at the end stage it occasionally co-localized with human SOD1 also in neuropilar aggregates (Fig. 4G). Electron microscopy confirmed that the anti-aconitase antiserum used selectively labelled mitochondria in spinal cord samples of control and transgenic mice (Fig. 5). In the ventral horn of non-transgenic mice, labelled mitochondria were present in myelinated axons, cell bodies and dendrites, close to unlabelled mitochondria (Fig. 5A-C). In G93A SOD1 mice at both pre-symptomatic (Fig. 5D, E) and end-stage (Fig. 5F, G) ages, intense aconitase staining was found in numerous mitochondria located in dendrites (Fig. 5D, E, G) and cell bodies (Fig. 5F). Several labelled mitochondria appeared swollen (Fig. 5E) and were frequently aggregated in clusters or apposed at the inner membrane of vacuoles (Fig. 5D, G).
Only in end-stage G93A SOD1 mice did the anti-aconitase antiserum also label clumps of amorphous material scattered in the cytoplasm of large neuronal cell bodies identifiable as motor neurons (Fig. 5H).
The Majority of the Insoluble Proteins Are Tyrosine Nitrated
We have previously shown a high level of protein nitration in the soluble fraction of the spinal cord of G93A SOD1 mice already at a pre-symptomatic stage of the disease, increasing as the disease progresses [16]. We considered that protein nitration may be involved in the aggregation by altering protein structure and stability. We analyzed protein nitration in spinal cord TIF at early symptomatic and end-stage disease. Figure 6 shows a representative 2D WB of TIF from early symptomatic G93A SOD1 mice probed with anti-nitrotyrosine polyclonal antibody. Similar results were obtained with the monoclonal anti-nitrotyrosine antibody. Surprisingly, the majority of the protein spots in TIF, 39 out of 69 (Table 2), were nitrated and gave a very intense signal (Fig. 6A), especially the mitochondrial protein aconitase (spots 11, 12, 13, 14), HSC70 (spot 17) and the intermediate filament proteins NFL (spot 16), alpha-internexin (spot 21), vimentin (spot 31) and GFAP (spot c); NFM (spots 1, 4) and NFH (spot 2), although abundant (Fig. 6B), were only mildly nitrated (Fig. 6A). Nitrated proteins in TIF from WT SOD1 samples were hardly detected (Figure S4). To check whether there is a parallel with the human disease, the level of nitrated HSP90 (spot 8 in the mouse experiment), for which a specific antibody is available [20], was measured in the TIF of sporadic ALS patients and controls by dot blot analysis. Interestingly, the TIF of ALS patients showed enrichment of the nitrated protein (Fig. 6C). The level of nitrated HSP90 in the soluble fraction was not changed (data not shown).
L-NAME Reduces the Level of Detergent-Insoluble Proteins in a Cellular Model of fALS
To investigate whether protein nitration has a causative role in aggregate formation or is just a consequence of the longer exposure of protein inclusions to oxidative stress, we used the NSC-34 cell line expressing G93A hSOD1, or WT hSOD1 as control. These cells did not produce evident aggregates under basal conditions; however, they were induced to accumulate insoluble proteins by treatment with a proteasome inhibitor (MG132), similarly to previous findings [21]. Under these conditions double the amount of TIF was isolated from WT and G93A SOD1 expressing cells compared to untreated cells (Fig. 7A). The cellular TIF from mutant SOD1 cells had the biochemical features of the one isolated from the spinal cord of mutant SOD1 mice: high levels of mutant SOD1 (Fig. 7G). (In Table 1, the proteins are categorized by their known function; those in bold were found only in G93A samples, and the others are sorted by their fold change from highest to lowest.) This cellular model was used to examine the ab initio aggregate formation of some of the proteins found in the TIF of the mice, also present in the cellular TIF. We measured the effect of a non-selective nitric oxide synthase (NOS) inhibitor, L-NAME, on the insolubility of aconitase and HSC70, nitrated in the mice, CypA and SOD1, susceptible to other types of oxidative modifications [22][23][24], and nitrated HSP90 (Fig. 7C-G). Fig. 7A-B shows that L-NAME reduced the total MG132-induced TIF in NSC-34 cells expressing G93A SOD1 by 56%, and this reduction paralleled the reduction of nitrated proteins (52%). Specifically, L-NAME reduced the amount of nitrated HSP90 by 81%, aconitase by 72%, HSC70 by 86%, CypA by 91% and SOD1 by 61% in TIF of G93A SOD1 NSC-34 cells (Fig. 7C-G). L-NAME also had an effect in TIF isolated from G93A SOD1 cells under basal conditions, and although small it was significant for nitrated HSP90 and aconitase (Fig. 7C, D).
The reduction of MG132-induced TIF in WT SOD1 cells was smaller (18%) and was never significant for the single proteins analyzed. It is noteworthy that MG132 alone did not raise the level of nitrated proteins in TIF from WT SOD1 cells (Fig. 7B). These data suggest that the increase in nitrated proteins in G93A SOD1 cells must be attributed to the increased oxidative/nitrative stress caused, directly or indirectly, by mutant SOD1 [25][26][27][28]. The L-NAME treatment was therefore effective only in G93A SOD1 cells, possibly because only there did oxidative/nitrative stress play a role in the formation and consolidation of the aggregates. We measured cell death by quantifying extracellular LDH activity in cell lines expressing WT and G93A SOD1, treated with MG132, L-NAME or both (Fig. 7H). As expected, MG132 was toxic to both cell lines, but was significantly more toxic in G93A SOD1 expressing cells. Interestingly, L-NAME in combination with MG132 partially rescued cells from MG132-induced toxicity, reducing cell death by 16% and 13% in WT and G93A cells, respectively.
Discussion
We previously reported that mutant SOD1 and its oligoubiquitinated forms are abundantly recovered in TIF of the spinal cord from G93A SOD1 mice [5]. However, from that study we deduced that there were several unknown proteins in addition to mutant SOD1. The present proteomic analysis enabled us to identify 66 protein spots exclusively present in, or more abundant in, TIF of ALS mice than controls. To our knowledge, this is the first successful large-scale analysis of detergent-insoluble proteins in an ALS mouse model. This was possible because of the use of an optimized 2DE-based proteomic approach to isolate and analyze TIF. A previous attempt, based on liquid chromatography-electrospray ionization mass spectrometry, found primarily SOD1 and only traces of other abundant proteins [29]. It is known that mutant SOD1 forms aggregates in different cellular compartments such as mitochondria [10], endoplasmic reticulum [30] and perikarya [31]. We found insoluble proteins from these subcellular compartments, and showed that many of these proteins start to aggregate already at a pre-symptomatic stage of the disease, as does mutant SOD1 [5].
Intermediate filaments such as neurofilaments, vimentin and GFAP were the most abundant proteins recovered in TIF of WT and G93A SOD1 mice. However, high-Mw isoforms of NFM, NFL, vimentin and GFAP were only found in TIF of ALS mice. These may be ubiquitinated forms, but because of their low abundance we were not able to identify the modification by mass spectrometry. Immunocolocalization of ubiquitin and neurofilaments has already been observed in neuronal hyaline inclusions in G93A SOD1 mice and ALS patients [32,33]. Fragments of intermediate filaments already accumulated at a pre-symptomatic stage of the disease. GFAP and NFL fragments have been observed in spinal cords of ALS patients [34,35], and this may indicate increased activation of specific proteases or oxidation-induced protein fragmentation [36].
Several enzymes important in energy metabolism were also present. Their aggregation may explain the defective mitochondrial respiratory chain activities and ATP production in the mutant mice [37]. While glycolytic enzymes are recovered mainly at symptomatic stages of disease, insoluble mitochondrial enzymes accumulate already at a pre-symptomatic stage. This agrees with the observations of early alterations of mitochondria [38] and the presence of SOD1-rich aggregates in mitochondria of ALS mice [10]. Among the mitochondrial enzymes, mitochondrial aconitase, which is altered in aging and neurodegenerative diseases, is of special interest [39,40]. The enzyme is highly sensitive to oxidative inactivation and modifications [41]. We have reported that aconitase is susceptible to tyrosine nitration, as detected in the soluble fraction of the spinal cord of pre-symptomatic G93A SOD1 mice [16]. In this study we found it abundantly recovered, highly nitrated, in TIF. Accumulation is substantial already before the onset of disease, as confirmed by immunostaining analysis on spinal cord sections. In some cases it colocalized with SOD1, although it likely can also aggregate independently of G93A SOD1. Accumulation of the insoluble protein was also detected in spinal cord tissues of sporadic ALS patients.
Figure 6. Analysis of nitrated proteins in TIF of 17-week-old G93A SOD1 mice. 150 µg of TIF was loaded onto the 2D gel and transferred onto a PVDF membrane. The blot was probed with anti-nitrotyrosine polyclonal antibody (A), after total protein SYPRO Ruby blot staining (B). Nitrated protein signals of the 2D WB were matched and localized in a twin 2D gel and proteins were identified by peptide mass fingerprinting. Spot numbers in (A) correspond to proteins in Table 2. a, b and c are spots corresponding respectively to laminin subunit beta-2, VDAC and GFAP, which were not specifically increased in G93A TIF in the proteomic screening.
This confirms a mitochondrial alteration in the animal model and in patients, and makes aconitase a candidate sensitive biomarker of the human disease.
One of the functional categories highly represented in our analysis is that of chaperones. Chaperones are potent controllers of protein aggregation, promoting protein folding and refolding and cooperating in the degradation of irreversibly damaged proteins. They were greatly enriched in TIF of G93A SOD1 mice early in the disease and, except for HSC70, absent or scant in WT SOD1 mice. A specific interaction between chaperones and mutant SOD1, but not WT SOD1, is also indicated in other works [42,43]. Chaperone activity has been reported to be reduced in the spinal cord of G93A and G85R SOD1 mice before disease onset [44]. One possibility is that chaperones are sequestered by misfolded mutant SOD1, so they are less available for cytoprotective functions. This notion is borne out by the fact that increasing expression of HSP70 by gene transfer protected cultured motor neurons from mutant SOD1 toxicity [45], although overexpressing HSP70 alone was not effective in vivo [46]. As suggested by our analysis, which found several chaperones damaged, upregulating a panel of such proteins is likely to be a more successful pharmacological strategy.
Proteins involved in signalling were also enriched early in the disease. 14-3-3 protein gamma is a protein adaptor that recognizes phosphoserine-containing motifs of several target proteins and regulates signal transduction pathways. 14-3-3 proteins have been found in Lewy body-like hyaline inclusions in ALS patients [47]. These proteins may recognize the phosphorylated serine residues of neurofilaments and promote their abnormal accumulation, or remain entrapped in the inclusions. A similar situation may arise with ERK. ERK1/2 are MAP kinases, which are activated by various mechanisms and have more than 100 different substrates, including NFM, NFH and alpha crystallin [48]. It is possible that ERK1/2 are aberrantly activated and sequestered with their substrates in the aggregates. Finally, TDP-43 was not found among the aggregated proteins in the G93A SOD1 mice, as already reported in another study [49].
What is peculiar is that most of the proteins found in TIF are intrinsically soluble and stable, with no apparent reason to co-purify with insoluble mutant SOD1. The high affinity of chaperones for mutant misfolded SOD1 only partially explains the molecular determinants of aggregation. We have shown that the levels of proteins carrying an oxidative modification, tyrosine nitration, possibly induced by mutant SOD1 [25,26], are increased in the spinal cord soluble fraction of G93A SOD1 mice already at a pre-symptomatic stage of disease [16]. Interestingly, some of these nitrated proteins were also recovered in TIF, including HSC70, alpha enolase and ATPase. Nitrated NFL has been shown to inhibit the assembly of unmodified neurofilament subunits and may therefore underlie neurofilament aggregate formation [50]. Nitrated alpha synuclein and tau have been found in brains of patients with Parkinson's and Alzheimer's diseases [51,52]. However, in vitro, at least for alpha synuclein, the impact of nitration on aggregation is controversial [53,54].
Since no comprehensive study of the nitration pattern of insoluble proteins had ever been done, it was not possible to consider protein nitration as a potential general mechanism of protein aggregation. Using a proteomic approach we demonstrated that the majority of the proteins enriched in TIF of the ALS mouse were nitrated. In human tissues, at least one nitrated protein, HSP90, was found enriched in TIF of sporadic ALS patients. Thus nitration might have some role in aggregate formation in ALS. Nevertheless, from such ex vivo experiments we could not establish whether nitration was a consequence of the [24,55,56]. Inhibition of NO synthesis leads to a decrease in peroxynitrite formation, which in turn may reduce tyrosine nitration but also various cysteine oxidations, including disulfides and nitrosothiols. We therefore propose that L-NAME interferes more generally with oxidative modification-induced protein aggregation in the presence of mutant SOD1. Under this condition the reported decrease in the level of endogenous antioxidants might play a role [28,57]. However, in this cell paradigm we could not really evaluate the effect of reduced protein aggregation on cell viability: L-NAME only partially rescued cells from MG132 treatment, which is highly toxic at the concentration used to induce aggregate formation. It has been reported that in vivo treatments with NOS inhibitors were protective in some animal models of motor neuron degeneration but ineffective in other studies [58][59][60]. Although the role of NOS and the use of NOS inhibitors for therapeutic purposes are debated [59,61,62], our data provide additional indications of the importance of aiming pharmacological approaches at pathways that modulate nitrative stress; if regulated as early as possible, these may also influence downstream aggregation pathways.
In conclusion, a striking difference between WT and G93A SOD1 mice lies in the TIF and consists of the portion of the proteome that, damaged or altered under pathological conditions, loses its structural determinants and accumulates as insoluble material as the disease progresses. Some components of this insoluble fraction are also found in sporadic ALS patients, suggesting that they could be novel markers of the human sporadic forms. Finally, characterization of tyrosine-nitrated insoluble proteins showed that nitrative stress, induced by the SOD1 mutation or by other unknown instigating factor(s) in the sporadic forms, may contribute to protein aggregate formation in ALS.
Transgenic Mice
Transgenic G93A SOD1 mice, originally obtained from Jackson Laboratories and expressing about 20 copies of mutant human (h)SOD1 with a Gly93Ala substitution (B6SJL-TgN(SOD1-G93A)1Gur), or WT hSOD1 mice, were bred and maintained on a C57BL/6 genetic background at Harlan Italy S.R.L., Bresso (MI), Italy. Transgenic mice were identified by PCR. The mice were housed at 21±1°C with 55±10% relative humidity and a 12-h light/dark cycle. Food (standard pellets) and water were supplied ad libitum. Female G93A SOD1 mice were sacrificed at the pre-symptomatic (12 weeks of age), early symptomatic (17 weeks of age) and end stage (26 weeks of age) of the disease.
Human Samples
Frozen spinal cords from controls and ALS patients were partly from the Netherlands Brain Bank (NBB), Netherlands Institute for Neuroscience, Amsterdam, and partly provided by Michael Strong, Robarts Research Institute, London, Ontario. Postmortem delay was <12 h for the control subjects and <12 h (n = 3) or <24 h (n = 4) for the ALS patients. No abnormalities were detectable at autopsy in the spinal cord tissues of the three controls, who died of cardiac arrest, cancer and pneumonia. All ALS cases were negative for mutations in TDP-43 and SOD1. Table S5 reports the clinical and neuropathological characteristics of the ALS cases. All material was collected and used in compliance with the ethical and legal declarations of the Netherlands Brain Bank and Robarts Research Institute, after written informed consent from the donor or a legal representative.
Extraction of Detergent-Insoluble Protein
Tissues were processed as previously described [5]. Briefly, they were homogenized in ice-cold homogenisation buffer, pH 7.6, containing 15 mM Tris-HCl, 1 mM DTT, 0.25 M sucrose, 1 mM MgCl2, 2.5 mM EDTA, 1 mM EGTA, 0.25 M sodium orthovanadate, 2 mM sodium pyrophosphate, 5 µM MG132 proteasome inhibitor (Sigma) and 1 tablet of Complete™ Mini Protease Inhibitor Mixture (Roche Applied Science) per 10 mL of buffer. The samples were centrifuged at 10,000×g at 4°C for 15 minutes, obtaining a supernatant (S1) and a pellet. The pellet was suspended in ice-cold homogenisation buffer with 2% Triton X-100 and 150 mM KCl, sonicated three times for 10 sec and shaken for 1 hour at 4°C. Samples were then centrifuged twice at 10,000×g at 4°C for 10 minutes to obtain Triton X-100-resistant pellets (TIF) and a supernatant (S2). The soluble fraction is considered the pool of the S1 and S2 fractions. Proteins were quantified by the Bradford assay. To isolate TIF from human spinal cords, tissues were cut with a cryostat microtome and the sections were collected in a tube containing 10 volumes (w/v) of homogenisation buffer and processed as described for the mouse tissues. To isolate TIF from cells the protocol was slightly modified. Briefly, cells were directly lysed in 0.2% Triton X-100 and 150 mM KCl, sonicated and shaken for 1 hour at 4°C. Samples were then centrifuged at 100,000×g for 1 hour. The pellets were boiled in 50 mM Tris-HCl pH 6.8 with 2% SDS and analyzed. Proteins were quantified by the BCA protein assay (Pierce).
2DE
Samples were dissolved in 7 M urea, 2 M thiourea, 4% (w/v) CHAPS, 0.5% (v/v) IPG buffer (GE Healthcare) and 12 mg/mL DeStreak™ Reagent (GE Healthcare). Samples were pools of TIF from five mice for each genotype. Aliquots of 75 µg were loaded in each 2D gel by in-gel rehydration (1 h at 0 V, 270 Vhr at 30 V) on pH 3-10 non-linear 7-cm IPG strips (GE Healthcare). IEF was done on an IPGphor (GE Healthcare) according to the following schedule: 200 Vhr at 200 V, 925 Vhr of a linear gradient up to 3500 V, 10500 Vhr at 3500 V, 14375 Vhr of a linear gradient up to 8000 V, 48000 Vhr at 8000 V. Strips were then re-equilibrated in NuPAGE LDS Sample Buffer (Invitrogen) and the second dimension was run on precast 4-12% polyacrylamide gradient NuPAGE® Bis-Tris gels (Invitrogen). Gels were stained with SYPRO® Ruby protein gel stain (Invitrogen).
2D Image Analysis and Quantification
Changes in protein spot volumes were calculated by comparing gels from pools of five samples, from 26-week-old WT and G93A SOD1 mice, run in triplicate. Gel images were captured by the laser scanner Molecular Imager® FX (Bio-Rad) and 2D image analysis was done with Progenesis PG240 v2006 software (Nonlinear Dynamics). The analysis protocol for the gel images included: spot detection, warping, background subtraction, averaged gel creation, matching and reference gel modification. Detection, warping and matching of the protein spots were done using the "combined warp and match" algorithm, which uses a non-parametric pattern recognition clustering technique to align different gel images. The "Total spot volume normalization" algorithm was used to calculate each protein spot volume as the sum of the intensities of the pixels within the spot's boundary, minus the background level within that same boundary, normalized to the total spot volume in the gel. Observed pI and Mr were calculated by the software based on protein spots of known characteristics.
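The "total spot volume normalization" described above is simple arithmetic: background-corrected spot intensities divided by their gel-wide sum. The sketch below only illustrates that arithmetic; the function and variable names are invented and this is not the Progenesis implementation.

```python
import numpy as np

def normalized_spot_volumes(spot_pixel_intensities, spot_backgrounds):
    """Illustrative sketch of total-spot-volume normalization:
    each spot volume is the sum of pixel intensities within the spot
    boundary minus the background within that boundary, normalized
    to the sum of all spot volumes in the gel."""
    raw_volumes = np.array([
        np.sum(pixels) - background
        for pixels, background in zip(spot_pixel_intensities, spot_backgrounds)
    ])
    return raw_volumes / raw_volumes.sum()

# Two toy spots: summed intensities 6 and 5, backgrounds 1 and 1,
# giving raw volumes 5 and 4 before normalization.
volumes = normalized_spot_volumes([np.array([2, 2, 2]), np.array([5])], [1, 1])
```

Normalizing to the total spot volume makes spot quantities comparable across gels that differ in overall staining intensity.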
Protein Identification
Protein spots were located and excised with an EXQuest™ spot cutter (Bio-Rad). Spots were processed and in-gel digested with trypsin, as previously described [16]. Tryptic digests were concentrated and desalted using ZipTip pipette tips with C18 resin and 0.2 µL bed volume (Millipore). Peptide mass fingerprinting was done on a ReflexIII™ MALDI-TOF mass spectrometer (Bruker Daltonics) equipped with a SCOUT 384 multiprobe inlet and a 337-nm nitrogen laser, using α-cyano-4-hydroxycinnamic acid as matrix, prepared as previously described [63]. All mass spectra were obtained in positive reflector mode with a delayed extraction of 200 ns. The reflector voltage was set to 23 kV and the detector voltage to 1.7 kV. All other parameters were set for optimized mass resolution. To avoid detector saturation, low-mass material (<500 Da) was deflected. The mass spectra were internally calibrated with trypsin autolysis fragments. The mass spectra were obtained by averaging 150-350 individual laser shots and then automatically processed by the FlexAnalysis software, version 2.0, using the Savitzky-Golay smoothing algorithm and the SNAP peak detection algorithm. Database searches (Swiss-Prot, release 57.3, June 2009) were done using the Mascot software package available online (http://www.matrixscience.com), allowing up to one missed trypsin cleavage, carbamidomethylation of Cys and oxidation of Met as variable modifications, and a mass tolerance of ±0.1 Da, over all Mus musculus protein sequences deposited. A protein was regarded as identified if the following criteria were fulfilled: the probability-based MOWSE [64] score was above the 5% significance threshold for the database, and the spots excised from at least two different gels gave the same identification.
DIGE Analysis
The four experimental groups were G93A SOD1 mice at 12, 17 and 26 weeks of age and WT SOD1 mice at 26 weeks of age. Equal amounts of TIF from spinal cords of five mice from each group were pooled. Samples were labelled according to the manufacturer's instructions (GE Healthcare) with minor modifications. Briefly, 25 µg of each pool was labelled with 200 pmol of Cy3 or Cy5 dye for 30 min on ice in the dark. To exclude preferential labelling by the dyes, each sample was also reverse-labelled. As an internal standard, aliquots of each pool were mixed and labelled with Cy2 dye. Four 2D gels were run as described in the 2DE section. Each gel contained two experimental groups, one Cy3-labelled and the other Cy5-labelled, plus the Cy2-labelled internal standard. Gel images were captured by the laser scanner Molecular Imager FX (Bio-Rad). Image analysis was done with Progenesis PG240 v2006 software (Nonlinear Dynamics). The spots analyzed were those differentially expressed in the end-stage analysis. For each spot, the normalized volume was standardized against the intra-gel standard by dividing each spot's normalized volume by the corresponding internal-standard spot volume within the same gel. The value for each spot in each group was expressed as the mean of the values from the Cy3- and Cy5-labelled analyses.
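The DIGE standardization above reduces to two arithmetic steps: a per-gel ratio against the Cy2 internal standard, then averaging the dye-swap measurements. The sketch below is only illustrative (all names and values are invented), assuming the dye-swap values are averaged as described:

```python
def standardize_to_internal_standard(spot_volume, cy2_standard_volume):
    """Divide a spot's normalized volume by the matching Cy2
    internal-standard spot volume from the same gel."""
    return spot_volume / cy2_standard_volume

def dye_swap_mean(cy3_ratio, cy5_ratio):
    """Each group value is the mean of the Cy3- and Cy5-labelled analyses."""
    return (cy3_ratio + cy5_ratio) / 2.0

# Hypothetical spot measured in two reciprocally labelled gels.
ratio_gel1 = standardize_to_internal_standard(0.40, 0.20)  # Cy3-labelled gel
ratio_gel2 = standardize_to_internal_standard(0.36, 0.20)  # Cy5-labelled gel
group_value = dye_swap_mean(ratio_gel1, ratio_gel2)        # ≈ 1.9
```

Dividing by the shared Cy2 standard removes gel-to-gel variation before groups run on different gels are compared.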
Analysis of TIF from Human Samples
Total TIF is considered the ratio of the amount of isolated TIF to the total proteins extracted. Proteins were quantified by the BCA protein assay (Pierce). An aliquot of TIF (3 mg) from the postmortem samples was loaded on nitrocellulose membrane, Trans-Blot Transfer Medium (Bio-Rad), by vacuum deposition on the Bio-Dot SF blotting apparatus (Bio-Rad). Membranes were probed with the specific primary antibodies and then with goat anti-rabbit or anti-mouse peroxidase-conjugated secondary antibodies (Santa Cruz Biotechnology). Blots were developed by Immobilon Western Chemiluminescent HRP Substrate (Millipore) on the ChemiDoc XRS system (Bio-Rad). Densitometry was done with Progenesis PG240 v2006 software (Nonlinear Dynamics). Immunoreactivity was normalised to the actual amount of proteins loaded on the membrane as detected after Red Ponceau staining (Fluka).
Immunohistochemistry
Female mice (at least four for each group) were anesthetized with Equitensin (1% phenobarbital/4% (vol/vol) chloral hydrate, 6 µL/g, i.p.) and transcardially perfused with 20 mL of sodium phosphate buffer (PBS) followed by 50 mL of 4% paraformaldehyde solution in PBS. Spinal cords were rapidly removed and post-fixed as previously described [66]. Immunolabelling was done on lumbar spinal cord sections (30-µm-thick floating cryosections or 40-µm-thick vibratome sections). Endogenous peroxidases were inactivated by 1% hydrogen peroxide in PBS (135 mM NaCl, 2.6 mM KCl, 10 mM Na2HPO4, 1.76 mM KH2PO4, pH 7.4). The sections were incubated with 5% normal goat serum (NGS) in PBST (PBS + 0.3% Triton X-100) for 1 h at RT, then probed overnight at 4°C in 5% NGS/PBST with a rabbit polyclonal anti-aconitase antibody (1:500, kindly provided by Dr. L.I. Szweda). Subsequently the sections were washed in PBS and incubated for 1 h at RT in 5% NGS/PBST with a secondary biotinylated anti-rabbit antibody, diluted 1:200, from Vector. The secondary antibody was revealed with a TSA amplification kit, Cy5 (Perkin Elmer), as previously described [66]. For SOD1 labelling, the mouse monoclonal anti-human SOD1 antibody MO62-3 (clone 1G2, 1:3000, MBL, Japan) was used, and for mitochondrial labelling the mouse monoclonal anti-cytochrome oxidase subunit I antibody (clone 1D6, 1:200, Molecular Probes) was used. Fluorescence-labelled sections were mounted with Fluorsave (Calbiochem) and analyzed under an Olympus Fluoview or a TCS NT Leica laser scanning confocal microscope. Selected vibratome sections, permeabilised with ethanol instead of Triton, were processed for the ultrastructural detection of aconitase by a standard immunoperoxidase procedure with the Vectastain ABC kit (Vector) and diaminobenzidine tetrahydrochloride (DAB) as a chromogen. After visualization of the reaction product with DAB, sections were osmicated, dehydrated and flat-embedded in epoxy resin.
Selected areas of the embedded sections were then cut with a razor blade and glued to blank blocks of resin for further sectioning with an ultramicrotome. Thin sections collected on copper grids were counterstained with lead citrate and observed and photographed with a Zeiss 902 electron microscope.
Detection and Identification of Nitrated Proteins
TIF from WT and G93A samples were loaded on 7-cm IPG strips (pH 3-10, non-linear) and separated by IEF as described in the 2DE section, in duplicate. One 2D gel was stained with SYPRO® Ruby protein gel stain (Invitrogen) and the other was transferred onto PVDF membrane (Millipore) and probed overnight at 4°C with an anti-nitrotyrosine rabbit polyclonal antibody, provided by A.G. Estevez, or the anti-nitrotyrosine mouse monoclonal antibody (clone HM.11; HyCult Biotechnology). Results were visualized with the Qdot® 800 goat anti-rabbit or anti-mouse IgG conjugate antibody (Invitrogen), capturing the images with the laser scanner Molecular Imager FX (Bio-Rad). Nitrated protein signals of the 2D WB were localized in the 2D gel by the specific warping algorithm of the Progenesis software and processed for identification by peptide mass fingerprinting. No false immunopositive spots were detected, as tested by treating membranes with sodium dithionite as described previously [16]. Dot blot analysis of TIF of human tissues was done with the anti-nitrated HSP90 monoclonal antibody, prepared and characterized as described [20].
NSC-34 Cell Lines
WT or G93A SOD1-expressing NSC-34 cells were obtained by stably transfecting an NSC-34-derived line expressing the tetracycline-controlled transactivator protein tTA with pBI-EGFP containing WT or G93A hSOD1 cDNA cloned downstream of the tetracycline-responsive bidirectional promoter [67]; the lines express similar amounts of human and murine SOD1 (Figure S5). Cell lines were kept in culture in DMEM (high glucose) supplemented with tet-screened FBS (5%, Clontech), 1 mM glutamine, 1 mM pyruvate, antibiotics (100 IU/mL penicillin and 100 µg/mL streptomycin), G418 sulphate (0.5 mg/mL) (Invitrogen) and hygromycin (0.2 mg/mL) (Invitrogen). For this study cells were grown without doxycycline in the medium, allowing full expression of transfected WT or G93A SOD1.
Cell Treatments
A 10-mM stock solution of MG132 (Calbiochem) was prepared in dimethylsulfoxide and a 30-mM stock solution of L-NAME (Sigma) in PBS. Cells were seeded at a density of 6850 cells/cm² in T25 flasks and grown under standard conditions for six days, then treated with MG132, L-NAME or both (5 µM and 300 µM final concentrations, respectively). After 24 h, the cells were detached with 1× PBS and washed once with 1× PBS, then recovered by centrifugation at 250×g for 10 minutes. The cell pellets were stored at −80°C until analysis of protein aggregates.
Analysis of Cellular TIF
Three cell pellets from different flasks for each genotype and condition were independently processed to obtain the TIF. A sample of 3 mg TIF for each condition was loaded on nitrocellulose membrane and analyzed on the Bio-Dot SF blotting apparatus (Bio-Rad) with the antibodies for the specific proteins, as described for the human samples. Immunoreactivity values were multiplied by the total TIF from each cell pellet. Total TIF is considered the ratio of the amount of isolated TIF to the total proteins extracted. Proteins were quantified by the BCA protein assay (Pierce).
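The quantification described above amounts to scaling dot-blot immunoreactivity by the insoluble fraction of each pellet. A minimal sketch of that arithmetic follows; the function names and example numbers are hypothetical, not taken from the original analysis:

```python
def total_tif(tif_protein, total_protein):
    """'Total TIF' = amount of isolated detergent-insoluble protein
    divided by the total protein extracted from the same pellet."""
    return tif_protein / total_protein

def scaled_immunoreactivity(immunoreactivity, tif_protein, total_protein):
    """Dot-blot immunoreactivity multiplied by the total TIF of the
    pellet, as described in the text."""
    return immunoreactivity * total_tif(tif_protein, total_protein)

# Toy example: 50 µg insoluble protein out of 1000 µg total protein,
# dot-blot signal of 1200 arbitrary units.
value = scaled_immunoreactivity(1200.0, 50.0, 1000.0)  # ≈ 60
```

Scaling by total TIF accounts for pellets that yield different overall amounts of insoluble material, so signals from different flasks remain comparable.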
Cytotoxicity Assay
Cell death was analyzed by quantifying extracellular lactate dehydrogenase (LDH) activity with the LDH cytotoxicity detection kit (Roche Applied Science), as described [68]. For each sample, the ratio of extracellular to intracellular LDH activities was obtained. Results were expressed as percentages of the untreated control cells of each cell line.
Figure S1. Representative anti-ubiquitin immunoblot of TIF from late-symptomatic G93A SOD1 and age-matched WT SOD1 mice. The same amount of TIF (30 µg) was loaded in each immunoblot. Found at: doi:10.1371/journal.pone.0008130.s001 (1.41 MB TIF)
Figure S2. Representative Cy-dye 2DE maps of TIF from spinal cord of G93A SOD1 mice at 12, 17 and 26 weeks of age in comparison with WT SOD1 mice. The same amount of protein (75 µg) was loaded in each gel, comprising a Cy3-labelled sample (25 µg), a Cy5-labelled sample (25 µg) and the Cy2-labelled internal standard (25 µg). Gel images were captured by the laser scanner Molecular Imager FX (Bio-Rad). Image analysis was done with Progenesis PG240 v2006 software (Nonlinear Dynamics). The spots considered in the analysis were the ones found differentially expressed in the end-stage analysis (Fig. 1 and Table 1). | 2014-10-01T00:00:00.000Z | 2009-12-02T00:00:00.000 | {
"year": 2009,
"sha1": "b38510dd5bad90a99f2d91b4c4d7d8f26aeab134",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0008130&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b38510dd5bad90a99f2d91b4c4d7d8f26aeab134",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
80538299 | pes2o/s2orc | v3-fos-license | Treatment for hepatocellular carcinoma in the caudate lobe: a report of 13 cases
ABSTRACT
Primary hepatocellular carcinoma (HCC) is one of the most common digestive tract tumors, and surgical resection is the preferred treatment. However, because the caudate lobe lies deep in the posterior part of the liver, surrounded by many large vessels, its resection is difficult and risky. Many surgeons have therefore attempted TACE-based non-operative treatment of HCC located in the caudate lobe, but its clinical effects are controversial. In addition, with the continuous improvement of surgical approaches, techniques and instruments, reports on caudate lobe resection are increasing, but the long-term outcome of surgical treatment of HCC located in the caudate lobe remains unknown. This article discusses the surgical treatment, surgical approaches and clinical effects of HCC located in the caudate lobe.
Clinical data
Thirteen patients with HCC in the caudate lobe, aged 28 to 65 years (11 males, 2 females), were included. All 13 patients had a history of hepatitis B, and 10 also had liver cirrhosis with Child grade A liver function. Tests on admission included liver function tests, hepatitis B serological markers, coagulation profile, alpha-fetoprotein, carcinoembryonic antigen, routine chest X-ray and abdominal color Doppler ultrasound, with CT or MRI for further examination of the lesions; tumor diameters ranged from 1.8 to 14 cm (mean 5.6 cm). According to Couinaud's segmental anatomy, 2 tumors were located in segment I, 3 in segment I invading the left lateral lobe, 1 in segment I invading the left medial lobe, 3 in segment IX invading the right posterior lobe of the liver, 2 in segments I+IX, and 2 in segments I+IX invading the left medial lobe (Figure 1). 2 Of the 13 patients, 2 were not treated for personal reasons, 2 underwent TACE (segments I and I+IX), and the other 9 received surgical treatment. The Seldinger technique was used for transcatheter angiography of the femoral artery, abdominal aorta, celiac trunk and hepatic artery, and chemoembolization was performed once the nutrient arteries of the tumors had been identified (Figure 2).
Resection
The surgical approach and method were selected according to the imaging findings, general condition and liver function of each patient. Patients underwent partial or total caudate lobectomy combined with resection of other hepatic segments through a reverse-L incision. The surgical approaches included the left approach, the right approach and the combined left-right approach; the anterior approach was not used (Table 1).
Intermittent hepatic blood flow occlusion was the main occlusion method, sometimes combined with hemihepatic blood flow occlusion. The challenge of the surgery was to establish occlusion of the suprahepatic and infrahepatic vena cava.
RESULTS
The two patients who underwent TACE tolerated the treatment well, with only mild post-procedural symptoms such as fever, nausea and abdominal distension, which resolved shortly after symptomatic treatment. One patient was found to have metastasis to the right posterior lobe of the liver at reexamination before a second TACE one month later; the other showed left hepatic lobe metastasis at the third TACE two months later; the primary lesions changed little. These 2 patients then continued TACE for another 3-4 sessions, with poor effect. Nine patients underwent operation successfully, with operative times of 220-350 minutes (mean 259 minutes) and intraoperative vascular occlusion times of 21-45 minutes (mean 30 minutes). The 9 patients were followed up for 2 years: their 1-year and 2-year recurrence rates were 44.4% and 88.9%, respectively, and 1 patient developed pulmonary metastasis; their 1-year and 2-year survival rates were 66.7% and 44.4%, respectively.
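The reported percentages are consistent with simple proportions of the nine resected patients (4/9, 6/9 and 8/9). The following check is illustrative only; the event counts are inferred from the percentages, not stated in the source:

```python
def percent(events, n):
    """Proportion expressed as a percentage, rounded to one decimal place."""
    return round(100.0 * events / n, 1)

one_year_recurrence = percent(4, 9)  # 44.4
two_year_recurrence = percent(8, 9)  # 88.9
one_year_survival = percent(6, 9)    # 66.7
two_year_survival = percent(4, 9)    # 44.4
```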
DISCUSSION
The caudate lobe has a complex anatomic structure and lies deep in the liver, so surgery for HCC in the caudate lobe is difficult and high-risk, and many scholars have attempted to treat it by non-operative methods such as TACE, radiofrequency ablation and ethanol ablation to avoid operative risks. 3 Studies have shown that TACE can be effective for caudate lobe HCC, the key being an accurate understanding of the blood supply to the tumor and selection of appropriate embolization. 4 However, the caudate lobe contains many feeding and nutrient arteries, so the effect of TACE is often not ideal. In this study, the 2 patients who underwent TACE had poor results and developed intrahepatic metastasis in the short term, indicating that the clinical effect of TACE for HCC located in the caudate lobe is controversial. In addition, the caudate lobe is surrounded by a large number of vessels, so improper puncture may cause massive bleeding, and it is relatively difficult to select an appropriate route to reach the lesion, which limits the application of radiofrequency and ethanol ablation. Therefore, surgical resection remains the first choice for the treatment of HCC in the caudate lobe.
The patients in this study mainly underwent the left approach, the major approach for Spiegel lobe tumors, combined with resection of the left lobe of the liver when the tumor was large or involved the left lobe. The right approach is mainly used for resection of HCC located in the right caudate lobe close to the paracaval portion of the vena cava or the caudate process, with the right lobe also removed when the tumor is giant or invades adjacent hepatic lobes; the combined left-right approach is mainly suitable for tumors invading the whole caudate lobe or too large to be exposed by a simple left or right approach. 5 In addition, the anterior approach, also known as the median (middle) approach, is another common approach, mainly for removal of giant caudate lobe tumors, tumors invading the hepatic veins, or patients with severe hepatic cirrhosis, since it preserves non-tumor-bearing hepatic tissue as much as possible and thereby helps prevent postoperative liver failure. 6 Yang et al. showed that the anterior approach is optimal for isolated caudate lobectomy because of full exposure of the caudate lobe and low risk. 7 Surgical approaches are still being explored: Vigano et al. 8 proposed ultrasound-guided lateral and superior approaches for resection of caudate lobe HCC, using intraoperative ultrasound for precise positioning, in patients with large tumors or poor liver function, providing a new idea for the clinical treatment of HCC in the caudate lobe.
All patients in this group underwent resection of part or all of the caudate lobe together with other hepatic segments; although in 2 patients the tumor was confined to the caudate lobe, we still resected other hepatic segments without compromising liver function. Because of the special anatomical location of the caudate lobe, it is very difficult to obtain clear tumor margins by caudate lobectomy alone. Although reports of isolated caudate lobectomy are increasing, other liver segments should still be removed in addition to the caudate lobe, when liver function permits, so as to clear all tumor margins. Moreover, caudate lobectomy combined with removal of other hepatic segments reduces operative difficulty because the caudate lobe is fully exposed.
Recurrence and metastasis of HCC greatly reduce the long-term curative effect of surgical treatment. Studies have identified many risk factors for recurrence of hepatocellular carcinoma after resection, including hepatitis B virus infection, tumor diameter, tumor number, tumor capsule, vascular invasion, portal vein tumor thrombus, surgical margin width, intraoperative bleeding and blood transfusion. [9][10][11] It has been reported that the 1-year and 2-year recurrence rates of HCC in our country are 38.7% and 57.9%, respectively. 12 In this study, the 1-year and 2-year recurrence rates of patients with HCC in the caudate lobe were 44.4% and 88.9%, significantly higher than the average rates mentioned above, suggesting that the prognosis of surgical treatment of HCC in the caudate lobe is worse than that of HCC in other parts of the liver, consistent with the results of similar studies. 13 Factors contributing to this worse prognosis include: the complex anatomical structure of the caudate lobe, its small growth space and its invasive growth with easy vascular invasion; insufficient tumor margins due to poorly defined lesion boundaries or lesions close to the hepatic portal vein and hepatic veins; the difficulty of mobilizing and exposing the caudate lobe during operation, as excessive turning and pulling of the liver lobes increases the possibility of metastasis of HCC through the portal vein and hepatic veins; and more bleeding in caudate lobe surgery than in non-caudate lobe surgery, since excessive intraoperative bleeding is an independent risk factor for recurrence and metastasis of hepatocellular carcinoma after operation. [14][15][16] On the other hand, compared with other similar studies, this study showed a higher recurrence rate and a lower survival rate.
[13][14] The possible reasons are as follows: the small sample size, which may have led to information bias; all patients included in this study had a history of hepatitis B, whereas some patients in similar studies did not; the tumors of patients with HCC in the caudate lobe in this study were relatively large, with diameters of 1.8-14 cm (mean 5.6 cm), and most involved other hepatic segments, whereas patients in other similar studies had smaller tumors, most of which involved only part or all of the caudate lobe; and intraoperative blood loss and blood transfusion were higher in this study than in others.
CONCLUSION
The resection of the caudate lobe is difficult and risky, yet surgical resection remains the first choice of treatment for HCC located in the caudate lobe of the liver. However, compared with HCC in other parts of the liver, HCC in the caudate lobe has poorer surgical outcomes, which may be related to excessive turning and pulling of the hepatic lobes as well as greater blood loss. It is hoped that, with the development of medical technology and the continuous accumulation of surgical experience, accuracy in dealing with major vascular areas and in isolating liver tissue can be improved to reduce the risk of tumor metastasis. However, as a single-center study, this study included few cases and had no clinical control group, so multi-center studies with larger sample sizes are needed to draw more precise conclusions in the future.
ACKNOWLEDGMENTS
Authors are extremely thankful to the Department of Hepatobiliary Surgery and Department of Intervention for their support.
Evaluation of House Rent Prices and Their Affordability in Port Moresby, Papua New Guinea
Access to affordable housing has been a long-standing issue for households in most cities. This paper reports on a study of house rent prices in Port Moresby, the factors influencing them, and the affordability of the prices. Data were obtained from houses advertised for rent in Port Moresby over a period of 13 months and were analysed using the ordinary least squares (OLS) regression model. The results show that monthly house rent prices range from 2357 to 34,286 Papua New Guinea Kina (PGK), or 714 to 10,389 U.S. dollars (USD), and the median price was 7286 PGK (2208 USD). Houses located in the central business district had the highest median house rent price, whereas low-income areas had the lowest. By dividing the median house rent price by gross household income, the housing affordability index was 3.4. House rent price was influenced by factors such as the number of bedrooms and location. To make house rent prices more affordable for Port Moresby residents, it is necessary to supply more houses for rent relative to demand, especially in low-income areas. Relevant governmental agencies should put more effort toward unlocking more customarily-owned land for housing development and toward facilitating the private sector to construct more low-cost houses for rent that are affordable for low to middle income households. This has the potential of improving Port Moresby residents' access to affordable houses for rent. The findings could assist urban development managers and planners in allocating resources for housing by considering housing demand, supply, and house rent prices.
Introduction
Housing is one of the basic necessities for humans, and it accounts for the largest share of the consumer price index [1]. However, providing houses at affordable rent prices has been a long-standing issue for governments of most countries [2][3][4][5]. For this reason, house rent prices and their affordability have continued to be an important policy issue [6][7][8]. Expenditures associated with house rents have forced some households to reduce their consumption of other necessities and consequently lowered their standard of living [9]. A house is a multidimensional good, which consists of a bundle of attributes that differ in quantity and quality and influence house rent price [10]. These include physical attributes such as number of rooms, lot size, and housing type; community attributes such as population and characteristics of the neighbourhood; and accessibility to the place of work. People's preferences for these attributes differ, and they often influence the amount of money that consumers would be willing to pay for house rent [11]. House rent price is determined by the price at which a house owner is willing to give out a house unit to a potential tenant for rent and the price the tenant is willing to pay, i.e., the equilibrium house rent price [12]. In other words, the house that was advertised for rent by a house owner meets the preferences and demand of a potential tenant. House owners often face a trade-off between the time it takes to rent a house and the price for which it is rented [13]. If a house owner sets too high a house rent price, he or she might discourage potential tenants and risk having the house unit on the rent market for a long period. Conversely, if the house rent price is too low, the house owner may rent out the house quickly, but at a lower price than could have been obtained with better market exposure.
For the house rent market to function properly, house owners need to provide house units that meet the preferences of potential tenants.
Several factors have been identified that could influence house rent price. These include a shortage in the supply of houses for rent relative to demand, which increases house rent price [4]. In addition, low interest rates increase investment in housing and, consequently, increase the supply of houses for rent in a housing market, which pushes down house rent prices [14]. Other factors include house quality and location. For example, houses made of high quality materials and located in the city centre attract higher rent prices than those made of inferior materials and found in low-income areas [15]. Houses in an area that has trunk infrastructure, such as clean potable piped-borne water and electric power, will attract a higher rent price than houses in areas devoid of the infrastructure [15]. An increase in the human population of an area triggers an increased demand for houses for rent relative to supply, which contributes to increased house rent prices [16]. Government housing policy that promotes the supply of houses and regulation of house rent prices stabilizes the prices [9,17].
Shortages in the supply of land push up the costs of building houses and, consequently, push the house rent price up [18,19]. Shortages of skilled personnel, high cost of building materials, and high land allocation costs increase the cost of constructing houses, which push house rent price up [5]. Houses located in areas closer to the city centre attract higher house rent prices than houses further away [20]. Easy access to credit facilities for constructing houses increases the supply of houses relative to demand, which lowers house rent price in an area [21]. In addition, housing rent price is also influenced by industrial production, which increases supply of low-cost houses at a reduced house rent [22]. An increase in the rate of employment of an area might trigger an increase in demand for houses for rent relative to supply, which pushes up house rent prices [23]. House characteristics such as number of toilets, size of the house unit, and availability of balcony influence house rent price [24].
Other factors affecting house rent price are mortgage market features, which could encourage or discourage investment in the housing sector. If the market encourages investment in the housing sector, this will result in an increase in the supply of houses for rent and, consequently, reduce house rent price [25]. Larger rooms and more rooms lead to higher house rent prices [26]. An increase in household income level in an area increases willingness to pay for house rent, which pushes up house rent price [27]. An increase in demand for houses for rent from other countries, such as demand for houses for tourism, could increase house rent price [28]. House rent prices often mimic the trend of the economic cycle of a country. During boom periods (economic growth), house rent prices are expected to rise, whereas they fall during the bust period, i.e., economic decline [29]. These factors are important for designing an effective housing policy and making informed decisions on providing houses that have affordable rent prices in urban areas.
Port Moresby is the capital of Papua New Guinea (PNG), the largest city, and one of the fastest growing cities in the country. In addition, it has the largest housing market in PNG. Many people have continued to migrate from rural areas to Port Moresby in search of jobs and other opportunities to better their lives. As more people make the city their home, the social, economic, and environmental challenges associated with housing need to be addressed. For this reason, there is a need for developing an efficient and operational framework for the house rent market in Port Moresby to achieve sustainable housing and urban development. This study contributes to filling this need. The findings could assist policy-makers and urban development managers in designing a more acceptable sustainable housing strategy that meets societal preferences and demand. The Independent Consumer and Competition Commission (ICCC) has been advocating for the development of a competitive real estate and property market in PNG [30]. However, the development of an informal house rent market has continued to change the landscape of Port Moresby [31]. In an attempt to address the housing problems in major cities of PNG, several initiatives that have the potential to promote affordable house rent prices by supplying more houses were introduced by the PNG government. The initiatives include the Land and Affordable Housing Program, Land Development Program, First Home Ownership Scheme, Duran Farm Housing Project, and Gerehu Stage 3B. Several large scale private property developers such as EDAI Town Housing Development and Glory Group have emerged in and around Port Moresby.
In Port Moresby, the state-owned land, which is often preferred by property developers due to the fact that it is linked to secure tenure and low transaction costs, is almost exhausted. For this reason, property developers are shifting their attention from state-owned land to customarily-owned land. However, it is often difficult to access the customarily-owned land for development, due to insecurity of tenure and high transaction costs associated with it. Furthermore, development projects such as the PNG liquefied natural gas project attracted people from different parts of the country and from abroad to Port Moresby, which increased human population in the city and, consequently, increased housing demand. This resulted in an increase in house rent prices, which most low to middle-income households in the city might not be able to afford. The inability of these groups of households to afford house rent prices will affect their welfare, because they might find it difficult to afford other necessities such as nutritious food and clothing. To cope with this situation, some households have taken steps that have the potential to generate conflict situations and public health problems. For example, there are cases where several households live in the same three-bedroom apartment where they use the same toilet and bathroom. As more people move to Port Moresby, it is necessary to develop an affordable housing strategy that promotes house rent prices that most residents of the city will be able to pay for and at the same time afford other necessities.
The aim of this study was to examine house rent prices in Port Moresby, factors influencing them, and whether residents can afford the prices. Potential urban development strategies for making house rent prices more affordable for residents of the city were explored. It is hoped that findings from this study will contribute to urban development planning and to policy discussions related to developing strategies for making house rent prices more affordable to Port Moresby residents. This has the potential of contributing to housing policy that incorporates the strategy for reducing house rent prices aimed at improving the current housing affordability level in Port Moresby and potentially other major cities in PNG.
The Study Area
As Port Moresby is the capital of PNG and is undergoing an economic boom, the city continues to attract people from different parts of the country and from abroad, who come to the city in search of jobs. This contributes towards increasing the population of Port Moresby. For example, in 2011, the city population was 364,125 people, whereas it was approximately 400,000 people in 2015 [32,33]. Of the land in Port Moresby, 60% belongs to the state and 40% is customarily-owned. Private property developers often prefer investing in state-owned land, because it is associated with secure title and low transaction costs. However, the state land is almost exhausted, and attention has shifted to customary land, which developers are often reluctant to invest in because of insecure tenure, which is often associated with high transaction costs. Various initiatives such as the National Land Development Program (NLDP) have been introduced by the PNG government for promoting access to customary land. The NLDP established processes and systems for accessing customary land and recommended the establishment of an entity whose activities would focus primarily on the administration of customary land. For this reason, the Customary Land Development Office was established by the National Executive Council (NEC) on 16 February 2016 [34]. However, activities of some interest groups who were against the positive development led to the rescinding of the NEC decision, and the office was abolished the same year. This led to an upsurge of an informal house rent market in Port Moresby. The houses are often found in areas where trunk infrastructure such as clean potable piped-borne water, electric power, and sewerage are lacking [35]. Coupled with the problems associated with inaccessibility of secure land for development, most house building materials are imported from abroad, and there are shortages of skilled labour and equipment.
This contributes to the cost of constructing houses and, consequently, the house rent prices [36]. The state also contributes toward the provision of houses for rent through the National Housing Corporation (NHC) and National Housing Estate Limited (NHEL). These state-owned entities are theoretically involved in the development and supply of houses for rent [37]. There are 15 suburbs in Port Moresby: Badili, Boroko, Erima, 8 Mile, 5 Mile, Gerehu, Gordons, Hohola, Korobosea, 9 Mile, Sabama, 6 Mile, Tokarara, Town (also known as Downtown), and Waigani.
For a more detailed description of Port Moresby, see Endekra et al. [38].
Data Collection and Statistical Analysis
The data were obtained from houses that were advertised for rent on the Homes and Property pages of The National newspaper. The advertisements were reviewed and the attributes of the houses and prices were registered on an Excel spreadsheet for a period of 13 months (March 2015 to March 2016). The house attributes that were collected include location of the house, number of bedrooms, and house type. Other attributes included house rent price, date that the advertisement was placed, and the name of the real estate agent that paid for the advertisement. The data were first analysed using simple average, median, and simple percentages, which focused primarily on differences between rent prices in relation to location, house type, and number of bedrooms. Secondly, the ordinary least squares (OLS) regression model was used to examine factors influencing rent price for houses in Port Moresby.
The Econometric Model
One of the Gauss-Markov assumptions is that the error terms in OLS have the same variance, i.e., homoscedasticity [39]. To explore whether the OLS regression model used in this study meets this assumption, the Breusch-Pagan test was conducted [40]. The test statistic was 186.71, and the critical value of chi-squared at 6 degrees of freedom at the 1% statistical significance level was 16.81. The null hypothesis of homoscedasticity was therefore rejected. This indicates that the error terms of the OLS model do not have equal variance, i.e., the model is heteroscedastic. To reduce the heteroscedasticity, the log-linear form of OLS was applied. This involves the transformation of continuous variables in the model to log form. For this reason, the house rent price and number of bedrooms variables were converted to logarithms using the LIMDEP statistical package [41]. For the Breusch-Pagan test on the log-linear form, the test statistic was 69.77, against the same critical value of 16.81 (chi-squared, 6 degrees of freedom, 1% level). This indicates that the log-linear form decreased the heteroscedasticity by 63%. The remaining heteroscedasticity was corrected for using White's heteroscedasticity-consistent variance estimator [40]. To explore multicollinearity in the independent variables, the variance inflation factor (VIF) of the variables was estimated. The VIF of each of the included independent variables did not exceed 2.8. This indicates that multicollinearity is not a serious problem in the estimated model; see Chatterjee and Price [42].
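The two diagnostics described above — the Breusch-Pagan test on the OLS residuals and the VIF check — can be sketched with NumPy alone. This is a minimal illustration on synthetic data (the Port Moresby sample is not reproduced here, and the authors used LIMDEP rather than this code); the statistic computed is the n·R² (Koenker) form of the Breusch-Pagan test, with 2 degrees of freedom because this toy model has only two slope regressors.

```python
import numpy as np

def ols(y, X):
    """OLS coefficients and residuals via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def r_squared(y, resid):
    centered = y - y.mean()
    return 1.0 - (resid @ resid) / (centered @ centered)

def breusch_pagan_lm(resid, X):
    """Koenker form of the Breusch-Pagan statistic:
    n * R^2 from regressing the squared residuals on the regressors."""
    e2 = resid ** 2
    _, aux_resid = ols(e2, X)
    return len(e2) * r_squared(e2, aux_resid)

def vif(X, j):
    """Variance inflation factor: regress column j on the other columns."""
    others = np.delete(X, j, axis=1)
    _, resid = ols(X[:, j], others)
    return 1.0 / (1.0 - r_squared(X[:, j], resid))

# Synthetic stand-in data (hypothetical, not the Port Moresby sample):
# the error spread grows with the number of bedrooms, so the model is
# heteroscedastic by construction and the test should reject.
rng = np.random.default_rng(42)
n = 500
bedrooms = rng.integers(1, 6, n).astype(float)
cbd = rng.integers(0, 2, n).astype(float)
log_rent = 7.0 + 0.7 * np.log(bedrooms) + 0.75 * cbd + rng.normal(0.0, 0.1 * bedrooms)

X = np.column_stack([np.ones(n), np.log(bedrooms), cbd])
_, resid = ols(log_rent, X)
lm = breusch_pagan_lm(resid, X)
print(f"Breusch-Pagan LM = {lm:.1f} (chi-squared with 2 df; 1% critical value 9.21)")
print("VIFs:", [round(vif(X, j), 2) for j in (1, 2)])
```

Because the regressors are drawn independently here, the VIFs come out close to 1; collinear regressors would push them well above that, with the conventional concern threshold around 5-10.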
The valuation function for OLS was estimated as in Equation (1):

ln(RentP) = α + β1·TYPEh + β2·ln(BedR) + β3·CBD + β4·INCOm + β5·ENsub + β6·INCOl + ε (1)

where α is a constant, β1-β6 are parameters to be estimated, RentP is house rent price, TYPEh is house type, BedR is number of bedrooms, CBD is the central business district, INCOm is a historically middle-income area, ENsub denotes the 8 Mile and 9 Mile suburbs, INCOl is a historically low-income area, and ε is the error term, which is independent and identically distributed [40].
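A log-linear specification of this kind can be estimated with ordinary least squares and White's heteroscedasticity-consistent (HC0) standard errors as sketched below. The data are synthetic, the "true" coefficients are illustrative values chosen only to echo the signs the paper reports (positive for bedrooms, the CBD, and middle-income areas; negative for 8/9 Mile), and for brevity the sketch uses a reduced, mutually exclusive set of location dummies rather than the paper's full set.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 591  # same size as the paper's usable sample; the data here are synthetic

# Hypothetical regressors mirroring the valuation function: a house-type
# dummy, log bedrooms, and mutually exclusive location dummies (a reduced
# set, so the dummies cannot be collinear with the constant).
house_type = rng.integers(0, 2, n).astype(float)   # 1 = standalone house
bedrooms = rng.integers(1, 6, n).astype(float)
loc = rng.integers(0, 4, n)                        # 0 = baseline (other areas)
cbd = (loc == 1).astype(float)
inco_m = (loc == 2).astype(float)
en_sub = (loc == 3).astype(float)

# Illustrative "true" coefficients used to generate the synthetic outcome.
X = np.column_stack([np.ones(n), house_type, np.log(bedrooms), cbd, inco_m, en_sub])
true_beta = np.array([7.0, 0.02, 0.72, 0.75, 0.20, -0.24])
log_rent = X @ true_beta + rng.normal(0.0, 0.15, n)

# OLS fit of the log-linear model.
beta, *_ = np.linalg.lstsq(X, log_rent, rcond=None)
resid = log_rent - X @ beta

# White's heteroscedasticity-consistent (HC0) covariance:
# (X'X)^-1 X' diag(e^2) X (X'X)^-1.
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (resid ** 2)[:, None])
robust_se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

for name, b, se in zip(["const", "TYPE_h", "ln(BedR)", "CBD", "INCO_m", "EN_sub"],
                       beta, robust_se):
    print(f"{name:9s} {b:7.3f}  (robust SE {se:.3f})")
```

With nearly 600 observations, the estimated coefficients land close to the values used to generate the data, which is the sanity check one would want before interpreting a fitted hedonic model.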
Results
Of the 615 houses that were advertised for rent, approximately 96% (591) were useable for analysis. Four percent of the advertised houses were excluded because the houses were either advertised jointly or prices were not included in the advertisement. For this reason, the analyses were based on 591 observations. Of the useable observations, 77% were apartments and 23% were standalone houses. The results show that Boroko had the highest share of apartments (24%) for rent, whereas 9 Mile, Sabama, and 6 Mile had the lowest (0.2% each). For standalone houses, Waigani had the highest (15%) and Erima the lowest (1.4%) availability; see Figure 1.
The results show that weekly house rent prices range from 550 to 8000 Papua New Guinea Kina (PGK) (Table 1). The median weekly rent price for all advertised houses was 1700 PGK. Town had the highest median house rent price for apartments and standalone houses (3500 PGK; 5000 PGK), whereas 6 Mile had the lowest (800 PGK; 950 PGK). House rent prices for 3-bedroom houses ranged from 650 to 7000 PGK. Town had the highest median weekly price for a 3-bedroom house (3500 PGK), whereas 6 Mile had the lowest (950 PGK). Badili, Erima, and 9 Mile were excluded from this analysis because they had only a few observations, which might not be a true representative of house rent prices of the suburbs.
The results show that all the houses that were advertised for rent had an average of three rooms, and 16% of the houses were found in the Town suburb (Table 2), which is also the central business district (CBD). The historically low-income areas had more houses for rent than did the historically middle-income areas. The OLS regression model was estimated to account for factors that might have influenced house rent price (Table 3). The results show that the coefficients associated with number of bedrooms, the CBD, and historically middle-income areas had positive and statistically significant effects. The coefficients associated with the 8 Mile and 9 Mile suburbs and historically low-income areas had negative and statistically significant effects. The coefficient associated with house type had no statistically significant effect. In terms of elasticity, an increase in the number of bedrooms by one percent was associated with an increase in house rent price by 0.72%. The presence of the CBD was associated with an increase in house rent price by 112%. The presence of a historically middle-income area was associated with an increase in rent price by 22%. The presence of 8 Mile and 9 Mile and the presence of a historically low-income area were associated with decreases in house rent price by 21% and 36%, respectively.
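A note on interpreting these magnitudes: in a log-linear model, the percentage effect of a dummy variable is 100·(exp(β) − 1), not 100·β, while the coefficient on a logged regressor such as bedrooms is read directly as an elasticity. The quoted effects (+112% for the CBD, +22% for middle-income areas, −21% for 8/9 Mile, −36% for low-income areas) therefore imply the coefficients back-calculated below; this is an illustration of the transformation, not a reproduction of the published Table 3 values.

```python
import math

# Percentage effects quoted in the text, as fractions.
reported_effects = {"CBD": 1.12, "middle-income": 0.22,
                    "8/9 Mile": -0.21, "low-income": -0.36}

for name, pct in reported_effects.items():
    beta = math.log(1.0 + pct)                 # implied dummy coefficient
    effect = (math.exp(beta) - 1.0) * 100.0    # back to a percentage change
    print(f"{name:14s} implied beta = {beta:+.3f} -> {effect:+.0f}% change in rent")

# The continuous log regressor needs no transformation: a 1% increase in
# bedrooms is associated with roughly a 0.72% increase in rent.
```

The asymmetry is worth noting: a coefficient of +0.75 implies a 112% increase, while −0.75 would imply only a 53% decrease, so reading large dummy coefficients as raw percentages can be badly misleading.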
The gross monthly salary for public service workers ranged from 691 to 12,328 PGK (209 to 3736 USD), with a median of 2115 PGK (641 USD); see Table 4. The lowest weekly house rent price was 550 PGK, which corresponds to 2357 PGK monthly. This shows that public service workers who belong to the median salary scale (pay scale (PS) 10 and PS 11) cannot afford even the lowest house rent price in Port Moresby. Using the median multiple indicator, which rates house rent price affordability by dividing the median house rent price by gross household income, the house rent price affordability index for public service workers in Port Moresby is 3.4. Using 30% of household income as a measure of house rent price affordability shows that only workers in PS categories 18 to 20 can afford the lowest house rent price in Port Moresby. Notes: PS is pay scale. The annual salary is the median salary point within each scale, and it includes base salary and allowances. Income tax is excluded. Salary is adopted from [43]. 1 USD = 3.3 PGK.
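These affordability figures can be reproduced from the weekly prices and the salary table. A minimal sketch, assuming a 30/7 weeks-per-month conversion (an inference from the paper's own numbers, since 550 PGK per week corresponds to the stated 2357 PGK per month only under that convention):

```python
WEEKS_PER_MONTH = 30 / 7  # implied by the paper's figures (550/week <-> 2357/month)

median_weekly_rent = 1700     # PGK, all advertised houses (Table 1)
lowest_weekly_rent = 550      # PGK, cheapest listing
median_monthly_salary = 2115  # PGK, public service median (Table 4)

median_monthly_rent = median_weekly_rent * WEEKS_PER_MONTH
lowest_monthly_rent = lowest_weekly_rent * WEEKS_PER_MONTH

# Median multiple: median house rent divided by gross household income.
median_multiple = median_monthly_rent / median_monthly_salary

# 30%-of-income rule: salary needed to afford even the cheapest listing.
income_needed = lowest_monthly_rent / 0.30

print(f"median monthly rent     : {median_monthly_rent:7.0f} PGK")
print(f"lowest monthly rent     : {lowest_monthly_rent:7.0f} PGK")
print(f"median multiple         : {median_multiple:7.1f}")
print(f"income needed (30% rule): {income_needed:7.0f} PGK/month")
```

The 30% rule puts the income needed for the cheapest listing at roughly 7857 PGK per month, which is consistent with the text's claim that only workers in the top pay scales (PS 18 to 20) can afford even the lowest house rent price.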
Discussion and Policy Lessons
The findings from this study show that house rent prices in Port Moresby are strongly linked to location. A segment of the city that is historically a high-income area was linked to the highest house rent price, whereas a historically low-income area had the lowest rent price. This suggests that the history of a neighbourhood influences the rent prices of houses found there. The findings are supported by previously published papers such as Leung et al. [44], who found in a general equilibrium model that house price, and consequently house rent price and affordability, are determined by location and accessibility to trunk infrastructure. In a Malaysian study of developers' perspectives on housing price, Kamal et al. [16] reported that location is a key factor in determining house rent price. This suggests the importance of considering location in urban development planning, especially in allocating land for residential areas and in providing trunk infrastructure and services. If the aim is to make house rent prices in Port Moresby affordable for most residents, it is important for urban development managers and planners to consider the demand for and supply of houses in various segments of the city and their rent prices. It is also necessary to consider the preferences of Port Moresby residents in the design of an affordable house rent strategy. For example, more houses for rent could be built in areas further from the CBD. However, it is important to note that some residents of those areas travel to and from the CBD for work and other business, which adds to the overall cost of renting a house there. For this reason, it is necessary for urban development managers to consider providing the necessary infrastructure and services in all built areas to reduce travel costs.
In addition, supplying low-cost houses for rent only in areas located long distances from the CBD could discriminate against low-income households, because most of those households could only afford house rents in those areas. This suggests the importance of providing pockets of low-cost houses for rent in historically middle-income areas and the CBD, as well as pockets of executive houses for rent in historically low-income areas. Such a mix would reflect the heterogeneity of Port Moresby residents.
More houses were advertised for rent in historically middle to high-income areas than in low-income areas. Property developers are often motivated to build and rent out houses in areas that have the necessary trunk infrastructure and services, because it often costs less than building houses for rent in areas devoid of infrastructure. In Port Moresby, historically middle-income areas and the CBD are supplied with the necessary infrastructure and services and have fewer security concerns. This is due to the fact that the areas are where top government officials and expatriates live. For this reason, developers prefer to build houses for rent there. In addition, Waigani, which used to be a low-income area, is becoming more important in the supply of houses for rent. This could be due to the fact that some governmental and non-governmental offices, as well as shopping malls and hotels, are springing up there, which contributes towards increased house rent prices. As people often prefer living in locations closer to their workplace and markets, this could be a possible reason that more houses are being built in Waigani in response to demand. The findings are in line with those of Hanushek et al. [45], who found in a general equilibrium model of workplace and residential choice that the welfare of households could fall when they are restricted in their choice of locations to live in relation to workplaces. The findings are also supported by Nao and Ezebilo [46], who found that it is necessary to introduce trunk infrastructure before building houses to avoid loss of money in terms of house rent or mortgage. The findings indicate that the availability of trunk infrastructure in an area has the potential of increasing house rent prices for houses found there.
The results reveal that most Port Moresby residents might find it difficult to afford rent prices for houses found in the city. Only top level public service workers have the financial capacity for house rent prices. As low and middle level workers' salaries are not enough to pay house rent prices, this indicates that most public service workers cannot afford house rent prices in Port Moresby. Furthermore, the median multiple indicator revealed that house rent prices in Port Moresby are seriously unaffordable for public service workers. For this reason, it is common to see different households sharing a housing unit and the facilities in the house, which is often made for one household. This often has adverse effects on the welfare of the households. For example, school aged children might find it difficult to study at home, and households' privacy is compromised. To make house rent prices affordable for public service workers and other Port Moresby residents in general, it is necessary for governmental agencies such as the NHC to facilitate the private sector in constructing more low-cost houses for rent, especially in low-income areas. Housing voucher schemes could also be introduced to assist low-income households to access affordable housing; see Leung et al. [44]. This entails the issuance of a household voucher by a governmental agency; the household finds a suitable house for rent, and the landlord is paid a subsidy directly by the agency. The household pays the difference between the actual house rent price and the subsidy. As low-income households are involved in choosing houses for rent that meet the requirements of the housing voucher scheme, they would not be limited to renting only houses located in low-income areas. This suggests that the scheme has the potential of meeting households' preferences.
In addition, houses for rent earmarked for the housing voucher scheme must meet minimum quality standards, which should be determined by governmental agencies such as the NHC. This could serve as a potential strategy to see that low-income households have access to quality and houses at affordable rent price. As the housing subsidy is paid directly to the landlord by the governmental agency, it has the potential of reducing the tendency for a household to use the house rent subsidy for other purposes.
The findings show that an increase in the number of bedrooms is strongly linked to an increase in house rent price. This is supported by previously published papers such as Salim [27], who found in a Turkish study that an increase in the number of rooms increases housing price and consequently house rent price. A possible reason is that, as the number of bedrooms increases, the floor size is likely to increase, which potentially increases the house rent price. An increase in the number of bedrooms also means that more people who live in the house can have their own privacy, which adds value to the house. This suggests that in accounting for house rent price, planners and urban development managers should consider the number of bedrooms, especially in the course of developing low-cost houses for rent with the aim of increasing the housing affordability level among Port Moresby residents.
Central business district (CBD) is often an area associated with commercial activities, workplaces, and availability of trunk infrastructure and services, which often increases demand for houses for rent there. For this reason, CBD is likely to attract a higher house rent price compared to other segments of the city. Findings from this study support this premise, which is also in line with findings from previously published papers in the literature such as Mulliner et al. [15] and Kamal et al. [16]. The authors of these papers found that location and access to services influence house price and consequently house rent prices. It is important to note that commercial activities, which are often found in the CBD, have the potential to contribute towards pushing up house rent prices. For this reason, if the aim is to promote affordable house rent prices for Port Moresby residents, urban development managers and land use planners could focus on providing commercial houses such as shops and workplaces in the CBD and residential houses for rent in other segments of the city. It is necessary to consider decongesting the CBD by moving some workplaces and businesses from the CBD to other segments of Port Moresby. This has the potential of developing underdeveloped segments of the city, redistributing traffic in the city, as well as reducing house rent prices in the CBD.
The historically middle-income areas of Port Moresby, such as Boroko and Korobosea, have been residential areas where top government officials and expatriates live, which may be one reason they are linked to increased house rent prices. This is because these areas are well supplied with all necessary trunk infrastructure and services. For example, there are a lot of schools in Boroko, and the general post office and police station are also found there. Their proximity to the Port Moresby international airport and the CBD, together with fewer security concerns, makes Boroko and Korobosea areas of choice for most city residents, which might have influenced the house rent prices. It is also important to note that the Port Moresby general hospital and a stadium are found in Korobosea and Boroko, and those areas have good road networks. These could be the possible reasons that the findings from this study revealed that the presence of middle-income areas increases house rent price. It is necessary to consider expanding the middle-income areas toward the nearby historically low-income areas, which could promote the development of less developed areas of the city, as well as contribute to reducing house rent prices in the middle-income areas.
Houses located in areas that are long distances from the CBD and have limited trunk infrastructure would likely attract a lower house rent price because only a few people might prefer to live there. The findings from this study support this assertion. This might be a possible reason that the presence of historically low-income areas was strongly linked to a decrease in house rent price. The presence of the 8 Mile and 9 Mile suburbs also follows a similar trend to that of low-income areas. Most of these areas lack trunk infrastructure and services. Security concerns are often higher in 8 Mile and 9 Mile compared to middle-income areas and the CBD, which might be a possible reason for the lower house rent prices in these areas. To provide houses associated with affordable rent prices for Port Moresby residents, more houses could be built in low-income areas, such as the 8 Mile and 9 Mile suburbs. This should be accompanied with the provision of necessary trunk infrastructure and services in these areas. It is also necessary to address the security concerns in these areas. The construction of more houses in historically low-income areas, and emerging areas such as 8 Mile and 9 Mile, has the potential of lowering house rent prices in these areas. However, it is important to note that residents of low-income areas might incur costs associated with travelling to and from the CBD for work and other business activities, which increases the costs associated with renting a house. This suggests the need for urban development managers and planners to consider location and accessibility to trunk infrastructure and services in decisions regarding the supply of houses that meet affordable house rent prices.
With regard to the methodology, it is important to note that the data used for this study were obtained from newspapers. This indicates that houses that were not advertised in the newspapers were not captured in the analysis. For this reason, the monetary value reported in this study might not reflect the monetary value of all transactions related to house rentals that took place in Port Moresby during the period of the study. As in most developing countries, the housing market in PNG, and in Port Moresby in particular, is not well organised. There is a huge volume of informal house rental transactions in the city for which it is difficult to account [31]. In addition, other sources from which house rent price data could be accessed, such as real estate agencies, often find it difficult to provide such data. For this reason, newspapers were used as the data source. Numerous factors such as lot size, house age, maintenance history, and distance to the city centre have been identified as influencing house rent price. However, these factors were not included in the newspaper advertisements and could not be explored in this study.
Regarding information on houses that were advertised for rent by real estate agents, it is necessary for the agents to improve the information related to the properties being advertised. Currently, information asymmetry exists for most of the houses advertised for rent. For example, information regarding floor space size, size of rooms, maintenance history of the house, and age of the house was not often included in the advertisements, which makes it difficult for potential tenants to know whether the houses being offered to them for rent match their preferences. To move the house rent market in Port Moresby forward, all relevant information related to the houses being advertised for rent must be included in the advertisement. Correct information on houses offered to potential tenants for rent will contribute towards providing proper valuation for the houses.
If the aim of the government is to provide houses for rent at affordable levels for Port Moresby residents, it is necessary to facilitate the private sector in supplying more houses for rent in low-income areas such as Gerehu, Hohola, and Tokarara, as well as in areas located long distances from the CBD such as 8 Mile and 9 Mile as advocated by Ezebilo [47]. However, the supply of more houses for rent to these areas must be accompanied with the introduction of necessary trunk infrastructure and services. It is also necessary to increase the capacity of electricity, water supply, and sewerage in the areas to meet demand. Incentives and facilities that could attract more investments to areas such as 8 Mile and 9 Mile could be introduced so that residents could be more comfortable living there. This has the potential of decongesting areas in and around the CBD, which could contribute towards reducing house rent prices there. The supply of more houses for rent to low-income areas and in areas such as 8 Mile and 9 Mile has the potential of providing houses for rent at affordable prices, and more jobs might be created for the teeming population.
Policy Lessons for PNG Government
There are several housing policy lessons that could be drawn from the findings of this study that could be useful for urban development managers and land use planners in making informed decisions that could promote access to houses with affordable rent prices in major cities of PNG, including Port Moresby. The policy lessons are:
•
Low- to middle-income public service workers cannot afford house rent prices in Port Moresby. House rent prices in Port Moresby are beyond the reach of low-income and middle-income workers, who appear to constitute the greater percentage of workers in the city. This might have adverse effects on the welfare of the workers' households. To increase the house rent price affordability level for these household groups, governmental agencies such as the NHC could facilitate the private sector in constructing more low-cost houses for rent in historically low-income areas, as well as in the 8 Mile and 9 Mile suburbs. The private sector could be motivated to supply low-cost houses for rent by providing tax credits.
•
Land is one of the most important factors of production that contributes to the cost of constructing a house and consequently to house rent prices. Access to secure land has been a long-standing issue that limits the housing industry in PNG. Attention has shifted from state-owned land, often cherished by property developers, to customarily-owned land, because the state land is almost exhausted [48]. It is necessary for the state to put more effort into developing an effective strategy for unlocking more secure customary land to supplement the remaining state-owned land. In the short term, it is necessary to promote the construction of high-rise multi-family house units for rent at affordable prices. This will help maximise the use of the available land resources. In the long term, access to secure customary land could be improved by invoking Section 10 of the PNG Land Act 1996, as advocated by Dr Charles Yala, the former Director of the PNG National Research Institute. This involves landowners leasing their land to the state through an urban development lease (UDL) processed by the Department of Lands and Physical Planning. The UDL (land title) is issued to the landowners without advertising it. This has the potential of releasing more land for constructing more houses for rent. • Real estate agents do not often disclose some important information required by potential tenants for proper valuation of houses that are advertised for rent. To protect the interests of tenants, it is necessary for governmental agencies such as the ICCC to develop guidelines for advertising houses for rent. The advertisement of houses for rent must be monitored by the agency to ensure that real estate agents adhere to the guidelines. It is also necessary for the NHC to monitor the quality of houses offered for rent.
•
Construction of more houses for rent in low-income areas should be accompanied with the establishment of trunk infrastructure there so that people could be attracted to move to those areas.
The state attempted to provide more houses, and consequently houses for rent, at 8 Mile through the Duran Farm Housing project, as well as in Gerehu through the Stage 3B housing project, but these have not been successful [47]. Basic trunk infrastructure was not introduced to the areas, which makes it difficult for completed houses to be occupied, as reported by Nao and Ezebilo [45]. For housing projects providing houses for rent to be more effective in PNG, governmental agencies such as the NHC must focus on playing a facilitating role, whereas the private sector must focus on building houses for rent, as advocated by Webster et al. [37]. In addition, Ezebilo and Hamago [36] found that private property developers have the potential of financing the development of trunk infrastructure. However, the state will need to compensate the developers through schemes such as tax credits, which should contribute to reducing house rent prices in urban areas. • Areas in and around the CBD are becoming congested, and this has led to high house rent prices due to an increase in demand for houses there relative to supply. For this reason, there is a need to renew the areas by providing more houses for rent on the fringes of Port Moresby. This might be used for decongesting the city, which has the potential of reducing house rent prices in the CBD.
Conclusions
This study provides insight into house rent prices in Port Moresby, the factors influencing them, and the affordability of the prices. The findings revealed that house rent price is strongly linked to the number of bedrooms and the location of a house. Houses located in areas where basic trunk infrastructure and services are lacking attract the lowest rent prices, whereas houses located in and around the CBD attract the highest. Most public service workers cannot afford house rent prices in Port Moresby.
Some policy-related lessons that could be drawn from the findings of this study include the need to provide more low-cost houses for rent in historically low-income areas and in areas located long distances from the CBD, which low- to middle-income households could afford. More high-rise multi-family low-cost house units for rent should be constructed to maximise the use of the available land, similar to the model Singapore has used for its limited land. There is a need to unlock more secure customary land for housing development to lower the cost of constructing houses, which could provide opportunities for people to construct more houses for rent and push house rent prices down to a level affordable for low-income households. There is a need for more transparency regarding the information on houses advertised for rent. It is necessary to provide more information, which could give potential tenants a better understanding of the valuation of the houses advertised for rent. To determine the necessary information required for advertising houses for rent, PNG could draw lessons from Sweden, where most information on the characteristics of houses advertised for rent is often supplied.
If the aim is to increase the ability of Port Moresby residents to afford house rent prices, there is a need to build more houses for rent in low-income areas. It is also important to reduce transport costs between these areas and the CBD, where some residents of low-income areas work, by improving road networks. This could encourage people to live in low-income areas at a lower house rent price while working in and around the CBD. The findings from this study contribute to potential strategies for increasing households' ability to afford house rent prices and highlight the need to disclose necessary information related to houses advertised for rent. This should assist urban development managers and land use planners in making informed decisions concerning house rent prices in Port Moresby.
Measurement of four-photon absorption in GaP and ZnTe semiconductors
Intensity-dependent effective four-photon absorption (4PA) coefficients in GaP and ZnTe semiconductors were measured by the z-scan method using pump pulses of 1.75 µm wavelength, 135 fs duration, and up to 500 GWcm⁻² intensity. A nonlinear pulse propagation model, including linear dispersion and 4PA, was used to obtain the 4PA coefficients from the measurements. The intensity-dependent effective 4PA coefficients vary from 2.6 × 10⁻⁴ to 65 × 10⁻⁴ cm⁵GW⁻³ in GaP, and from 3.5 × 10⁻⁴ to 9.1 × 10⁻⁴ cm⁵GW⁻³ in ZnTe. The anisotropy of 4PA was shown in GaP. The knowledge of 4PA coefficients is important for the design of semiconductor photonics devices. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Introduction
Semiconductor materials have played an important role in many photonics-based technologies for decades. Novel high-power ultrashort-pulse laser and parametric sources, operating at infrared wavelengths, have recently enabled new application areas in nonlinear optics [1,2]. For example, extremely efficient terahertz (THz) pulse generation has been investigated by optical rectification of infrared pump pulses in semiconductors [3-9]. THz-pulse-driven high-harmonic generation, recently demonstrated in the GaSe semiconductor [10], may enable the exploration and exploitation of electron wavefunctions and the band structure by all-optical methods, and may even lead to the construction of intense CEP-stable solid-state attosecond sources. To design and optimize setups and devices utilizing intense optical driving of semiconductors, the knowledge of their nonlinear optical parameters, such as the nonlinear refractive index or the multiphoton absorption coefficients, is very important.
Theoretical scaling laws for multiphoton absorption coefficients have been given for direct-bandgap semiconductors based on a two-band model [11]. An early summary of theoretical models and experimental values of two- (2PA) and three-photon absorption (3PA) coefficients for a few selected materials can be found in Nathan et al. [12]. 2PA and 3PA of semiconductors have been extensively studied experimentally in the past. The anisotropy of 2PA in GaAs [13,14] and CdTe [13] has been studied with nanosecond and 30-ps pulses and found to reflect the anisotropy of the band structure because of large 2PA-generated free-carrier absorption. The most commonly used method to measure 2PA and 3PA coefficients is the z-scan technique [15]. This has been applied to measure the tensor properties (anisotropy) [16], by using the three nonlinear eigenpolarizations [17], and the dispersion of third-order nonlinearities (2PA) [16,18] in various semiconductors. The dispersion and the anisotropy of 2PA and 3PA in GaAs have been measured in the 1.3-2.5 µm wavelength range with 100-fs pulses [19]. 3PA spectra have been calculated using a four-band model and compared to measurements in direct-bandgap semiconductors [20,21]. A maximum in the 3PA coefficient has been found both in theory and experiment near the 3PA cut-off wavelength.
In contrast, little is known about four-photon absorption (4PA) and nonlinearities of even higher order in semiconductors and other important optical materials. Recently, multi-photon absorption in GeSbS chalcogenide glass up to the 11th order has been reported for wavelengths between 1.1 µm and 5.5 µm [22], and values on the order of 10⁻⁴ cm⁵GW⁻³ have been found for the 4PA coefficient. The knowledge of 4PA and higher-order nonlinearities can be crucial for applications driven by infrared pulses. For example, in the newly developed, very promising semiconductor THz generators, 4PA can be a major design issue [9,23]. The efficiency of THz generation in ZnTe could be increased by two orders of magnitude, from 3.1 × 10⁻⁵ [24] to as high as 0.7% [7,8]. The reason for this enormous increase was the elimination of both 2PA and 3PA by a sufficiently long pump wavelength [3,4,6-8,25,26], thereby eliminating the associated free-carrier absorption in the THz range. However, 4PA can still be a limiting factor for THz generation in this case. A first attempt to estimate the 4PA coefficient in ZnTe has been made based on THz generation results [9], but this approach is very indirect and therefore subject to large uncertainties. GaP is another semiconductor nonlinear material of high interest for efficient THz generation [9,23], but no experimental data on its 4PA coefficient are available. Thus, there is a clearly perceived lack of knowledge of important material data.
In this paper, we report on the measurement of the intensity-dependent 4PA coefficient in ZnTe and GaP semiconductors. The anisotropy of the 4PA coefficient is also investigated. For many potential applications of optically driven semiconductors and practical device design, the knowledge of intensity-dependent 4PA coefficients can be indispensable.
Experimental setup
Nonlinear transmission measurements with fs pulses have been carried out by using the z-scan technique, which is a widely used method for measuring nonlinear absorption coefficients. The experimental setup is shown in Fig. 1. Pump pulses of λ₀ = 1.75 µm central wavelength were delivered at 1 kHz repetition rate by a tuneable optical parametric amplifier (OPA) (Light Conversion, HE TOPAS), driven by a Ti:sapphire laser system (Coherent, Legend Duo). The full width at half maximum of the nearly Gaussian spectral intensity distribution of the pump pulses was about 51 nm. A set of dichroic long-pass filters has been used to suppress possible parasitic spectral components below about 1.65 µm. A pulse duration of 135 fs was measured by autocorrelation.
A spatial filter was used to improve the pump beam quality to a nearly Gaussian intensity profile and cylindrical symmetry. The spatial filter consisted of a pair of 400-mm focal-length lenses in a confocal arrangement, and a circular pinhole of 100 µm diameter, placed inside a vacuum chamber with two uncoated windows. A lens with 500 mm focal length was used to focus the beam for the z-scan measurement. The horizontal and vertical beam profiles have been carefully measured by the knife-edge technique at several positions along the propagation direction of the focused beam. The absence of astigmatism has been verified in test z-scan measurements as well. The waist radius of the focused pump beam was w₀ = 39 µm (at 1/e² of the peak intensity). The Rayleigh range was 2z_R = 5.5 mm, significantly larger than the crystal length L. The sample crystal was mounted on a motorized linear stage to move it along the beam propagation direction (z-axis) around the focus. A large-area Ge photodiode of 19.6 mm² sensitive area (Thorlabs DET50B/M) was used to measure the power transmitted through the sample. A lens with 100 mm focal length and 25 mm diameter, placed between the sample and the photodiode, was used to avoid closed-aperture effects in the z-scan measurements. The laser power transmitted through the sample crystal has been measured as function of the crystal's z-position. Z-scan measurements with GaP and ZnTe crystals have been carried out at various pump energy levels. A step-variable neutral density filter (Thorlabs NDC-100S-4M) was used to attenuate the beam and regulate the pump intensity. In a second measurement series, in order to explore anisotropy effects, the electric-field polarization direction of the linearly polarized pump pulses has been rotated in the plane of the GaP crystal, by rotating the crystal about the beam axis.
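The quoted focusing geometry can be cross-checked with standard Gaussian-beam relations. A minimal sketch follows; the pulse-energy figure at the end is our own illustration (the text does not quote pulse energies), computed under the assumption of Gaussian profiles in space and time:

```python
import math

# Consistency check of the quoted z-scan beam geometry
# (w0 = 39 µm at 1/e^2, λ0 = 1.75 µm, τ = 135 fs FWHM).
lam, w0, tau = 1.75e-6, 39e-6, 135e-15

# Rayleigh range of a Gaussian beam in air: z_R = pi * w0^2 / lambda.
z_R = math.pi * w0**2 / lam
confocal = 2 * z_R                       # should reproduce the quoted 5.5 mm

# Pulse energy needed to reach a given on-axis peak intensity at the waist,
# assuming Gaussian profiles in space and time:
# I0 = 2 * P_peak / (pi * w0^2), with P_peak ~ 0.94 * E_p / tau.
def pulse_energy(I0_W_per_m2):
    P_peak = I0_W_per_m2 * math.pi * w0**2 / 2.0
    return P_peak * tau / 0.94           # [J]

E_p = pulse_energy(471e9 * 1e4)          # 471 GW/cm^2 converted to W/m^2
print(f"2*z_R = {confocal * 1e3:.2f} mm, E_p = {E_p * 1e6:.2f} uJ")
```

Running this reproduces the quoted 5.5 mm confocal parameter and suggests that the highest intensities used correspond to pulse energies on the µJ scale.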
The [110]-oriented GaP and ZnTe crystals of 10 mm × 10 mm size and L = 1 mm thickness have been measured at room temperature. The GaP crystals were purchased from two different manufacturers (Pi-Kem and Moltech); the ZnTe was from Moltech. Both GaP and ZnTe crystallize in the zincblende structure (point group 4̄3m). The (direct) bandgap energies (E_g) of GaP and ZnTe are 2.79 eV and 2.26 eV [27,28], respectively. In GaP, the smaller indirect bandgap of 2.27 eV has been found to play a less important role in multiphoton absorption induced by fs pump pulses [12,28,29], due to the smaller probability of the indirect transition compared to the direct one. The cut-off wavelength for 3PA, 3hc/E_g, is 1.33 µm for GaP and 1.65 µm for ZnTe. Here, h is the Planck constant and c is the speed of light in vacuum. At pump wavelengths longer than the cut-off, interband linear absorption, 2PA, and 3PA are not effective in the respective material, but 4PA and higher-order multiphoton absorption can still be present. Our choice of 1.75 µm for the pump central wavelength ensured that the entire spectrum was located above the 3PA cut-off wavelength for both materials (for example, the spectral intensity at the 3PA cut-off in ZnTe was less than 2% of the peak intensity). We note that the 4PA cut-off wavelength, 4hc/E_g, is 1.78 µm for GaP and 2.19 µm for ZnTe. In the case of GaP, the long-wavelength wing of the pump spectrum, containing about 10% of the pulse energy, was located beyond the 4PA cut-off.
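The quoted cut-off wavelengths follow directly from the band gaps via the N-photon condition N·hc/E_g; a minimal sketch reproducing them:

```python
# Multiphoton cut-off wavelengths N*h*c/E_g for the band gaps quoted in the text.
HC_EV_NM = 1239.842                         # h*c in eV*nm

def cutoff_um(n_photons, Eg_eV):
    """Longest wavelength at which N-photon interband absorption is possible."""
    return n_photons * HC_EV_NM / Eg_eV / 1000.0   # [um]

for name, Eg in [("GaP", 2.79), ("ZnTe", 2.26)]:
    print(f"{name}: 3PA cut-off {cutoff_um(3, Eg):.2f} um, "
          f"4PA cut-off {cutoff_um(4, Eg):.2f} um")
```

This reproduces the values in the text: 1.33/1.78 µm for GaP and 1.65/2.19 µm for ZnTe.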
Theoretical model
A pulse propagation model in the slowly-varying envelope approximation was used, which included the effects of linear dispersion and 4PA inside the crystal. Diffraction and self-focusing effects have been neglected inside the crystal because relatively thin samples were used in the experiment. In the calculation, we have assumed, both in time and space, a Gaussian-shaped pump pulse incident onto the crystal. The intensity of the pulse just after entering the crystal was given by

I(ρ, t, z = 0) = I₀ [w₀/w(z_c)]² exp[−2ρ²/w²(z_c)] exp[−4 ln 2 · t²/τ²]. (1)

Here, t is the time, z (0 ≤ z ≤ L) is the coordinate along the pulse propagation direction inside the crystal, ρ is the radial coordinate, and z_c is the coordinate of the input surface of the crystal in the z-scan measurement, with z_c = 0 corresponding to the focus. The radius at 1/e² of the maximum intensity of the Gaussian beam is w(z_c), the waist radius is w₀, and the full-width-at-half-maximum pulse duration is τ. I₀ is the peak intensity (maximum in space and time) inside the crystal, taking into account Fresnel losses. The complex electric field envelope A inside the crystal is related to the intensity as

I = (1/2) ε₀ c n₀ |A|², (2)

where ε₀ is the vacuum permittivity and n₀ is the refractive index of the crystal at the central frequency of the pulse. The variation of the intensity due to 4PA is given by the following equation:

∂I/∂z = −β₄ I⁴, (3)

where β₄ is the 4PA coefficient. Therefore, the effect of 4PA on the field amplitude is given by

∂A/∂z = −(1/2) β₄ I³ A. (4)

By using a split-step Fourier method with ∆z step size, linear dispersion was accounted for in the spectral domain in the moving frame of the pulse according to the following equation:

A(z + ∆z, t) = F⁻¹{ F[A(z, t)] · exp[i (k(ω) − ω n_g(ω₀)/c) ∆z] }, (5)

where k(ω) is the wavenumber in the crystal. Here, F (F⁻¹) denotes the (inverse) Fourier transform, ω is the angular frequency, and n_g(ω₀) is the group refractive index of the crystal at the ω₀ = 2πc/λ₀ central frequency of the pulse.
Equations (4) and (5) were numerically solved for different radial coordinates ρ using a split-step Fourier method, similar to that in Yin et al. [30]. The z-scan curve, i.e. the normalized transmission T through the sample crystal as function of the crystal position z_c, can be obtained by

T(z_c) = [∫∫ I(ρ, t, z = L) 2πρ dρ dt] / [∫∫ I(ρ, t, z = 0) 2πρ dρ dt]. (6)
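A minimal numerical sketch of this split-step scheme might look as follows: the 4PA step is applied in the time domain (using the exact per-step solution of dI/dz = −β₄I⁴), the dispersion step in the spectral domain. For brevity the dispersion operator is reduced to pure group-velocity dispersion with an assumed coefficient, and all parameter values below are illustrative round numbers, not values fitted to the measurements:

```python
import numpy as np

# On-axis/radial split-step sketch: 4PA in the time domain, GVD-only
# dispersion in the spectral domain. Illustrative parameters throughout.
lam0, w0, tau, L = 1.75e-6, 39e-6, 135e-15, 1e-3
beta4 = 10e-4 * 1e-37            # 10e-4 cm^5 GW^-3 converted to m^5 W^-3
k2 = 500e-27                     # assumed GVD [s^2/m], illustrative only
I0 = 100e9 * 1e4                 # 100 GW/cm^2 on-axis peak intensity [W/m^2]

nt, nr, nz = 256, 24, 40
t = np.linspace(-4, 4, nt) * tau
omega = 2 * np.pi * np.fft.fftfreq(nt, t[1] - t[0])
rho = np.linspace(0, 2 * w0, nr)
dz = L / nz

# Gaussian input (crystal at the focus, w(z_c) = w0); |A|^2 = I in these units.
I_in = I0 * np.exp(-2 * rho[:, None]**2 / w0**2) \
          * np.exp(-4 * np.log(2) * (t[None, :] / tau)**2)
A = np.sqrt(I_in).astype(complex)

for _ in range(nz):
    I = np.abs(A)**2
    # Exact per-step solution of dI/dz = -beta4 * I^4:
    I_new = I / (1 + 3 * beta4 * I**3 * dz)**(1 / 3)
    A *= np.sqrt(I_new / np.maximum(I, 1e-300))
    # Dispersion step in the spectral domain (moving frame of the pulse):
    A = np.fft.ifft(np.fft.fft(A, axis=1) * np.exp(-0.5j * k2 * omega**2 * dz),
                    axis=1)

# Normalized energy transmission: radial (2*pi*rho*drho) and temporal integral.
w = rho[:, None]
T = float((np.abs(A)**2 * w).sum() / (I_in * w).sum())
print(f"normalized transmission T = {T:.3f}")
```

With these assumed numbers, the strongly depleted beam centre and the nearly unattenuated Gaussian wings combine into a moderate overall energy transmission, illustrating why z-scan minima stay well above the on-axis instantaneous attenuation.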
Results and discussion
The results of open-aperture z-scan measurements in [110]-cut GaP and ZnTe crystals, pumped at 1.75 µm wavelength, are shown in Fig. 2. The normalized transmission values refer to those inside the crystal and have been corrected for the Fresnel loss. Negative (positive) z_c values refer to crystal positions before (behind) the focus z_c = 0. In these measurements, the polarization of the pump pulse was set along one of the two crystallographically equivalent ⟨111⟩-type directions lying in the (110) crystal plane. These directions correspond to the maxima of the second-order nonlinear polarization and are typically used, for example, in THz generation by optical rectification [7,8]. The intensity values given in the legends refer to peak intensities I₀ at the beam centre inside the sample materials.
In the case of GaP, Fig. 2(a), the minimum of the measured relative transmission rapidly decreases with increasing pump intensity. At about 150 GWcm⁻², saturation of the nonlinear absorption sets in, indicating, besides 4PA, the contribution of other nonlinear effects at higher intensities. In this regime, one can also observe a forward shift of the transmission minima and a change of the shape of the z-scan curves with increasing pump intensity. At the highest intensities (≥ 300 GWcm⁻²), the z-scan curves become asymmetric, which is the signature of a closed-aperture effect due to nonlinear beam reshaping and can also explain the forward shift of the transmission minima. This was observable despite using an additional positive lens before the detector (see Fig. 1). In the case of ZnTe, Fig. 2(b), similar behaviour has been observed, except that the nonlinear change of the transmission is somewhat smaller at comparable intensities and the saturation effect becomes visible below 100 GWcm⁻².
In order to filter out the closed-aperture effect at high intensities, the measured z-scan curves have been symmetrized according to T_s(z_c) = [T(z_c) + T(−z_c)]/2 [31]. The minima of the measured and symmetrized z-scan curves have been fitted by varying the 4PA coefficient in the nonlinear pulse propagation model described in Section 3. In each case, a few measured data points with the lowest transmission values were considered for the least-squares fitting. The only free parameter in the fitting was the β₄ coefficient. Examples of measured and symmetrized GaP (Pi-Kem) z-scan curves, corresponding to different peak pump intensities, are shown in Fig. 3, together with the calculated curves using the best-fit 4PA coefficient value. The measured and the calculated z-scan curves are in good agreement in the intensity regime below about 100 GWcm⁻², where no saturation of the nonlinear transmission occurs (Fig. 3(a)). No significant closed-aperture effect is observed in this range, as illustrated by the nearly vanishing antisymmetric part of the z-scan curve, T_a(z_c) = [T(z_c) − T(−z_c)]/2. The agreement between measured and calculated z-scan data gradually deteriorates with the onset of saturation (Figs. 3(b) and 3(c)). In this range, increasing closed-aperture effects are also clearly shown by the increasing amplitudes of the antisymmetric parts. The obtained 4PA coefficients depend on the peak pump intensity, which is consistent with the presence of other nonlinear effects besides 4PA. We note that considering the obtained intensity dependence of the 4PA coefficient in calculating the individual z-scan curves could possibly improve the agreement between experimental and calculated z-scan curves. However, this would add significant numerical complexity to the model and would not change the 4PA values, because they are obtained from the measured transmission minima. For this reason, such a correction has not been carried out here.
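The symmetrization-and-fit procedure can be sketched on synthetic data as follows. The forward model here is a deliberately simplified version of Section 3 (4PA only, no dispersion, crystal treated as thin), and the "measured" curve with its closed-aperture-like antisymmetric distortion is invented for illustration:

```python
import numpy as np

# Sketch: symmetrize a synthetic z-scan curve and recover beta4 from its
# transmission minimum. Simplified forward model; invented "measurement".
w0, tau, L, zR = 39e-6, 135e-15, 1e-3, 2.73e-3

def model_T(beta4, I0):
    """Normalized energy transmission for on-axis peak intensity I0 [W/m^2]."""
    t = np.linspace(-4, 4, 201) * tau
    s = np.linspace(0, 8, 201)                   # s = 2*rho^2/w0^2
    I = I0 * np.exp(-s)[:, None] * np.exp(-4*np.log(2)*(t/tau)**2)[None, :]
    I_out = I / (1 + 3*beta4*I**3*L)**(1/3)      # exact solution of dI/dz=-b4*I^4
    return I_out.sum() / I.sum()                 # rho*drho*dt measure -> ds*dt

b4_true = 5e-41                                  # 5e-4 cm^5 GW^-3 in m^5 W^-3
I0 = 50e9 * 1e4                                  # 50 GW/cm^2 in W/m^2
zc = np.linspace(-10e-3, 10e-3, 41)
T = np.array([model_T(b4_true, I0 / (1 + (z/zR)**2)) for z in zc])
T_meas = T + 0.01 * np.tanh(zc / zR)             # fake closed-aperture asymmetry

T_s = 0.5 * (T_meas + T_meas[::-1])              # T_s(zc) = [T(zc)+T(-zc)]/2

# Recover beta4 from the symmetrized minimum by (geometric) bisection;
# the minimum decreases monotonically with beta4.
lo, hi = 1e-45, 1e-37
for _ in range(60):
    mid = np.sqrt(lo * hi)
    if model_T(mid, I0) > T_s.min():
        lo = mid
    else:
        hi = mid
b4_fit = np.sqrt(lo * hi)
print(f"true {b4_true:.2e}, fitted {b4_fit:.2e} m^5 W^-3")
```

Because the added distortion is exactly antisymmetric, the symmetrization removes it completely here; in the experiment it only suppresses the closed-aperture contribution approximately.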
The intensity-dependent 4PA coefficients, obtained from fitting the measured, symmetrized z-scan curves, are shown in Fig. 4 as function of the on-axis peak pump intensity. The numerical values are also shown in the inset table in Fig. 4. In the case of GaP, the 4PA coefficient increases monotonically from 2.6 × 10⁻⁴ cm⁵GW⁻³ at 37.5 GWcm⁻² pump intensity to its maximum of 56.5 × 10⁻⁴ cm⁵GW⁻³ at 183 GWcm⁻² intensity. The maximum is 64.9 × 10⁻⁴ cm⁵GW⁻³ in the case of the Moltech GaP crystal. The maximum is followed by a monotonic decrease at still higher intensities. There was no significant difference between the two GaP crystals. Similar behaviour was found for ZnTe, but the maximum value of the 4PA coefficient was about 6 times smaller than in GaP.

The intensity dependence of the determined 4PA coefficients and the occurrence of the closed-aperture effect at higher intensities, hinting at strong beam reshaping by a nonlinear phase shift, are signatures of additional nonlinear effects not considered in our model. These effects may contribute both to the nonlinear absorption and to the nonlinear refractive index. Nonlinear absorption can be caused by a number of effects, such as five-photon and higher-order multiphoton absorption, the absorption of free carriers generated by multiphoton absorption and tunnel ionization from the valence to the conduction band [10,32], or non-phase-matched second-harmonic generation and eventually subsequent 2PA. Contributions to the nonlinear phase can arise from the nonlinear refractive index n₂ and from free carriers generated by the effects mentioned above [33]. The nonlinear phase can also change the beam size, besides changing the temporal shape, which can lead to a change of the free-space propagation of the beam, but also to a change of the intensity and the nonlinear absorption within the sample.
At higher intensities, free-carrier generation by 4PA can contribute to the nonlinear absorption. Figure 5 shows the estimation of the free-carrier absorption (FCA) coefficient α_fc(λ) = 4π · Im[ε_fc(λ)]/λ as function of the intensity. Here, λ is the wavelength and ε_fc(λ) is the free-carrier contribution to the complex dielectric function [23]. The calculation, which may overestimate FCA, is based on the 4PA coefficients in Fig. 4. According to Fig. 5, α_fc · L > 1 holds for L = 1 mm above about 120 GWcm⁻² intensity in GaP. This suggests that FCA can contribute to the increase of the 4PA coefficient of GaP at intermediate intensities between about 120 GWcm⁻² and 200 GWcm⁻². Possible saturation of FCA, due to the scattering of the free carriers into valleys with low mobility and/or anharmonicity of the valleys causing low mobility of highly excited electrons [34], can contribute to the decrease of the 4PA coefficient observed at still higher intensities. In ZnTe, α_fc · L > 1 holds for L = 1 mm above about 170 GWcm⁻² intensity, and the contribution of FCA and its saturation to the increase and subsequent decrease (above about 70 GWcm⁻²) of the 4PA coefficient may be less significant. We note that the saturation of FCA in a ZnTe THz source has been reported by Ku et al. [35]. Non-phase-matched second-harmonic generation, followed by 2PA, is estimated to be less likely to contribute. The highest intensity used in the case of GaP was 471 GWcm⁻², which gives a Keldysh parameter of about 3.2, indicating the absence of strong tunnelling effects. Furthermore, a significant redistribution of the electron population due to the intense excitation can even lead to additional changes of the linear and nonlinear optical parameters, as well as of other material parameters such as the bandgap. Further experimental and theoretical investigations are needed to clarify possible contributions of the mentioned effects.
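A Drude-type estimate in the spirit of the Fig. 5 calculation can be sketched as follows. The effective mass, scattering rate, background index, and the pulse-averaged carrier-density formula below are our own assumptions for illustration, not parameters taken from the paper:

```python
import cmath, math

# Rough Drude estimate of pump-wavelength FCA from 4PA-generated carriers.
# All material parameters below are assumed, GaP-like values.
hbar, e, eps0 = 1.055e-34, 1.602e-19, 8.854e-12
me, c = 9.109e-31, 2.998e8

lam = 1.75e-6
omega = 2 * math.pi * c / lam
tau = 135e-15                    # FWHM pulse duration [s]
L = 1e-3                         # crystal length [m]
m_eff = 0.35 * me                # assumed effective mass
gamma = 1 / 100e-15              # assumed Drude scattering rate [1/s]
n_bg = 3.05                      # assumed background index at 1.75 um

def carrier_density(beta4_SI, I0):
    """N ~ time integral of beta4*I^4/(4*hbar*omega) for a Gaussian pulse."""
    t_eff = tau * math.sqrt(math.pi / (16 * math.log(2)))
    return beta4_SI * I0**4 * t_eff / (4 * hbar * omega)

def alpha_fc(N):
    """FCA via the Drude free-carrier term in the dielectric function."""
    wp2 = N * e**2 / (eps0 * m_eff)
    eps = n_bg**2 - wp2 / (omega**2 + 1j * omega * gamma)
    return 4 * math.pi * cmath.sqrt(eps).imag / lam        # [1/m]

beta4 = 20e-4 * 1e-37            # 20e-4 cm^5 GW^-3 in m^5 W^-3
I0 = 150e9 * 1e4                 # 150 GW/cm^2 in W/m^2
N = carrier_density(beta4, I0)
a = alpha_fc(N)
print(f"N = {N:.2e} m^-3, alpha_fc*L = {a * L:.1f}")
```

With these assumed numbers, the estimate yields α_fc·L well above unity at 150 GWcm⁻², consistent with the statement that FCA becomes relevant in GaP above roughly 120 GWcm⁻².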
Such studies, which are beyond the scope of the present work, may include the development of more sophisticated theoretical models and optical pump-THz probe measurements of transient carrier dynamics.
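As a rough plausibility check of the quoted Keldysh parameter (this is not the authors' calculation), one common convention γ = ω√(2m*E_g)/(eE) can be evaluated. The effective mass (0.4 m₀), the band gap of GaP (2.26 eV) and the refractive index (n ≈ 3), used to convert intensity to the peak field inside the crystal, are assumed values for illustration only.

```python
import math

# Order-of-magnitude sketch of the Keldysh parameter for GaP at
# 471 GW/cm^2 and 1.75 um. Convention and material values are
# assumptions, not taken from the paper.
c, eps0, e, m0 = 2.998e8, 8.854e-12, 1.602e-19, 9.109e-31  # SI constants

def keldysh_parameter(intensity_W_m2, wavelength_m, band_gap_J, m_eff, n):
    omega = 2 * math.pi * c / wavelength_m
    # Peak electric field inside a medium of refractive index n
    E_field = math.sqrt(2 * intensity_W_m2 / (c * eps0 * n))
    return omega * math.sqrt(2 * m_eff * band_gap_J) / (e * E_field)

# 471 GW/cm^2 = 4.71e15 W/m^2; E_g(GaP) = 2.26 eV; m* = 0.4 m0 (assumed)
gamma_K = keldysh_parameter(4.71e15, 1.75e-6, 2.26 * e, 0.4 * m0, 3.0)
```

With these assumptions the result is of order 3, i.e. γ > 1, consistent with the multiphoton (rather than tunnelling) regime stated in the text.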
4PA can be a major efficiency-limiting effect in semiconductor THz sources pumped at wavelengths longer than the cut-off for 3PA. In our previous work, we estimated the 4PA coefficient in ZnTe at (3 ± 1) × 10 −5 cm 5 GW −3 by fitting simulation results to experimental data on THz generation efficiency [9]. The predictions of simulations using this value of the 4PA coefficient agreed well with the observed saturation of the THz generation efficiency and the maximum efficiency found at 14 GWcm −2 pump intensity. However, it is about one order of magnitude smaller than the smallest 4PA coefficient measured here by z-scan at about two times higher intensity (Fig. 4).
In a further z-scan measurement series, the polarization direction of the linearly polarized pump pulses was varied with respect to the dielectric Z[001]-axis of the [110]-oriented GaP crystal. The pump intensity was 65 GWcm −2 . The results, shown in Fig. 6, clearly reveal the anisotropy of the 4PA coefficient, which varies between 3.4 × 10 −4 cm 5 GW −3 and 14.5 × 10 −4 cm 5 GW −3 . The error in determining the 4PA coefficient can be estimated at about 6.5%, due to laser intensity fluctuations. The two largest 4PA coefficient values occur at 55 • and 135 • polarization angles, measured from the Z-axis. The extraordinarily large value at 55 • is most probably due to the effect of crystal surface errors or dust. No polarization-dependent z-scan measurements have been carried out with ZnTe here, but a similar behaviour can be expected for symmetry reasons. We note that the second-order nonlinear susceptibility, responsible for second-harmonic generation (SHG) or optical rectification, exhibits a qualitatively similar anisotropy (see the red empty symbols and the red dashed curve in Fig. 6), with maxima close to the 55 • and 135 • angles where the 4PA maxima were found. We note that a qualitatively similar anisotropy has been reported for 2PA [14] and 3PA [19] in [110]-oriented GaAs, which has the same symmetry group as GaP and ZnTe. Good agreement was found between 2PA anisotropy measurements of GaAs with ps [14] and ns [13] pulses. In the latter case, 2PA-induced free-carrier absorption was large, and the 2PA anisotropy was found to reflect the anisotropy of the band structure. An expression for the pump polarization dependence of the effective third-order nonlinear susceptibility, relevant for 2PA, can easily be derived from the tensor symmetry properties [14,36].
The description of the 3PA anisotropy requires knowledge of the symmetry properties of the fifth-order nonlinear susceptibility tensor, which is sparse in the literature [37,38]. The 4PA anisotropy requires dealing with the seventh-order nonlinear susceptibility tensor (of rank 8), which we have not found in the literature. Due to the uncertainties in the exact physical origin and the mathematical complexity, a theoretical discussion of the 4PA anisotropy is beyond the scope of this work.
For comparison, we note that an estimate of 10 −7 cm 5 GW −3 has been given for the 4PA coefficient of LiNbO 3 based on THz generation results with 1.03 µm pump wavelength [3]. This value is less reliable, as it depends on another fitting parameter [3]. The values obtained in the present work for GaP and ZnTe are three to four orders of magnitude larger. 4PA values on the order of 10 −4 cm 5 GW −3 have been reported for GeSbS chalcogenide glass [22], which are of a similar order of magnitude as our values for ZnTe. Our values for GaP are of similar order or one order of magnitude larger, depending on the intensity. The relatively large 4PA coefficients of the investigated semiconductors underline the importance of knowing them, because 4PA can be significant even at moderate optical intensities.
It is also noted that we have carried out closed-aperture z-scan measurements at a pump intensity of 57.2 GWcm −2 to estimate the nonlinear refractive index at the 1.75 µm pump wavelength. A value of n 2 = 1.9 × 10 −18 m 2 W −1 was found for GaP and n 2 = 1.6 × 10 −18 m 2 W −1 for ZnTe. Values at some other wavelengths can be found in Liu et al. [28] and He et al. [39].
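To see why closed-aperture effects can be pronounced at such intensities, the standard estimate of the peak on-axis nonlinear phase shift, ΔΦ = (2π/λ)·n₂·I·L, can be evaluated with the n₂ and intensity reported above and an assumed 1 mm sample length. This is an illustrative estimate, not a value from the paper.

```python
import math

# Rough sketch: peak on-axis nonlinear phase shift dPhi = (2*pi/lambda)*n2*I*L.
# The 1 mm length is an assumption; n2 and intensity follow the values above.
def nonlinear_phase(n2_m2_per_W, intensity_W_m2, length_m, wavelength_m):
    return 2 * math.pi / wavelength_m * n2_m2_per_W * intensity_W_m2 * length_m

# GaP: n2 = 1.9e-18 m^2/W, I = 57.2 GW/cm^2 = 5.72e14 W/m^2, L = 1 mm
dphi = nonlinear_phase(1.9e-18, 5.72e14, 1e-3, 1.75e-6)
```

Under these assumptions the phase shift is already several radians, consistent with the strong beam reshaping by the nonlinear phase discussed above.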
Conclusion
Intensity-dependent four-photon absorption (4PA) coefficients of the semiconductors GaP and ZnTe have been measured by the z-scan method using pump pulses of 1.75 µm wavelength and 135 fs duration. The choice of a pump wavelength longer than the cut-off for 3PA ensured that no linear interband absorption, nor two- or three-photon absorption, had to be taken into account. The intensity-dependent 4PA coefficients, obtained over a very broad range of pump intensities from about 30 GWcm −2 up to about 500 GWcm −2 , vary from 2.6 × 10 −4 to 65 × 10 −4 cm 5 GW −3 in GaP, and from 3.5 × 10 −4 to 9.1 × 10 −4 cm 5 GW −3 in ZnTe. The anisotropy of 4PA has been demonstrated in GaP.
The knowledge of 4PA coefficients is important for the design of semiconductor nonlinear optical devices. Prominent examples are the extremely efficient monolithic semiconductor THz sources demonstrated recently [7-9,23], which are potential next-generation drivers for high-field applications such as THz nonlinear spectroscopy and particle acceleration.
Disclosures
The authors declare no conflicts of interest. | 2020-02-20T09:15:05.785Z | 2020-04-13T00:00:00.000 | {
"year": 2020,
"sha1": "3dca91b4875c224174ad30836ada909545314e46",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.382388",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4daa3c95c096c852d24b9091eb96258d675a28eb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
222247449 | pes2o/s2orc | v3-fos-license | The Stability of Supply and Rice Price in Sukoharjo Regency
The economic conditions of rice, whether in terms of supply, demand, or price, continue to fluctuate as circumstances change. Therefore, this commodity needs to be examined with regard to its supply, demand and price aspects. This study aims to analyze the supply and price stability of rice. The study used secondary data and was conducted in Tawangsari and Mojolaban Districts of Sukoharjo Regency. Data were analyzed using the coefficient of variation. The results showed that rice supply exceeded consumption and was stable. Stable prices and supply for paddy and rice were found in Tawangsari and Mojolaban Districts, and in Sukoharjo Regency as a whole.
INTRODUCTION
Background
Food has become a serious concern of the government and the public since early 2013. This is closely related to Indonesia's population of more than 250 million people, who require a huge production and consumption of food commodities. Food availability alone is not enough to realize food security; food access and food absorption are also important factors. If these three indicators, i.e. food availability, food access, and food absorption, cannot be fulfilled, food insecurity, a condition in which people are unable to obtain sufficient food, will occur. If there is food insecurity, then the economic, political, and social stability of a country will be threatened. Food insecurity is one of the causes of inefficient land use due to the limited land tenure of farmers, which in turn results in low productivity.
Development of agricultural commodities requires an understanding of market prospects, resource capabilities and technological potential. An imbalance between supply and demand will affect prices and profitability, and therefore requires intervention policies and planning. Projections of supply and demand are very important for production planning, as they indicate what level of supply is needed to maintain price stability. Stabilization of food supply and prices is a problem faced by almost every region in Indonesia. Factors that affect the stability of food supply and prices include the amount of production, population growth, demand, climate change, and trade barriers.
Rice is a strategic commodity that can affect economic, social and even political stability. Rice is still one of the key commodities influencing the stability of general prices: an increase in rice prices can trigger an increase in the prices of other goods (Sari, 2010). This can be seen from the significant role of rice in people's lives: (i) it is a staple food of most of Indonesia's population; (ii) of household expenses, 63% is spent on food and 17% is allocated to rice consumption; (iii) it contributes to calorie and protein requirements; and (iv) the rice industry involves a total of 18 million farmers, most of whom are small farmers, as well as workers involved in the supply of production inputs and factors, processing, and marketing (Saifullah, 2005; Widadi and Sutanto, 2012). Thus, it is not surprising that the rice situation correlates strongly with the development of economic and non-economic situations. History has proven that the instability of food supplies, especially rice, triggered riots and criminal acts at the beginning of the reformation era. This indicates the important role of the government in maintaining rice availability throughout the year, as well as its even distribution and stable prices.
The economic conditions of rice, whether related to supply, demand, or price, continue to fluctuate. Thus, many quantitative economic relations can be found among the economic models of rice, whether concerning supply, demand, or the price stability of both paddy and rice.
One characteristic of agriculture is that production is scattered across several areas. The price per unit volume of the same commodity differs from one area to another. According to Chen and Shagaian (2016), variations in the price of a particular commodity between two regions are caused by: 1) differences in commodity production capability and transfer costs between regions, 2) differences in farming operational costs, 3) differences in local demand and supply conditions, and 4) market imperfections.
Agricultural commodities will move from surplus areas with relatively low prices to deficit areas with relatively high prices. In fact, transfer costs, which include terminal and transportation costs, are required for trade between two areas. The flow of agricultural commodities from surplus area Y to deficit area X will stop when the transfer costs equal the price difference between the deficit area and the surplus area, at which point supply stability and price stability will occur.
Based on this background, the problem is formulated as: what are the conditions of paddy and rice supply in the study area? Accordingly, this study aims to analyze the stability of paddy and rice supply and prices in the study area.
METHODS
The analysis of the stability of paddy and rice supply and prices in Sukoharjo Regency, especially in Tawangsari and Mojolaban Districts, was conducted using an analytical descriptive method to obtain a systematic, factual, and accurate description of the study area concerning the facts, characteristics and relationships between the phenomena studied (Nasir, 1988). Sukoharjo Regency was chosen as the study location because it has the highest rice productivity in Central Java, 7.466 tons/ha, while the average rice productivity in Central Java is 5.74 tons/ha. Tawangsari and Mojolaban Districts were chosen based on activities related to efforts to increase production and land productivity, namely the existence of a land consolidation program. Secondary data were used, comprising paddy production, rice supply, population, rice consumption, paddy prices and rice prices for 2016, 2017 and 2018. The paddy varieties were IR64 and C4. The analytical methods used were descriptive analysis, observing rice supply and demand to assess supply stability, and coefficient of variation analysis. Fluctuations of paddy and rice prices/supply are measured by the coefficient of variation (CV) (Setiawan, 2012; Proborini, 2018), where supply is said to be stable if the price variation (coefficient of variation) of rice on the market is less than 9% (The Indonesian Ministry of Trade, 2019).
The coefficient of variation (CV) is the ratio between the standard deviation and the average value, expressed as a percentage, which is useful for assessing the spread of data around the calculated average (Walpole, 2000): CV = (s / x̄) × 100%, where s is the standard deviation and x̄ is the mean.
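As an illustration, the CV and the 9% stability criterion described above can be computed as follows. The price series used here is hypothetical, for demonstration only.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def is_stable(values, threshold=9.0):
    """Stability criterion used in the paper: CV below 9%."""
    return coefficient_of_variation(values) < threshold

# Hypothetical monthly rice prices (IDR/kg), illustrative only.
prices = [10500, 10500, 10700, 10400, 10600, 10500]
cv = coefficient_of_variation(prices)
stable = is_stable(prices)
```

A nearly flat series like this yields a CV well below 9% (stable), whereas a strongly fluctuating series would exceed the threshold.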
General Description of Study Location
The area of Sukoharjo Regency is 46,666 ha, or 1.43% of the total area of Central Java. The land in Sukoharjo Regency is allocated to rice fields of 20,617 ha (44.18%) and non-rice fields of 26,049 ha (55.82%). Rice fields are classified into technically irrigated (14,655 ha, 71.08%), semi-technically irrigated (2,161 ha, 10.47%), simply irrigated (1,967 ha, 9.54%), and rain-fed (1,834 ha, 8.89%). The population of Sukoharjo Regency in 2017 was 871,397 people, consisting of 431,686 males (49.54%) and 439,711 females (50.46%). An overview of the regional potential is presented in Table 1.
The Stability of Supply ….. (Ekowati, et.al)
Central Java Province is one of the main food producers for the national stock, leading it to promote paddy productivity. In 2018, the productivity of wetland paddy was about 6.099 tons/ha, with a harvested area of 1.80 million ha and a wetland paddy production of 11.00 million tons. Sukoharjo Regency is one of the regencies that supports food production in Central Java, so the productivity of food crops, especially paddy, is continually being increased. In 2018, paddy productivity reached 7.208 tons/ha, with a production of 391,675 tons and a harvested area of 54,339 ha. Paddy productivity in Sukoharjo is the highest among the regencies/municipalities, while the lowest productivity was recorded in Pekalongan Regency at 4.312 tons/ha. The high paddy productivity in Sukoharjo shows that paddy farming there is performing well.
The research approach taken concerns paddy and rice stability, so a conversion from paddy to rice is needed. The Central Statistics Agency states that the conversion rate of milled unhusked rice (GKG) to rice now used is 64.02%. This figure is up from the previous figure of 62%, which was often used as a reference by farmers and rice mills. The change in the figure was caused by an improvement in the paddy production sector: "The processing technique has improved."
Supply Stability
The stability of prices/supply reflects fluctuations (increases or decreases) in prices/supply over a certain period of time. The smaller the price/supply fluctuations during a certain period, the more stable the price/supply conditions are said to be, and vice versa. Fluctuations of food prices/supply are measured by the coefficient of variation (CV).
Demand projections are very important for production planning, as they indicate how large the supply should be to keep prices stable. The total demand for a food commodity is useful as an input in determining production targets, showing how much is needed, as well as giving an overview of future price developments. Meanwhile, the projected supply of a food commodity describes the level of agricultural production that can be achieved under the assumptions used. By comparing demand and supply, the balance condition of the commodity concerned can be known, whether in a state of surplus or deficit. In the short and medium term this condition relates to the current distribution of food commodities, which has an impact on supply and price stability.
The main actors in agricultural development are farmers, who cultivate particular agricultural commodities. Farmers hold an important position as economic actors at the local and regional, and even the national, level. An important condition for farms to be sustainable is price certainty. For a farming business to provide a decent and sustainable income, the commodities cultivated should be ones with good market prospects.
a. Stability of Rice Supply
The distribution of rice availability and the demand for consumption need to be known, so that regions with rice production potential can be further developed, while areas without rice production potential can develop their appropriate food potential. The aim is to increase rice availability.
AGRARIS: Journal of Agribusiness and Rural Development Research
The balance between rice supply and consumption demand is strongly influenced by the population. If rice availability is greater than consumption, the area is said to be a rice surplus area; conversely, the area is said to be a rice deficit area if rice availability is smaller than consumption. This is consistent with Nuryanti (2005): the fluctuating dynamics of supply are highly vulnerable because as the population increases, consumption also increases.
Food security has several aspects. First, availability: the amount of food available must meet the needs of all people, whether sourced from domestic production or imports. Second, accessibility, both physical and economic: physical affordability requires that food is easily accessible to individuals or households, whereas economic affordability means the ability to obtain or buy food, and relates to people's purchasing power. Third, stability: the ability to minimize the possibility of food consumption falling below the standard level of need in difficult seasons (famine or natural disasters) (Fuad, 2009).
One aspect of food, namely food availability, correlates with rice field area (Tambunan, 2008), harvested area, planted area (Suwarno, 2010), rice productivity (Mulyo and Sugiarto, 2014), and rice production. Increases in rice field area, harvested area, planted area, rice productivity, and rice production can increase rice availability. The net production of rice is assumed to represent rice availability. In this case, the operational limit used is rice availability from the perspective of domestic production generated to meet the demand of community consumption, without considering the rice produced from the study area. The rice consumption demand can be calculated through the following formula: Rice Consumption Demanded = Population × 113.48 kg/capita/year. The figure of 113.48 kg/capita/year is the standard per-capita rice consumption demand determined by the Central Bureau of Statistics; it means that each person needs 113.48 kg of rice per year. This study assumes that every person has the same rice consumption needs, and that all the rice available in an area is entirely used to meet rice consumption needs in that area. If the stock of rice available is greater than the rice consumption needs, the area is said to be a rice surplus area, whereas if the stock of rice available is smaller than the rice consumption needs, the area is said to be a rice deficit area.
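The bookkeeping behind the surplus/deficit classification can be sketched as follows. The conversion rate (65.4%, one of the rates quoted in the text) and the per-capita norm come from the paper; with the reported Tawangsari paddy production this reproduces the rice supply figure of 21,003.21 tons. Note that the population figure used here (48,021) differs slightly from the one underlying Table 2, so the consumption figure is illustrative.

```python
# Sketch of the paper's surplus/deficit bookkeeping (tons and persons).
PADDY_TO_RICE = 0.654     # GKG-to-rice conversion rate (65.4% variant)
PER_CAPITA_KG = 113.48    # rice demand, kg/capita/year (BPS norm)

def rice_balance(paddy_production_tons, population):
    supply = paddy_production_tons * PADDY_TO_RICE
    demand = population * PER_CAPITA_KG / 1000.0  # kg -> tons
    return supply, demand, supply - demand

# Tawangsari: 32,115 tons of paddy, 48,021 people (figures from the text)
supply, demand, surplus = rice_balance(32_115, 48_021)
```

A positive surplus classifies the area as a rice surplus area, which is the case for all three study areas.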
It was found that paddy production in Tawangsari, Mojolaban and Sukoharjo Regency was 32,115 tons, 46,795 tons and 391,675 tons respectively, with populations of 48,021 people in Tawangsari, 93,841 people in Mojolaban, and 871,397 people in Sukoharjo Regency. The amount of rice supply is obtained from the conversion of paddy to rice at 62.74%. The values of production, consumption, and supply stability are summarized in Table 2.
The results of rice production in the Districts of Tawangsari and Mojolaban and in Sukoharjo Regency were converted to rice using a reference rate of 65.4%. The conversion results illustrate the availability of rice, which is the source of public consumption. Rice consumption is obtained by multiplying the population by the reference per-capita consumption of 113.48 kg/capita/year. In terms of rice availability and need, the Districts of Tawangsari and Mojolaban and Sukoharjo Regency are in a surplus condition. The existence of a rice surplus shows that the consumption needs of the population have been met. However, the surplus condition needs to be studied further to find out the stability of supply. Meanwhile, the projected supply of a food commodity describes the level of agricultural production that can be achieved.

Table 2. Paddy production, rice supply, consumption, surplus and supply stability (CV), 2016-2018
Area        Year  Paddy production   Rice supply   Population   Rice consumption   Surplus       CV
                  (thousand tons)    (tons)        (thousand)   (tons)             (tons)
Tawangsari  2016  32.12              21,003.21     47.94        5,439.78           15,563.43     6.35
            2017  35.17              22,999.87     47.99        5,446.13           17,553.74
            2018  32.39              21,181.10     47.95        5,441.14           15,739.96
Mojolaban   2016  46.79              30,603.93     93.845       10,641.57          19,962.36     7.88
            2017  45.64              29,846.60     95.06        10,779.69          19,066.91
            2018  40.53              26,503.35     96.27        10,916.79          15,586.56
Sukoharjo   2016  391.68             256,155.45    871.39       98,886.13          157,269.32    7.78
            2017  387.98             253,738.92    878.37       99,677.88          154,061.04
            2018
Source: Central Java in numbers of 2018, Tawangsari in numbers of 2019, Mojolaban in numbers of 2019
Changes in the amount and supply of rice are presented in Illustrations 1 and 2.
The amounts of rice supply and consumption in Tawangsari District and Mojolaban District show that both regions have been able to meet consumption, even though in the third year the supply of rice decreased. The amount of rice supply is higher than rice consumption, resulting in a rice supply surplus. The occurrence of a rice surplus is one of the factors of stability. The amount of rice supply is stable if the coefficient of variation is smaller than 9. The results of the CV analysis in the study area showed values smaller than 9, namely 6.35, 7.88 and 7.87 respectively. This shows that the availability of rice in the study area is stable. The surplus and stability of a region's rice supply illustrate that rice is a potential commodity in the area. Furthermore, given the surplus and stability, distribution of the commodity to other regions is possible.
The amount of rice produced in the study area shows its ability to meet the demand or consumption of the population. This can be seen from the amounts of production or supply and of consumption, where supply is greater than demand, so it can be said that in the three areas the rice supply is stable. Therefore, Sukoharjo Regency and the two study locations are areas with supply stability that can meet their regional consumption, and distribution outside the region through the supply chain is also possible.
Based on the results of the one-sample t-test analysis, rice in the study area is in a stable condition. This is indicated by the significance value of each CV, namely 0.036, 0.041 and 0.046 respectively, each less than 0.05. This shows that paddy and rice commodities are potential commodities for Tawangsari and Mojolaban Districts and Sukoharjo Regency.
The stability of the rice supply gives the region the potential to distribute rice to other regions, because availability is greater than the population's consumption and the area has food security, especially for rice.
b. Price Stability
The annual production pattern of paddy and rice in the production centers shows that paddy and rice production during the main harvest is always abundant, while the monthly demand for paddy and rice is relatively stable. This causes the prices of paddy and rice to fall. Conversely, outside the harvest season, paddy and rice production is lower than the need for paddy and rice. As a result, prices increase and become less affordable, and this occurs precisely when farmers no longer hold inventories. This shows that the prices of paddy and rice fluctuate with the season.
Fluctuations in commodity prices basically occur due to an imbalance between the quantity supplied and the quantity demanded by consumers. If there is an oversupply, commodity prices will go down; conversely, commodity prices will rise if there is a lack of supply. For agricultural commodities that depend on the season, price fluctuations between the harvest season and the non-harvest season will occur.
Prices play an important role in a market economy. Price is one of the factors that determine every decision of producers and consumers in allocating limited resources so as to reach a Pareto-optimal or equilibrium condition (Brummer et al., 2009). According to Nicholson (2004), market prices have two main functions: (i) as information about the quantity of a commodity that producers should offer to obtain maximum profit; and (ii) as a determinant of the level of demand for consumers who want maximum satisfaction.
There are at least two reasons why an analysis of rice prices is important: (1) to estimate certain economic coefficients (parameters), such as the price elasticity of demand for rice, and (2) to forecast future prices and the factors that influence the level of rice prices.
Price fluctuations are actually normal and are needed to keep the market functioning, i.e., to create a competitive market. Changes in prices become a problem if prices soar very high and unpredictably, which in turn creates uncertainty that can increase risks for producers, traders, consumers, and of course also the government.
Supply stability is reflected in prices, so the stability of paddy and rice prices can be examined. The paddy data were approached with prices at each harvest season, while rice prices were approached monthly. Based on the analysis results, there is variation in paddy prices in Tawangsari and Mojolaban Districts and Sukoharjo Regency for the C4 and IR64 varieties. It can be said that there is no difference in the paddy price of the C4 variety in the research location for the 2017-2018 period; however, there was a flat price reduction of IDR 42.8/kg in Mojolaban District. This is because the supply or harvest during planting season 1 of 2018 in Mojolaban was greater than in Tawangsari, so that the price declined. This is in accordance with Brummer et al. (2013), who concluded that price fluctuations are basically strongly influenced by supply and demand in the market. For agricultural commodities, input markets and fossil fuels greatly affect price fluctuations. Stocks can also affect prices; low stocks will cause prices to increase in the market. Price fluctuations in the study area did not cause price volatility. This is indicated by CV values lower than 9, which means that the price of paddy is in a stable condition. Price variations also occur in rice prices. Variations in rice prices in the study area were approached with the C4 and IR64 varieties in 2017 and 2018, as presented in Table 4. In 2017, the average price of the C4 variety in Tawangsari District was IDR 10,500/kg with a coefficient of variation of 0. This happened because throughout 2017 the price of the C4 variety was stagnant, so it could be said to be very stable. Likewise, the price of the IR64 variety in 2018 was also stagnant. This differs from the price variations of the C4 and IR64 varieties in Mojolaban District: the average price of rice in Mojolaban is lower than in Tawangsari, but there are variations in price development for the two rice varieties.
The coefficient of variation for the two rice varieties in Mojolaban is greater than in Tawangsari, with the largest CV, 7.0, for IR64. On careful examination, the CV values of both the C4 and IR64 varieties for the two Districts and Sukoharjo Regency can be categorized as minor, because the values are less than 9. This is consistent with Proborini (2018), who states that the minimum standard of the CV value set for price stability is less than 9%. Based on inferential and descriptive analysis, it can be stated that the stability of rice prices in the study area is maintained. | 2020-08-27T09:08:55.844Z | 2020-08-12T00:00:00.000 | {
"year": 2020,
"sha1": "9cf80970839db67c8e6857e229dae9a9031748ac",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.18196/agr.6190",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2f14b99aa3926fd2213cb34f737ef3b84f5dbbbc",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
159414558 | pes2o/s2orc | v3-fos-license | Model forests in Russia as landscape approach: Demonstration projects or initiatives for learning towards sustainable forest management?
Implementing sustainable forest management (SFM) policy on the ground is not straightforward, and depends on the social-ecological context. To meet the need for place-based stakeholder collaboration towards regionally adapted knowledge production and learning in support of SFM, an integrated landscape approach can assist. Hosting most of the circumboreal forest, Russia is a key global player. To transition boreal forestry in the Russian Federation from wood mining towards SFM after the collapse of the USSR, several initiatives were started. Our aim is to review the outcomes and consequences of initiatives in Russia that employed the six principles of the international Model Forest concept. As candidates for the study we identified 12 local initiatives using this term, all in Russia's boreal forest biome. However, while seven were demonstration forests focused on improving wood production practices, five were long-term stakeholder-driven development processes aimed at SFM and were approved members of the International Model Forest Network. The latter five were selected for a detailed study to understand their temporal dynamics in the circumboreal Model Forest context, and the extent to which they complied with the six principles of the Model Forest concept as an example of a landscape approach. The sources, amounts and durations of these initiatives' funding affected both their outcomes and their consequences on the ground. All five had developed a partnership that formally shared a commitment to SFM. However, not all areas were large enough to represent all dimensions of SFM. Not all Model Forests developed a representative, participative, transparent, and accountable governance structure, which affected the programs of their activities. Finally, knowledge-sharing, capacity-building and networking at multiple levels were variable.
In spite of Russia hosting most of the circumboreal forest, the Model Forest concept was not sustained in Russia due to the ending of foreign project funding, limited continuity of committed local capacity, and poor support from national-level decision makers. The exception is the Komi Model Forest's transition to a successful consulting company focusing on SFM. To develop regionally adapted approaches to implement SFM policy, we stress the importance of sharing experiences from Model Forests, as well as from other landscape approach concepts, among countries and regions with different landscape histories and governance arrangements. To enhance this, we propose a general analytic framework for learning through evaluation about place-based long-term initiatives that integrate evidence-based knowledge about states and trends of sustainability and cross-sector multi-level governance.
https://doi.org/10.1016/j.forpol.2019.01.005 Received 1 July 2017; Received in revised form 5 January 2019; Accepted 8 January 2019
⁎ Corresponding author. E-mail addresses: per.angelstam@slu.se (P. Angelstam), marine.elbakidze@slu.se (M. Elbakidze), robert@manrax.com (R. Axelsson), bas.pedroli@wur.nl (B. Pedroli), evgeny.zabubenin@ikea.com (E. Zabubenin).
1 Post address: Schelinska gatan 4, SE-732 32 Arboga, Sweden. 2 IKEA of Sweden AB, PO Box 702, SE-343 81 Älmhult, Sweden.
Forest Policy and Economics 101 (2019) 96–110. Available online 08 February 2019. 1389-9341/ © 2019 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/BY/4.0/).
Introduction
Sustainable forest management (SFM) is an explicit, internationally agreed grand policy vision for the sustainable use and governance of forest landscapes' goods, services and values (Anon, 1995; Montréal Process, 2009; Forest Europe, 2011). To implement this vision, governance, spatial planning and forest management to satisfy economic, ecological and socio-cultural objectives need to be integrated at multiple levels of societal steering and spatial scales. Given that different SFM objectives are often rival (e.g., Mönkkönen et al., 2014; Triviño et al., 2015; Angelstam, 2018a; Eyvindson et al., 2018; Naumov et al., 2018), policy implementation requires collaborative learning among stakeholders based on active adaptive management (sensu Shea et al., 2002). This involves both the societal process of sustainable development (Baker, 2006), such as landscape or regional stewardship (Giessen, 2010; Angelstam and Elbakidze, 2017) as multi-level governance by stakeholders from public, private and civil sectors, and the consequences of this process for the level of economic, ecological and socio-cultural sustainability. Governance, planning and management towards sustainability thus require continuous monitoring of the states and trends of the economic, ecological and socio-cultural dimensions of forest landscapes as social-ecological systems. Place-based integration of knowledge production and learning towards sustainable landscapes has come to be termed the landscape approach (World Forestry Congress, 2009; Axelsson et al., 2011; Sayer et al., 2013; Freeman et al., 2015; Sabogal et al., 2015).
However, the context for policy implementation varies considerably among states and regions due to biophysical, historical, social and political conditions affecting forest landscapes as social-ecological systems (Pierce Colfer and Capistrano, 2005; Lehtinen, 2006; Angelstam et al., 2005, 2011a, 2017a, 2017b, 2018a, 2018b; Blicharska et al., 2012; Henry and Tysiachniouk, 2018; Naumov et al., 2016).
Being the world's largest country and hosting the majority of the circumboreal forest biome, the Russian Federation harbours a large part of the world's wood resources (Niskanen et al., 2003) and carbon stocks (Krankina et al., 1996; Goodale et al., 2002), and hosts many of the last intact boreal forest landscapes and biodiversity hotspots (Potapov et al., 2008). The Russian Federation is thus a key global player for the implementation of SFM policy in the circumboreal biome. Russia is also interested in forestry intensification (Nordberg et al., 2013; Naumov et al., 2016; Angelstam et al., 2017a), and is challenged with retaining the social and cultural values of forest landscapes associated with traditional land use practices supporting sustainability and governance (Lehtinen, 2006; Nordberg et al., 2013; Stryamets et al., 2015).
Since the end of the Soviet Union in 1991, and still under a top-down governance system (Malysheva, 2005), forestry and the forest sector in the Russian Federation are in transition from a planned to a market economy (Nilsson and Shvidenko, 1998; Pappila, 1999). Forest legislation has been revised (e.g., Krott et al., 2000; Malysheva, 2005; Tornianen et al., 2006), and forestry has been exposed to international pressure to develop a market economy (Karvinen et al., 2006), and to the challenges of biodiversity conservation (Aksenov et al., 2002), increased use of bioenergy (Gerasimov and Karjalainen, 2009) and rural development (Blagovidov et al., 2006; Lehtinen, 2006).
The Russian Federation is committed to SFM through both the Montréal process since 1995 and, more recently, the pan-European forest policy process (Forest Europe, 2011). However, implementation of SFM policy in the Russian Federation, as in other former socialist countries, faces numerous specific challenges. Economically, the transition from planned to market economy, reformation of the forest sector, forest tenure and land ownership, and forestry intensification are key topics (e.g., Carlsson, 2000; Carlsson et al., 2001; Solberg et al., 2010; Nordberg et al., 2013; Naumov et al., 2016). Ecologically, while policy objectives about forest biodiversity conservation and the related tools for implementation reflect the state of the art, they are not well understood by those in charge of biodiversity policy implementation (e.g., Lazdinis et al., 2007; Blicharska et al., 2012). There is also limited knowledge, and there are limited resources and tools, for the planning of functional habitat networks and collaboration, poor connections between local and regional planning, and weakly developed public participation (e.g., Blicharska et al., 2011). Furthermore, the maintenance of social and cultural capital is crucially important, especially in remote rural areas (Carlsson, 2000). Finally, the current governance contexts, including both the federal-regional balance and the degree of international influence, are clearly different today compared to the Soviet period (Lehtinen et al., 2004; Malysheva, 2005; Nysten-Haarala, 2009). Additional new challenges are the emerging uncertainty and instability linked to global economic and climate change (i.e. the 'Anthropocene'), which make prediction and control difficult (Lawrence, 2016). This stresses the need to transition from planned top-down decision-making towards regionally adapted solutions where forest landscape stewards and managers collaborate, learn, assess risk, innovate, share findings and evaluate alternatives.
Russia has a legacy of mining wood for industrial use (Knize and Romanyuk, 2005), but is now seeing the need for new approaches (Naumov et al., 2016). An interesting aspect of forest-related development in the Russian Federation is a suite of national and international initiatives using the term model forest to support the implementation of SFM, or aspects thereof, on the ground. Beginning in the 1990s, these appeared in regions with different forest ecosystems and forest histories (Gromtsev, 2002; Elbakidze et al., 2007; Elbakidze and Angelstam, 2008). These initiatives form an important, but so far poorly utilized, pool of knowledge that is needed to understand how different concepts and initiatives succeed in their intentions to promote SFM in the Russian context (Angelstam et al., 2009).
The aim of this study is to analyse place-based initiatives that used the term Model Forest sensu IMFN (2008) in the Russian Federation, to understand and learn from their contribution to the implementation of SFM policy on the ground. First, we mapped all local and regional initiatives using the term model forest in Russia. Given the global role of the circumboreal forest biome, we also mapped the temporal dynamic of Model Forests in Russia, Canada and Fennoscandia. Second, for those initiatives in Russia currently or once listed on the web site of the International Model Forest Network (www.imfn.net), and thus committed to the Model Forest concept as an example of a landscape approach, we analysed how these initiatives complied with the six core principles of the Model Forest concept according to IMFN (2008): landscape, partnership, commitment to sustainability, governance, program of activities, and knowledge sharing, capacity building and networking. We discuss lessons learned, and the need to encourage collaborative learning processes through comparative studies of the effects of Model Forest and other landscape approach initiatives on landscapes as social-ecological systems in different governance and landscape history contexts in Europe and beyond (Angelstam et al., 2011a; Lehtinen et al., 2004; Nysten-Haarala, 2009). Finally, to enhance this, we propose a general, theoretically neutral analytic framework to operationalise learning through evaluation from multiple place-based landscape approach concepts and the initiatives applying them.
Theory: model forest as a landscape approach concept
Providing a wide range of consumptive and non-consumptive benefits to people, firms and society, landscapes' renewable natural resources and values continue to be at the centre of many policies about forests, water, agriculture and rural development. To translate policies into practices that sustain an increasing range of benefits, it is essential to navigate the complexity of interactions within coupled social and ecological systems of many kinds. Sustainable development theory has long recognized the need for participants in resource management issues to work in partnership to build consensus on options (e.g., Sinclair and Smith, 1999). This implies both sustainable development as an inclusive societal process (e.g., Baker, 2006) and ensuring sustainability as consequences on the ground (e.g., Norton, 2005). A key challenge is to actually operationalise this (e.g., Freeman et al., 2015). To maintain natural capital and enhance human well-being, modified landscapes often require capacity-building in social systems to help achieve resilience towards the maintenance and restoration of landscapes as social-ecological systems (Dawson et al., 2017). This involves place-based modification of the biophysical environment, coordination of human management of land and water, and motivation of stakeholders and actors to act sustainably.
The term landscape approach emerged with the aim of coping with this complex web of interactions on the ground (Sayer et al., 2013, 2015; Freeman et al., 2015; Sabogal et al., 2015). To enhance regionally adapted implementation of policies aimed at sustainable development and sustainability in landscapes as local social-ecological systems, a wide range of landscape approach concepts aimed at place-based knowledge production and engaged stakeholder collaboration have emerged. Model Forest, Biosphere Reserve, Ecomuseum and Long-Term Socio-Ecological platform are four examples of landscape approach concepts (Angelstam and Elbakidze, 2017). These concepts are paralleled by other efforts towards area-based rural development programmes (e.g., Giessen, 2010), such as those based on regional governance of collaborative development processes involving forestry, agriculture and water.
Landscape is a well-established concept that can aid knowledge production and learning by fostering the necessary transdisciplinarity, thus integrating researchers and other knowledge producers representing different disciplines, as well as stakeholders representing different sectors at multiple levels (Termorshuizen and Opdam, 2009; Freeman et al., 2015). The term landscape captures the manifold dimensions of the areas and places where people live and work. Consideration of landscapes' biophysical, anthropogenic and perceived dimensions at multiple scales represents a holistic approach to implementing policy about the sustainable use of natural capital through spatial planning and integrated land use management across landscapes as large areas (Primdahl et al., 2017). Climate, terrain, soil and the flow of water determine the particular types of natural ecosystems, and form the biophysical checkerboard that underpins the delivery of ecosystem services. These range from tangible goods and ecological functions to habitat for species and cultural values. Human land use has modified once-natural ecosystems, resulting in cultural landscapes, agricultural fields, managed forests and built infrastructure. Furthermore, landscapes' different land covers provide intangible cultural values, including a sense of place. When landscapes have been intensively used to deliver one kind of ecosystem service, others may not be satisfied or disservices may occur.
Public participation builds on the hypothesis that if those affected by the use and conservation of a natural resource system are involved in the decision-making steering that system, it will be easier to realize the aims expressed in policies (e.g., Blicharska et al., 2011). However, because ecological, economic and socio-cultural contexts vary in space and time, participatory approaches need to be adapted to the specific local and regional context (e.g., Freeman et al., 2015; Angelstam and Elbakidze, 2017). To encourage the necessary collaborative learning among actors and stakeholders, there is a need to develop integrated place-based partnerships involving all key players across different sectors at multiple levels, in local landscapes that matter to the people living there, and beyond. The term landscape approach captures this, and focuses on strengthening cultures of maintaining inclusive social processes on the one hand, and sharing different actors' and stakeholders' views and needs of what multi-functional landscapes and regions should satisfy, and how to accomplish that, on the other. "Partnership towards sustainable landscapes", i.e. a Model Forest as defined by IMFN (2008), is one type of landscape approach.
Satisfying the different dimensions of SFM and other natural resource policies requires governance systems that support coordination and co-operation across the horizontal and vertical organizational dimensions of a landscape as a social-ecological system. Therefore, participation and deliberative consensus-building processes with the goal of enhancing cooperation and coordination among a diverse range of stakeholders are crucial (e.g., Healey, 2006). Thus, the development of a partnership among stakeholders that represent private, public and civil sectors, as well as different levels of governance, is one of the major conditions. Partnerships may support the implementation of approaches towards solving problems in landscape, land or water management; the elaboration of new mechanisms of knowledge and experience exchange; the enhancement of the resource basis (technical, financial, etc.) for common targets; and the understanding of the values and qualities of each sector necessary for the integration and sustainability of society (Tennyson and Wilde, 2000). Ideally, a partnership replaces competition with collaboration and leads to the development of improved personal, organizational and institutional capacity among partners. Freeman et al. (2015) identified three meanings of landscape approach, namely a large spatial extent, a sectoral approach, and an integrated approach. Focusing on an integrated landscape approach, like the Model Forest concept, they examined five criteria that emerged from their analysis of the landscape approach, i.e., multifunctionality, participation, inter/transdisciplinarity, sustainability and complexity (Table 5 in Freeman et al., 2015). Axelsson et al.
(2011) presented a practical operationalization of landscape approach using five core attributes that should be satisfied: (1) a sufficiently large area that matches management requirements and challenges to deliver desired goods, services and values, (2) multi-level and multi-sector stakeholder collaboration that promotes sustainable development as a social process, (3) commitment to and understanding of sustainability as an aim among stakeholders, (4) integrative knowledge production, and (5) sharing of experience, results and information, to develop local tacit and general explicit knowledge. All five require collaborative participation. Sayer et al. (2013) noted that at the international level there has been a shift from conservation-orientated perspectives towards increasing integration of poverty alleviation goals. They listed ten summary principles (see also Table 2 below in the discussion) to support implementation of a landscape approach, all emphasizing adaptive management, stakeholder involvement, and multiple objectives.
The application of the analytic framework we used to address compliance with the international Model Forest concept followed the approach used by Blicharska et al. (2012) to analyse the implementation of the Polish Promotional Forest Complex's 19 local initiatives, a concept aimed at promoting the Polish interpretation of SFM, against the principles that define that concept. Regarding the Model Forest concept, there are six core principles established by IMFN (2008) that define the acceptance of a Model Forest as a member into the international network: (1) a landscape large enough to present an area's diverse forest uses and values; (2) an inclusive and dynamic partnership; (3) a commitment to sustainability; (4) a governance structure that is representative, participative, transparent and accountable; (5) a program of activities reflective of partner needs and values; (6) a commitment to knowledge-sharing, capacity building and networking from the local to the international levels.
Methods
To map initiatives using the term model forest in the Russian Federation, we first made an internet search using the string "model forest" AND "Russia" in both English and Russian in 2011, which was the last year with an active model forest initiative (see Table 1). Russia hosts the majority of the circumboreal forests. Therefore, to put Russian Model Forest initiatives in a global perspective, we first mapped the temporal dynamic of initiatives in the circumboreal biome using the web sites of the international and Canadian networks of Model Forests, and updated these data through interviews with the coordinators of the Russian, Fennoscandian and Canadian networks of Model Forests in 2017. Next, we focused the analysis on the five initiatives in the Russian Federation that were once committed to the Model Forest concept and followed the guidelines in IMFN (2008).
To understand the drivers behind the establishment of initiatives using the term model forest, and to assess the compliance of individual model forest initiatives with the Model Forest concept's principles (IMFN, 2008), we used mixed methods (Greene, 2007). The qualitative analysis was inspired by grounded theory (Glaser and Strauss, 1967). First, open-ended qualitative interviews were conducted with Model Forest coordinators, facilitators and leaders, as well as with stakeholders and actors at different levels of governance. The total number of face-to-face interviews amounted to 36 in Kovdozersky, 102 in Komi, 77 in Pskov, and 30 in Kologrivsky. The interviews focused on the development process of the Model Forests from idea to implementation, maturation and eventually termination. This included the nature of stakeholder participation and the governance structures of Model Forests, to understand correspondence with IMFN principles 2, 3, 4 and 6. Second, to analyse how the programs of Model Forests' activities reflected the partners' needs and values (Model Forest principle 5), we analysed documents and reports stored in Model Forest archives, and information published in regional newspapers, journals and magazines. Our aim was to assemble an overview of the local and regional issues or concerns related to natural resource management that eventually led to the establishment of a Model Forest initiative. Third, we made literature searches for peer-reviewed research articles and papers in professional forestry journals. Data were collected during the period 2005-2017 during multiple periods of research in the Model Forest initiatives in Kologrivsky (by AK, BP), Komi and Pskov (by ME, MT, PA), and Kovdozersky (by ME, PA). Gassinski was terminated before this study commenced, but was observed by EZ; here we relied on two key informants who were involved with this initiative as auditors.
Finally, regarding principle 1 we developed one ecological, one economic and one social sustainability argument regarding what constitutes a sufficient size of a Model Forest landscape. In the results section, each of the Model Forest principles is summarised, after which the corresponding results are presented.
Survey and dynamics of model forest initiatives
The internet search to map place-based initiatives using the combined key words "model forest" and "Russia" resulted in 36,600 hits in English and 17,400 in Russian. Using the first 500 results from each of the searches, we identified 12 initiatives (Table 1, Fig. 1). There were, however, clear differences between the information on the English and Russian web sites. On web sites in English, the five Model Forests (Gassinski in Khabarovsk kraj, Kologrivski in Kostroma oblast, Komi in the Komi Republic, Kovdozersky in Murmansk oblast and Pskov in Pskov oblast) once listed by the International Model Forest Network (Table 1) showed up. The Taiga and Volga Forestry initiatives, in the Republic of Karelia and Nizhny Novgorod oblast respectively, were also listed in English (Table 1). On Russian web sites all these seven initiatives were listed, as well as an additional five initiatives (Table 1, Fig. 1). In terms of frequency of occurrence, Komi Model Forest ranked highest on English web sites and Pskov Model Forest highest on Russian sites. Gassinski Model Forest ranked third in both languages. General information and discussion about Russian initiatives was represented in 12-13% of web sites in both languages. Finally, English web sites comprised a high proportion of sites without content relevant to Model Forest initiatives in Russia (49%) compared to Russian sites (1%). The initiatives varied much in terms of origin (Russian or foreign), year of appearance (1994-2011) and stage of development (from idea to terminated project) (Table 1).
The temporal dynamic of the five Russian Model Forest initiatives
Drivers for establishment of model forest initiatives
Since the early 1990s, organizations and policy processes in several different countries have been involved with plans and initiatives to support the implementation of SFM policy on the ground in the Russian Federation (Lehtinen et al., 2004; Besseau et al., 2008). Official plans to establish Model Forests in Russia began in 1992, and in 1993 a limited competition among several proposals resulted in the creation of Gassinski Model Forest in 1994, with Canadian funding and Russian in-kind support (Anon, 2006). Within the framework of the Barents Council, which includes Russia, Finland, Sweden and Norway, high-level meetings were held that argued for place-based initiatives using the term Model Forest (Barents Region Forest Sector Initiative, 2001). Three Model Forests (Kologrivski, Komi and Pskov) emerged independently, being initiated and financed from abroad, i.e. the Netherlands, Switzerland and Sweden/Finland, respectively (Elbakidze et al., 2007). The FSTF started the planning process to identify a suitable new Model Forest area in NW Russia. An open call for Model Forest proposals was launched and questionnaires were sent to the regional committees of natural resources in Karelia, Murmansk, Novgorod, Vologda and Leningrad oblasts. In 2003 the FSTF received three proposals from Archangelsk region and one from Murmansk. Archangelsk's suggestion included three alternative forest management units. Murmansk region proposed the Kovdozersky state forest management unit as the location for developing a Model Forest (Elbakidze et al., 2007).
Bordering Russia, Finland has a long history of forest-related collaboration with Russia. This includes the establishment of the Taiga Forest initiative.

[Table 1: Model Forest initiatives, and general information about Russian Model Forest initiatives on the internet, based on searches using the words "Model Forest" + "Russia" in English and in Russian (n = 500 for each category). Only five of the 12 Model Forest initiatives (Gassinski, Kologrivski, Komi, Kovdozersky and Pskov) were members of the International Model Forest Network (www.imfn.net).]

... obtaining knowledge on how SFM can be implemented in regions with different systems of management and levels of economic development to satisfy social needs in forest products, services and values; (4) development of approaches that include local communities in the processes of making decisions about natural resource management at local and regional levels. The web site http://modelforest.ru, created in 2010 but now gone, and operated on behalf of the Russian Forestry Agency by the International Forest Institute and the All-Russian Scientific Research Institute of Forestry and Mechanisation (VNIILM), was a clear sign of the ongoing Russian-Canadian collaboration. In 2007 the Russian Federal Agency of Forestry, inspired by the experience of the Canadian Model Forests, had a plan to create 31 Model Forests in all forest zones of the Russian Federation (Elbakidze et al., 2007; Elbakidze and Angelstam, 2008). Each Model Forest initiative was supposed to include both a geographical area and a special approach to SFM based on partnerships of all representative stakeholders (V. Roshchupkin, pers. comm.; Anon, 2006). Following the appearance of the new Russian Forest Code in 2007, the idea was to develop Model Forests as "forest laboratories" and examples of how to implement policies about SFM based on domestic and international experiences (Zheldak, 2008; Martynuk et al., 2009).
Zheldak (2008) listed 22 different sites in a proposed Russian Model Forest network. From a geographical perspective, the idea was that each Model Forest should embrace a concrete territory, and that the network would represent all types of forests, and all forest goods, services and values, in Russia. However, due to changes in the political establishment in the Russian Federation, the idea of developing a Russian Model Forest network was abandoned.
Summing up, the 12 Model Forest initiatives varied much in terms of origin (Russian or foreign), year of appearance and stage of development (from idea to completed project) during the period 1994-2011 (Table 1). In 2016, three Model Forest initiatives (Komi, Pskov and Kovdozersky) were still listed on www.imfn.net. In spite of this, they were not active (Table 1, Fig. 2). In 2018 only Komi was listed, and the co-ordinating body for Komi Model Forest, the Silver Taiga foundation, had developed into a successful consultancy company focusing on developing different aspects of SFM in NW Russia. The Pskov Model Forest project ended in 2008. To continue disseminating the experiences of forestry intensification developed by the Pskov Model Forest by adapting Scandinavian experiences to the Russian context, the NGO Green Forest, established in 2002, continued as a consultant NGO (P. Hazell, pers. comm.). Former Pskov Model Forest staff members still continue disseminating how to introduce intensive forest management in the boreal biome in Russia (B. Romanyuk, pers. comm.). This is supported by the international pulp-and-paper industry, which depends on local and regional wood resources. These big companies lobbied the Russian federal forest authority for the intensive forest management model, and from 2015 the Federal Agency of Forestry gave its official support to the dissemination of this model.

Principle 1: A landscape

According to IMFN (2008), the area of a Model Forest should include the diversity of forest uses and regional values, and represent large ecosystems. It must not, however, be so excessively large that forest users lose the sense of place. These criteria are fluid and dependent on context. To provide some idea of the size of a landscape, we provide one ecological, one economic and one social sustainability argument. Using the minimum occurrence thresholds of breeding birds listed in the EU Habitat Directive at the home-range and landscape scales, Angelstam et al.
(2004) estimated that the average minimum area needed for 100 females was about 250,000 ha in a dynamically managed boreal forest landscape. Assuming that viable populations would need an effective population of 500 females (Meffe and Carroll, 1994), the area needed for viable populations would thus exceed 1,000,000 ha for the bird species in the example above. An economic example is a fictive boreal forest management unit that delivers 2,000,000 cubic metres of timber and pulpwood to sawmills and pulp-and-paper industries every year. Assuming a growth rate and harvest of 4 cubic metres per ha per year, the size of the forest management unit would be about 500,000 ha. From a regional planning perspective, and assuming that forest cover in this region is 50%, thus excluding bogs and wetlands, cities and settlements, agricultural and other land, this would be equivalent to an administrative region or catchment of about 1,000,000 ha. Without the wood supply provided through commercial thinning, as in Sweden and Finland, the "growth rate" would be about 2 cubic metres per ha per year, and the management unit twice as large. Finally, from a social system perspective, the daily home range of people can be estimated from the observation that, across time and space, people do not commute to work for more than 1.5 h per day, corresponding to ca 50-60 km one-way travel distance by car or train (e.g., Lindelöw, 2018). With a radius of 56 km around a regional centre, a social system landscape also covers about 1,000,000 ha.
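The three back-of-envelope size arguments above reduce to simple arithmetic. A minimal Python sketch, using only the assumptions stated in the text (250,000 ha per 100 females, a 2,000,000 m³ annual harvest at 4 m³ per ha per year, 50% forest cover, and a 56 km commuting radius), checks that each argument lands on the order of 1,000,000 ha:

```python
import math

# Ecological argument: ~250,000 ha supports 100 females; a viable
# population of 500 females scales the required area linearly.
area_per_100_females_ha = 250_000
viable_area_ha = area_per_100_females_ha * (500 / 100)
print(viable_area_ha)  # 1,250,000 ha, i.e. exceeds 1,000,000 ha

# Economic argument: 2,000,000 m3 harvested per year at a sustainable
# growth/harvest rate of 4 m3 per ha per year, in a region with 50%
# forest cover.
harvest_m3 = 2_000_000
growth_m3_per_ha_yr = 4
forest_area_ha = harvest_m3 / growth_m3_per_ha_yr  # 500,000 ha of forest
region_area_ha = forest_area_ha / 0.5              # whole region at 50% cover
print(region_area_ha)  # 1,000,000 ha

# Social argument: daily home range as a circle of radius ~56 km
# around a regional centre (1 km2 = 100 ha).
radius_km = 56
social_area_ha = math.pi * radius_km**2 * 100
print(round(social_area_ha))  # ~985,000 ha, on the order of 1,000,000 ha
```

Note also that halving the growth rate to 2 m³ per ha per year, as in the no-thinning case, doubles `forest_area_ha` to 1,000,000 ha of forest alone, consistent with the text.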
The sizes of Gassinski, Kologrivski, Komi and Kovdozersky Model Forests all ranged from several hundred thousand to about 1,000,000 ha, while Pskov Model Forest was much smaller (Table 1). This suggests that the territories of the four first Model Forests listed above were of sufficient size to address economic, ecological and social sustainability issues at multiple spatial scales, while the area of Pskov Model Forest was too small to cover all sustainability dimensions. For example, the size of Kologrivski Model Forest was large enough to test the relevance of the catchment-based approach for planning logging sequences, as well as to elaborate specific technologies adapted to various abiotic landscape units (e.g., forest site types linked to morainic, glaciofluvial and loess soils). However, there were also limitations. Landscape-ecological tools proposed for decision-making in the Model Forest assume the priority of natural boundaries, which are curvilinear in most cases. However, the regional-scale planning documents insist on following strict linear boundaries of logging areas. Measures aimed at the protection of old-growth forests were often perceived by practitioners as undesirable due to the lack of other easily accessible coniferous stands. Tenants of state forest lands did not have enough incentives to build networks of roads and preferred to use existing ones. This leads to the overexploitation of accessible forests and the emergence of huge monotonous young stands with low biodiversity. At the same time, remote stands are not harvested. Although some experienced forest managers showed awareness of the need to retain high conservation value forest stands such as old-growth remnants and riparian forests, they preferred to use conventional technologies in order to avoid fines (e.g., for leaving groups of trees in a logging area).
Principle 2: An inclusive and dynamic partnership

General
Our analyses of the coordination and co-operation at different governance levels and among sectors in order to improve local governance show that the five Russian Model Forests committed to IMFN principles varied in their approaches to partnership development.
Gassinski model forest.
The ultimate goal of this Model Forest was to provide support for socio-economic development in the remote and natural-resource-dependent Nanajski district in Khabarovsk kraj in easternmost Russia. There was a heavy dominance of partners from the public (government) sector at its creation in 1994, because the state was the only owner of forests and natural resources. Canadian funding of 1,000,000 CAD/yr was granted for the first 5-year period of this project, and in-kind contributions of the same order of magnitude were provided by the Russian counterparts. This allowed for sustaining open discussion and dealing with elitism, which later evolved into participation of civil society and indigenous people, governmental organizations and businesses, including saw mills, wildlife management, and forest inventory. However, according to Gatsuo's (2002) survey of local stakeholders' perceptions of a new national park within the Gassinski Model Forest area, many did not have a clear idea of the Model Forest's boundary, and the Model Forest's projects and activities concerned with local involvement and public communication were perceived as insufficient. This suggests a low level of participation (sensu Arnstein, 1969).
Kologrivski model forest.
This Model Forest project ran under a grant from the Dutch government during the period 2006-2009, which was spent to conduct research and organise public and stakeholder involvement needed to formulate the main tasks of Kologrivsky Model Forest (Ladonina and Pedroli, 2009). At the start it was recognised by the project leaders that the local stakeholder commitment to SFM in this area, located far from the European market, was very weak (Anon., 2005). The majority of stakeholders from local to regional levels did not understand the idea of SFM, and were negative to the development of a Model Forest. They were afraid that it would bring serious limitations to the exploitation of forest resources for the national market. The almost simultaneous formal establishment of the Kologrivski Forest Federal Strict Nature Reserve (Zapovednik) in January 2006, although in line with the Model Forest concept, did not make the position of the Kologrivski Model Forest easier. Local forest managers argued that the Model Forest had no sufficient legal credentials to implement environmentally-friendly harvesting techniques, which in many cases turned out to be in contradiction with present-day practice. The loss of a local champion had a negative impact on the further development of the Model Forest.
Komi model forest.
A total of 51 partners participated in the project's later phases 2000-2006, representing the public, civil and private sectors. Since 2006 it has continued in the form of the non-profit foundation "Silver Taiga". This partnership successfully created a platform for discussing a broad spectrum of local and regional SFM problems, and for gradual development of collaboration. However, the majority of partners were from the public sector, and the private sector was less represented. The partners from the civil sector actively participated in implementation of different Model Forest activities on the ground by working closely with local schools, museums and local youth clubs (Majevski et al., 2008; Tysiachniouk, 2005, 2006, 2010). Although not primarily focussing on the Model Forest issue, the Russian-Dutch cooperation on the Pechora River Integrated System Management 2003-2008 was also strongly engaged in SFM issues in the Komi Republic (Van der Sluis et al., 2004; Van Eerden et al., 2005).
Kovdozersky model forest.
This initiative began in 2006. More than 30 organizations representing local and regional business, local administration and regional non-governmental organizations responded to the invitation to become partners of the Model Forest project. However, the public sector dominated, and the majority of partners did not understand the ideas of the Model Forest as defined by the IMFN (2008) (see Elbakidze et al., 2007, 2012). This partnership was initially directed at solving a number of specific issues in forestry associated with introducing Fennoscandian intensive forest management, in the hope that this would help to change forest legislation to accommodate this transition (Tysiachniouk, 2006, 2008, 2010, 2012; Tysiachniouk and Tulaeva, 2008; Tysiachniouk and Meidinger, 2012).
Principle 3:
A commitment to sustainability 4.3.3.1. General. According to official documents and statements, the objectives of all the Model Forests were to implement SFM by satisfying its economic, ecological and social dimensions at the regional level, and to disseminate new experience in the Russian Federation. However, due to regional differences determined by the natural characteristics of the local forest landscapes, their environmental history and current differences in economic development, the motivations for Model Forest development differed. This, in turn, influenced the priority of SFM-related tasks of the Model Forests.
Kologrivski model forest. This Model Forest was initiated as a continuation of two Russian-Dutch cooperation projects in Kostroma
Oblast, namely the Kologrivski Forest strict nature reserve and Kostroma-ECONET (Ladonina and Pedroli, 2009;Glazov and Pedroli, 2011;Nemchinova and Khoroshev, 2011). The main goal of this Model Forest initiative was to implement sustainable multiple use forestry through spatial planning based on landscape ecological principles (Khoroshev, 2008). An additional objective was to certify the forest management according to the Russian National standard of forest management developed by the Russian National Committee of forest certification. There was a plan to create a Model Forest information centre. However, these plans were not realized (Ladonina and Pedroli, 2009). The regional wood-processing industry showed no active interest in cooperation for preparation of forest certification procedures. A typical explanation from timber harvesting companies involved was that there was no need for certification given their preferred orientation on Russian markets only. However, some managers demonstrated interest in cooperation with the Model Forest concerning an update of the state forest inventory results and recalculation of available timber resources taking into account actual landscape diversity.
Komi model forest.
Protection of pristine forests from wood harvesting was the original motivation for developing the Komi MF. In the beginning of the 1990s, several foreign forest companies began logging operations in the naturally dynamic forests adjacent to the Pechora-Ilych Reserve in the eastern Komi Republic (Elbakidze and Angelstam, 2008;Tysiachniouk, 2012). To prevent exploitation of these last large intact forest landscapes (Yaroshenko et al., 2001;Van der Sluis et al., 2004;Van Eerden et al., 2005), researchers from Russia and Sweden prepared a project with an aim to elaborate approaches to their sustainable management and submitted it to World Wildlife Fund (WWF) International. The project idea was accepted and began in 1996. The Swiss Agency for Development and Cooperation (SDC), which supported SFM implementation in countries in transition in the mid-1990s, funded the project. In 1999, SDC decided to shift the focus of the project to southwest Komi, and to use the term Model Forest. Criteria for selecting a Model Forest area were formulated and the area of Priluzje state forest enterprise in south-westernmost Komi was chosen for the Komi MF development. Although the main focus of the Model Forest activity at the beginning was on ecological aspects of forest management, analysis of the project materials and publications shows that priorities had been shifted from mainly ecological to economic and socio-cultural issues in forest management, and dissemination and education.
Kovdozersky model forest.
This Model Forest project was initiated by the Council of the Barents-Euroarctic region within the collaboration in the Barents region. To implement SFM in NW Russia, a working group of the Council on economic collaboration started in 2002 a search for the most suitable region for developing a Model Forest in North-West Russia. In 2003, the Kovdozersky state forest enterprise in the south of Murmansk Oblast was chosen out of several candidates to develop the Model Forest. In 2004, consent of the Federal Agency of Forestry and the Ministry of Natural Resources of the Russian Federation was obtained. The main objective of developing the Model Forest was to revive forestry and the forest industry, which were the main employers in the region during the Soviet time, but declined dramatically in recent times, leading to negative social consequences in the villages, such as high levels of unemployment, depopulation and alcoholism (Elbakidze et al., 2007). Thus, the initial aim of the Model Forest was related to social and economic aspects of forest management. The Kovdozersky Model Forest initiative was not able to realize its ambitions and ended soon after the foreign donors stopped their financial support.
Pskov model forest.
In the 1990s, the Pskov region, bordering the Baltic States, began playing an important role in timber trade and came into the zone of economic interests of the trans-national corporation Stora Enso. To ensure regular wood supplies, the company decided to initiate logging operations in the Pskov region. However, use of modern Scandinavian practices contradicted the existing Russian system of forestry regulations. To reach economic efficiency, Stora Enso had to develop new norms for its operations, which would be appropriate under the conditions of the Pskov region, with its extensive secondary forests after previous clear-fellings, and which could ensure economic profitability of the timber industry in the given market situation. In 2002 the company decided to initiate a project targeted at developing new ways of intensive forestry, adapted to the conditions in NW Russia. Given the partnership between Stora Enso, WWF, the Swedish International Development Agency (SIDA) and Russian authorities, there was a balance between economic, ecological and social motives, in terms of poverty alleviation through strengthening Russian forest sector business capacity (Tysiachniouk, 2012).
Principle 4:
A governance structure that is representative, participative, transparent, and accountable 4.3.4.1. General. The governance systems of Russian Model Forests had similar features in terms of (1) a donor, financing and controlling project development; (2) project executives (or a non-governmental organization), implementing it; (3) a council of observers, which represents the interests of donors and coordinates activities; (4) a coordinating board (or a working group), consisting of partners' representatives and participating in the elaboration of the action plan. While the Model Forests had large partnerships, their achievements seem to have depended on the efforts of a dedicated core group (e.g., Anon, 2006). Hence we present the results for three aspects rather than for each of the five Model Forests.
The decision-making process in the Model Forests generally included (1) a project-funded NGO or project executives, who through consultations with a working group or coordinating board, revealed problems in forest use or management, evaluated them and found solutions. These were then discussed with (2) stakeholders, and then with (3) donor representatives. For example, in the Komi Model Forest the process was initiated by the donor, whose representative became a key actor in the governance system. Later, the NGO Silver Taiga took over a key role in the design of a strategic action plan to implement SFM with regard to regional and local natural resources, economic conditions, and the interests of the forest stakeholders. To realize the plans, a working group with representatives of the main partners was formed. The discussions among partners on different aspects of SFM were heated, and opinions were often difficult to reconcile.
The implementation of decisions generally began after they were approved by the donor or the project leadership (see also for Pskov and Komi Model Forests). NGO-based project teams as executives worked with target groups, stakeholders and governmental organizations to realize adopted decisions, the donor or its representatives controlling this process. In addition to involving donors into the decision-making process, the transparency of the system of management was ensured by the work of the public relations group, which disseminated information on the project implementation via mass media and publication of various materials.
The local population participated in the decision-making process and its implementation through (1) public hearings on questions of forest use, which were also required by FSC certification in Komi and Pskov; (2) formation of forest forums as neutral platforms for discussing questions of forest use between the local population and other stakeholders; (3) provision of small grants for different activities in the Model Forest, such as forest forum events, ecology festivals and preparation of ecological trails. Libraries, schools, and cultural establishments were the main recipients of grants (Tysiachniouk, 2006, 2012). Educational activities were important components in the Model Forest governance system (see e.g., Ladonina and Pedroli, 2009, p. 27-29). Local and regional questions and problems of forest use and management became the topics of educational programs of field seminars and excursions for forest users and representatives of governmental structures at different levels. Some of these seminars showed evidence that the employees of the local forest management departments were disappointed by the necessity to spend too much time not in the forest but in the office on bureaucratic needs, especially following frequent alterations in legislation. They complained that recent legal regulations deprived them of some important rights and duties for forest recovery and fire protection ("we know how but have no right"), while tenants in contrast "must do that but do not have enough skills" (Ladonina and Pedroli, 2009). These questions were then discussed at regional, federal and international levels, which made the situation in the region more open and transparent, and attracted public attention to issues of forest use. Furthermore, new specialists were trained, who then possessed practical knowledge in dealing with SFM issues.
Principle 5: A program of activities reflective of partner needs and values 4.3.5.1. General. Although the initial motives for creation of Model
Forests were different, given sufficient time they all developed programs directed at balancing the major dimensions of SFM (ecological, economic, socio-cultural). They all also stressed the need for developing education and improving the system of forest governance. The main directions of activities in the programs were similar and envisaged the following items: maintaining ecological functions of forests; conservation of biodiversity and pristine forests; support of the FSC-certification process; economics of forest use; involvement of the local population in the process of decision-making related to forest management; educational courses; and development of regional forest regulation. Thus, all Model Forests' programs reflected the interests and values of partners, supported by discussions, collaborative work on projects and the requirements of donor organizations.
4.3.6. Principle 6. A commitment to knowledge-sharing, capacity-building and networking at multiple levels 4.3.6.1. General. We identified three stages in the development of Model Forests in Russia, each characterized by different scales of their network activities and capacity-building. During the first "childhood stage", a regional team of specialists was formed that acquired professional experience and knowledge for identification, evaluation and solution of concrete problems of forest use at local and regional levels. These specialists also learnt to collaborate with Model Forest partners and other stakeholders, and to establish contacts with governmental structures for implementation of decisions on SFM policy. Donor organizations, like caring and demanding parents, helped projects to develop continuously.
At the second "maturing" stage, Model Forests, having funding and the experience and knowledge on solving difficult problems related to forest use, realized the necessity of presenting themselves on the national level as knowledgeable and experienced stakeholders in the process of forest sector reformation in the Russian Federation. This evolutionary period of Model Forests was accompanied by a number of important events. One example is that the representatives of five Model Forests signed an agreement for collaboration within the framework of the Initiative Network of Model Forests of Russia in June 2006 (Elbakidze and Angelstam, 2008).
During the third "adult" period, Russian Model Forests began their independent life without donor assistance. The foreign financial support ended in 2006 for the Komi Model Forest and in 2009 for the Pskov and Kologrivski Model Forests. At present, two Model Forest projects have evolved into the independent, project-funded organizations Silver Taiga and Green Forest, respectively. Today, only Silver Taiga maintains its niche in addressing questions of SFM at local and regional levels with partners at different levels through projects. Kologrivski Model Forest has gone into hibernation, but a group of former participants of the Model Forest project still continue working out a methodology of region-specific multifunctional forest management (Khoroshev and Koshcheeva, 2009; Nemchinova and Khoroshev, 2011).
Model forests in Russia: projects with different meanings
According to IMFN (2008), a Model Forest is characterized by operating in a landscape and having a partnership of stakeholders committed to sustainability. All principles listed by IMFN (2008) were formally shared by the five Model Forest initiatives once listed at the IMFN web site. The adoption in Russia of the Model Forest idea in the early 1990s, shortly after its creation, was related to the Russian concept of "best practices demonstration forests", which was established to develop and show forest management practices adapted to different regions (e.g., Yatsenko, 1999). This Russian concept is close to the term Model Forest, but does not imply the Model Forest principles concerning either sustainable development as a societal process or sustainability across multiple dimensions as outcomes. Thus, the term model forest, as originally used in Russia, referred to a demonstration or research site (Kolström and Leinonen, 2000), and did not include a partnership of stakeholders committed to SFM as a deliberative social process in landscapes with their ecosystems and social systems, as defined by IMFN (2008). This is consistent with two barriers to implementing SFM through a landscape approach in Russian forests. The first is the legacy of top-down decision-making, including a consensus of scholarly analyses showing that, if Russia ever did enter a transition to democracy after the end of the USSR, that transition was not successful (Evans, 2011). The second is the clear focus on timber and pulpwood, which is illustrated by the new 2006 Russian Forest Code that came into force in 2007, and which according to Hitchcock (2011) lacks commitment to environmental issues.
Hence, there are two views regarding how the term model forest is defined in Russia. The first can be described as demonstration of best practices in forest management that focuses on wood production. These were top-down demonstration projects that were not linked to the International Model Forest Network's approach (IMFN, 2008), and which aimed at demonstrating and introducing best practices with a focus on forestry intensification (e.g., the Finnish-funded Taiga and the Volga Forestry Model Forests) (Leinonen and Kolström, 1999; Indufor Oy, 2009). The second was a landscape approach based on applying a place-based societal sustainable development process towards sustainability, built on integrative knowledge production regarding both social and ecological systems (e.g., Axelsson et al., 2011; Kläy et al., 2015). Three Model Forests that were linked to IMFN began top-down with funding from abroad, but then developed into more participatory governance (Gassinski and Pskov, and especially Komi Model Forest). Foreign donors' support to collaborative learning was crucial for this transition. Two Model Forests linked to IMFN had a bottom-up, more Russian-led regional approach from the beginning (Kovdozersky and Kologrivski).
The Model Forest in Komi is an example of a successful project, which transformed into the non-profit consultancy company Silver Taiga focusing on economic, ecological and socio-cultural dimensions of forest landscapes. During the period 1996-2006 a Swiss-funded project was carried out using the name Priluzje Model Forest, which was facilitated by the Silver Taiga Foundation under the active supervision of a donor committed to transdisciplinary knowledge production (Kläy et al., 2015). However, the site www.komimodelforest.ru has not been updated since January 2009, and visitors are re-directed to www.silvertaiga.ru. Similarly, the Pskov Model Forest project was carried out 2000-2008 and the most recent news on the site www.wwf.ru/pskov/ was from February 2009. The Green Forest Foundation, which was established by the donor, was expected to continue the dissemination of Pskov Model Forest experiences. However, the most recent news from the site www.green-forest.org is from January 2007. Nevertheless, in the case of the Pskov Model Forest, the experiences of introducing the Nordic intensive forestry model in Russia, as well as approaches to planning for biodiversity conservation, continue to be disseminated to other regions in Russia (B. Romanyuk, pers. comm.).
In 2007 the Russian Federal Agency of Forestry claimed that it would take administrative and financial responsibility for developing a network of Model Forests in Russia. However, during the transformation of the Russian forest sector after the new Forest Code in 2007, and the subsequent restructuring of state agencies and changing jurisdictions, the support for this network ceased. Unfortunately, the considerable practical experience accumulated in solving regional questions of SFM and its governance in Russian Model Forests and other place-based initiatives is therefore not used to its full potential to address current challenges (see Nordberg et al., 2013; Naumov et al., 2016; Angelstam et al., 2017a). According to Crotty et al. (2014), the passing of the Russian NGO Law in 2006 has led to a reduction in NGO activity, as well as curtailment of civil society development. Foreign support to Russian civil society is considered as "foreign agent" activity, and has ceased. Instead, groups funded and therefore controlled by the state dominate. This has implications for landscape approach initiatives' focus on multi-level governance as part of democratic development.
All Russian Model Forest initiatives committed to IMFN (2008) and reviewed in this study were short-term or long-term projects, which appeared as a result of successful timing, a combination of donors interested in Russian SFM development, and sometimes a strong local or regional champion. These factors made it possible to promote and implement new decisions in order to change and improve forest management according to the desires of stakeholders. The majority of the activities in the decision-making and implementation processes were initiated, facilitated and financed by foreign donors.
The type of donor chosen by the local champion determined the interests and approaches towards the implementation of SFM in Russia. With the exception of the Gassinski and Kovdozersky initiatives, all Model Forests were actively using international FSC forest certification (or, in the case of Kologrivski, the Russian forest certification system (Zheldak, 2008)) as a tool for SFM implementation. This was also a way to get funding, since the initiatives became important structures for the FSC implementation process, which kept the Model Forest initiatives alive (Tysiachniouk and McDermott, 2016). However, our results show that a local governance arrangement supported financially, and partly professionally, from abroad cannot be adaptive in the long run, including in a "post-project" life. Similarly, without place-based facilitation, forest certification of international origin frequently faces problems in including social interests and engaging local communities (Henry and Tysiachniouk, 2018; Tysiachniouk and Henry, 2015).
International sharing of knowledge among Model Forests
As a landscape approach concept focusing on forests, the Model Forest concept emerged with the aim to establish a flexible adaptive approach based on large-scale experimentation and pilot areas to demonstrate potentially useful approaches to sustainable forest management (Brand et al., 1996). A fundamental principle of the Model Forest concept is the generation and sharing of knowledge through research, innovation and collaboration (Bonnell, 2012). Hosting the largest proportion of the circumboreal forest biome, Russia is of key importance for conservation and sustainable use of boreal forests. However, the route towards implementation of SFM policy varies depending on a particular region's or landscape's history, social and community base, economic development, and ecological context. Our analysis of Model Forest initiatives in Russia shows that there is a rich pool of experiences that can be used to gain knowledge needed to support the implementation of SFM, and for the development of local to regional adaptive governance initiatives, not only in Russia but also internationally. In spite of multiple reasons for encouraging application of a landscape approach such as Model Forest in the circumboreal forest biome's different contexts, it is surprising to observe the disappearance of the Model Forest concept in Russia, and also the decline of Model Forest initiatives in Canada (Fig. 2). This diversity of path dependencies offers unique opportunities for systematic analyses of multiple Model Forest initiatives (Brand et al., 1996; Bonnell, 2012). Analyses of multiple Model Forest initiatives offer opportunities for learning about participation and governance in different contexts, as well as about particular aspects of SFM (Nordberg et al., 2013). For example, the European continent's West and East form a 'time machine', which provides unique potential for mutual collaborative learning towards multi-functional landscapes and regions.
This is possible due to the steep gradients in land use history whereby the gradual exploitation and intensive management of forest resources has spread like a tidal wave from areas of high demand in the West to more and more remote regions in the East (e.g., Naumov et al., 2018). Similarly, there are large regional differences in governance arrangements and social and cultural capital.
Achieving increased sustained yields of wood, fibre and biomass is a key issue in Russia (e.g., Indufor Oy, 2009; Naumov et al., 2016), and there is an interest in learning from the experiences of the Nordic model for intensive forest management developed in Finland and Sweden (Nordberg et al., 2013). This was the key function of the Pskov MF. However, identifying both positive and negative consequences of intensification for different SFM dimensions is crucial (Angelstam et al., 1997, 2011a). For example, comparisons of indicators of economic vs. ecological sustainability in Sweden and Russia, as well as in countries at intermediate intensification stages, demonstrate an inverse relationship (Angelstam et al., 2018a; Naumov et al., 2018). This calls for the design of functional networks of protected areas prior to the onset of forestry intensification in Russia (Degteva et al., 2015), and not afterwards, as in Sweden (Angelstam et al., 2011b). Additionally, the functionality of habitat networks needs to be analysed. As illustrated by continued wood mining in protective zones near water in the Komi Republic (Naumov et al., 2017), satisfying rival forestry objectives continues to be a challenge in Russia. On the other hand, restoration of biodiversity in countries like Sweden and Finland, both applying the Nordic intensive forest management approach, requires knowledge that can be obtained through comparative studies of species, habitats and ecological processes using Russia's remaining intact forest landscapes as benchmarks (Yaroshenko et al., 2001). Such knowledge exchange is particularly crucial as a strong bio-economy discourse is emerging (Eyvindson et al., 2018). Also regarding social sustainability and its governance there is opportunity for mutual learning among countries and regions.
Focusing on forest landscapes as arenas for rural development (Giessen, 2010), both Sweden and Russia have legacies of a clear focus on producing raw material for industry. In contrast, in Turkey a key topic for Model Forest development based on the country's forest policy is multiple use aimed at rural development to reduce urbanisation (see Tolonay et al., 2014). Regarding governance, a comparison of Model Forest initiatives in Russia and Sweden showed that while they were predominantly top-down projects in Russia, they were processes initiated from below in Sweden. This pattern was found also when comparing the application of another landscape approach concept (Biosphere Reserve) in Ukraine, with top-down enforcement, and in Sweden, with a focus on collaboration with local stakeholders.
Application of landscape approach concepts such as Model Forest implies that to balance different kinds of economic use, human wellbeing and conservation of ecological and cultural values, there is a need to combine integrated landscape strategy with collaborative learning among stakeholders. Continuous monitoring of and communication among stakeholders about state and trends of different SFM dimensions is crucial. According to Sayer et al. (2013) "Landscape approaches seek to provide tools and concepts for allocating and managing land to achieve social, economic, and environmental objectives in areas where agriculture, mining, and other productive land uses compete with environmental and biodiversity goals". Also implementation of SFM policy requires consideration of both social and ecological systems, and how they interact. Thus, a landscape commonly hosts many land cover types with associated policies (e.g., Giessen, 2010). Therefore any proactive partnership aiming at sustainable forest, water, agricultural, rural or urban landscapes should become actively involved in the development of collaborative learning and knowledge production for sustainable landscapes together with public and civil sector actors at multiple levels (Kläy et al., 2015;Angelstam et al., 2017b). A landscape strategy has three fundamental components (Primdahl et al., 2017): (1) visions and goals; (2) a spatial extent and long-term plan for the direction of development, and (3) a number of specific short-term projects. To support this process requires development of inter-sectoral spatial planning and zoning to accommodate different values, ideally with a catchment-based landscape perspective, and collaborative and participatory approaches to planning and governance.
Angelstam, et al. Forest Policy and Economics 101 (2019) 96-110

Table 2. Landscapes provide a wide range of consumptive and non-consumptive benefits to people, firms and society. Landscapes' renewable natural resources and values therefore continue to be at the centre of many policies about forests, water, agriculture and rural development. This table presents four general criteria as a base for learning through evaluation that combines landscape as space, infrastructure and Lee's (1993) idea of compass and gyroscope. We compare these four criteria with three operational landscape approach concepts and two general frameworks for landscape approach. [Table body not recoverable; column headers: Criterion (inspired by Lee, 1993); Model Forest (IMFN, 2008); Biosphere Reserve (UNESCO, 2008); LTSER platform (Grove et al., 2013; Mirtl et al., 2013; Angelstam et al., 2018a); General framework; General framework (Sayer et al., 2013). First criterion: A. Landscape as space (area, region, catchment).]

To be successful, a landscape strategy process depends on a high degree of willingness to develop consensus among stakeholders in a landscape; success is therefore not likely in areas with unresolved conflicts. The role of the local stakeholder community as a mediator between the public (international, state and local administrative units) and the individual domains of land owners has, however, changed dramatically during the last 200 years. Rural landscapes are subject to a complex arrangement of issues. The private sector must cope with dynamic markets and technologies, as well as a complicated suite of public regulations. The public sector is faced with an increasing number of contradictory policies and planning systems among different sectors and levels of governance. The experience of Model Forests in Russia demonstrates that a state transitioning from planned to market economy, and frequently changing its legislation, constrains managers from long-term planning, especially in disadvantaged remote regions. Forest companies prefer to harvest as much timber as possible from easily accessible areas (Naumov et al., 2017). Proposals for catchment-scale or landscape-scale planning of wood harvest, which also address runoff regulation and biodiversity conservation (e.g., Khoroshev and Koshcheeva, 2009), face bureaucratic obstacles and lack of appropriate regulations. Hence, Model Forest initiatives need either to rely on well-developed legislation or to elaborate scientifically-based proposals for improving legislation and a system of state incentives for environmentally-friendly technologies of forest use. From the point of view of the local or regional social-ecological system, different sectors' planning systems form silos that hamper collaboration at a larger spatial scale than the one dealt with by the private sector forest manager (e.g., Blicharska et al., 2011). Finally, the resulting sectoral professionalization is also a challenge for civil sector organizations of various kinds (Primdahl et al., 2018). The comparative approach that we advocate is captured by the terms integrative and transdisciplinary research, or better, knowledge production and learning (e.g., Tress et al., 2006; Axelsson, 2010; Van Paassen et al., 2011; Kläy et al., 2015). To achieve this vision there is a need to review policies and empirical knowledge about not only forest management, regulation vs.
market economy, but also rural development and biodiversity conservation in terms of composition, structure and function of both terrestrial and aquatic ecosystems, as well as approaches to stakeholder participation for adaptive landscape governance (Pinto Correia et al., 2018). The latter has to be adapted to regional context in terms of biophysical conditions, forest history and governance arrangements (e.g., Angelstam et al., 1995, 2005, 2009; Angelstam and Lazdinis, 2000; Knize and Romanyuk, 2005). Individual Russian Model Forest initiatives made repeated informal use of this comparative approach by making study tours to learn about the Nordic intensive model of forestry in Sweden, about the post-Soviet transition in Latvia, and to Canadian Model Forest initiatives (Ladonina and Pedroli, 2009; Bonnell, 2012). This provided insights into the future development of the transition trajectory from planned to market economy (Anon., 1998; Asunta, 2000). However, multi-level learning processes take a very long time to develop (Giessen, 2010). Even with committed stakeholders it may take a decade to develop the trust and collaboration that enables a partnership in a landscape, and several partnerships internationally, to collaborate towards learning for sustainability (e.g., Axelsson et al., 2013). This is why it is crucial to get started bottom-up to establish "a strategically managed niche within which scholars and practitioners from many different disciplines can engage in a long-term common learning process" (Kläy et al., 2015).
Towards a general framework for learning through evaluation
Evaluation as a professional activity plays an important role in improving understanding of "what really works". The concept of learning through evaluation captures this challenge (e.g., Lähteenmäki-Smith, 2007; Brulin and Svensson, 2012). Comparative studies of initiatives applying different landscape approach concepts can enhance learning about how to implement policy aiming at sustainable landscapes in different contexts, such as environmental history (e.g., Blicharska et al., 2012; Naumov et al., 2016), and legacies of governance, planning and management of natural resources, ecosystem services and landscape values (see Angelstam and Elbakidze, 2017; Angelstam et al., 2018b).
The global diversity of landscapes as social-ecological systems, and of their contexts, is a valuable asset for learning about how to operationalise the idea of Model Forest and other landscape approach concepts as partnerships towards sustainable landscapes (Angelstam et al., 2017b). However, this requires a unified set of clear criteria for what a landscape approach is. The terms "landscape" and "landscape approach" are increasingly applied to address how multiple objectives related to both environmental and social goals can be satisfied on the ground (Freeman et al., 2015). Noting the different and ambiguous uses of the terms, several studies have made valuable efforts towards defining commonalities among different concepts advocating an integrated landscape approach (Sayer et al., 2013; Freeman et al., 2015).
However, empirical studies of what takes place on the ground are also needed, such as this study of all the Model Forest initiatives in Russia, and a previous study of the 19 Polish Promotional Forest Complex initiatives (Blicharska et al., 2012). Another example is a recent assessment of the on-the-ground performance of another landscape approach concept, the Long-Term Socio-Ecological Research (LTSER) platform, with 67 initiatives in Europe. Angelstam et al. (2018b) developed a normative model for that landscape approach concept by integrating Grove et al.'s (2013) architectural metaphor "siting-construction-maintenance" and Mirtl et al.'s (2013) triangle of region and actors (i.e. landscape as a coupled social-ecological system), research, and infrastructure and co-ordination, together with the need for networking among LTSER platforms that represent social-ecological gradients in Europe. This approach resulted in four criteria and the generation of 16 indicators, for which verifier variable data were collected using both quantitative and qualitative methods.
Such empirical studies show that the partners in a place-based landscape approach initiative have the potential to support the transition from research and development projects to long-term learning processes (Giessen, 2010;Kläy et al., 2015). Sharing of both positive and negative experiences among both landscape approach concepts, and initiatives applying them on the ground, is thus a powerful tool towards social and collaborative learning (e.g., Angelstam et al., 2017b). However, evaluation should not just be a formal requirement to fulfil protocols. It should also be part of a reflexive and interpretative process to capture transferable practice-based knowledge to support participation and sustainability in local and regional socio-ecological systems.
This approach to learning through evaluation should ideally sample multiple place-based initiatives applying different landscape approach concepts across gradients that represent variation in landscape history and societal steering, such as the entire European continent including NW Russia (Angelstam et al., 2018b). However, this calls for a focused and pedagogic narrative to introduce a comprehensive analytic framework that is neutral to existing landscape approach concepts, and which can be applied to any concept or initiative. We suggest four criteria, viz. (1) a concrete landscape representing both space and place, which is supported by (2) an appropriate administrative infrastructure, and combined with the use of Lee's (1993) idea of (3) compass (sustainability as consequences), and (4) gyroscope (sustainable development as a societal process). In Table 2 we compare these four general criteria with those of Model Forest, Biosphere Reserve, and Long-Term Socio-Ecological Research Platform, as well as with two studies proposing general attributes or principles for an integrated landscape approach. The high-level praise of the landscape approach as a tool (e.g., World Forestry Congress, 2009; Sayer et al., 2013; Freeman et al., 2015) needs to be matched by comparative studies of what place-based initiatives applying any integrated landscape approach actually deliver in social-ecological systems. Bridging barriers in terms of competition between organizations and concepts that focus only on their own version of what landscape approach means is also needed. We therefore encourage wide use of systematic learning through evaluation of place-based applications of different landscape approach concepts.
Conclusions
In Russia the term model forest has two meanings: (1) the Russian concept of "best practices demonstration forests", aimed at demonstrating operational forest management practices focusing on wood production adapted to different regions; and (2) the international Model Forest concept promoting partnerships towards sustainability in landscapes. The Russian experience from place-based initiatives applying the international Model Forest concept is a valuable resource for the production of knowledge and social learning needed to develop sustainable circumboreal forest landscapes. We identified two barriers to sustaining the international Model Forest concept in Russia. The first is the legacy of top-down decision-making, unsuccessful transition to democracy and limited bottom-up processes. The second is the narrow focus on intensification of wood harvest and silviculture. The period of living Model Forests (2004-2011) can be viewed as a window of opportunity to implement the idea of partnerships for sustainability in a landscape. This allowed the Komi Model Forest's transition from a long-term project to a successful consulting company able to facilitate collaborative processes, guided for a decade by a mentoring donor. Sharing experiences internationally among countries and regions with different forest histories and governance arrangements is crucial to encourage learning by evaluation. We encourage the use of Lee's (1993) idea of compass and gyroscope in concrete landscapes, with a reasonably solid infrastructure in terms of committed partners that can facilitate the transition from research and development projects to long-term learning processes.
"year": 2019,
"sha1": "a78a7d4035250e1414c667ce07160cee0d8e1d80",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.forpol.2019.01.005",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "05f76b9d1c53305691f4b64fef1092d7e5781bae",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
} |
18503784 | pes2o/s2orc | v3-fos-license | Gastrointestinal helminth infection in pregnancy: disease incidence and hematological alterations.
BACKGROUND
The incidence and hematological effects of helminth infection during pregnancy were investigated among pregnant women in Isiala Mbano, Southeast Nigeria.
METHODS
A total of 282 pregnant women were enrolled in the study between October 2011 and September 2012. Stool samples were examined for intestinal helminths using the formalin-ether sedimentation technique. Hemoglobin (Hb) and Packed Cell Volume (PCV) levels were evaluated in venous blood samples using Sahli's and microhaematocrit methods, respectively.
RESULTS
Forty-six (16.3%) subjects were infected with at least one helminth parasite: 24 (8.5%) with hookworm, 14 (5.0%) with A. lumbricoides and 2 (0.7%) with Trichuris trichiura. Intestinal helminthiasis in pregnant women was significantly associated with age (P<0.05). The prevalence of intestinal helminthiasis by parity also differed significantly (P<0.05), with primigravidae having the highest infection rate (27.5%). Hematological assessment showed that the prevalence of anemia among the women was 58.9% (mean±SD = 9.3±1.0 g/dl). The difference in hemoglobin levels between age groups was statistically significant (P<0.05). Regarding the contribution of gastrointestinal helminths to anemia, infected pregnant women had a lower mean hemoglobin (8.60±0.22 g/dl) than the uninfected (9.72±0.07 g/dl). A significant difference (t-value = 5.660, P<0.05) was observed between the Hb of infected and uninfected pregnant women. In addition, infected pregnant women had a mean PCV of 26.09±0.65%, while the uninfected had 34.54±2.96%. The mean PCV of infected pregnant women was significantly different (t-value = 0.013, P<0.05) from that of the uninfected.
CONCLUSION
Anthelminthic therapy after the first trimester should be part of the antenatal programme. Intestinal helminth infection showed a significant negative correlation with Hb and PCV and contributed moderately to anemia.
Introduction
"Intestinal helminths are among the most common and widespread of human infections, contributing to poor nutritional status, anemia and impaired growth" (1). Intestinal helminthiases are also known to aggravate pre-existing anemia by decreasing appetite and thus food and iron intake (2,3). Worldwide, anemia is an important reproductive health problem because of its association with adverse pregnancy outcome such as increased rates of maternal and perinatal mortality, premature delivery, low birth weight, etc (4). Women in developing countries spend half of their reproductive lives pregnant and lactating and a high proportion of women in developing countries become anemic during this period. Women of reproductive age who are iron deficient but not anemic may become anemic during pregnancy as a consequence of increased iron requirements and expanded plasma volume. Other causes of anemia include parasitic infestations such as malaria and intestinal worms. "Epidemiological surveys have revealed that poor sanitation and inappropriate environmental conditions coupled with indiscriminate defaecation, geophagy and contamination of water bodies are the most important predisposing factors to intestinal worm infection" (5). Practices such as hand washing, disposal of refuse, personal hygiene, wearing of shoes and others, when not done properly may contribute to the infection or picking of these worms from the environments (6). This research investigates the prevalence of helminth infection and its hematological alterations during pregnancy Findings of this study will serve as a tool in evidence based health education on the need to intensify efforts at preventing helminthiases and its attendant risk of anemia during pregnancy.
Study Population
Two hundred and eighty-two pregnant women between the ages of 18 and 45 years, in their various trimesters and of various parities (0-10), were enlisted for the study. The women were enrolled at various antenatal clinics (ANC). The Mbano Joint Hospital laboratory was used as the base analytical centre.
Determination of Helminth Infection
Fresh stool samples for helminth screening were collected from each of the 282 subjects in dry, clean, leak-proof, sterilized sample containers. The samples were examined for consistency and for the presence of cysts, proglottids and adult worms. Saturated sodium chloride flotation and formol-ether concentration techniques were used for fecal analysis. The total number of eggs was counted under X40 magnification of a compound microscope. Stool samples were processed within 8 hours of collection and examined microscopically within one hour of preparation to avoid over-clearance of hookworm ova. Based on the thresholds recommended by the World Health Organization (WHO), helminth intensities were classified as light, moderate or severe (7).
Determination of Hemoglobin Concentration
Using a sterile syringe, 3 ml of venous blood was collected from each subject, and a portion was transferred into a capillary tube. The specimens were centrifuged in a microhematocrit centrifuge at 3000 rpm for 5 minutes. The PCV of each specimen was determined using a Hawksley microhematocrit reader and classified as follows: mild (PCV 27-29%), moderate (PCV 19-26%) and severe (PCV below 19%). The World Health Organization (8) benchmark for anemia, defined as Hb < 11 g/dl, was differentiated as Hb < 4 g/dl 'very severe anemia', Hb < 8 g/dl 'severe anemia', Hb < 9 g/dl 'moderate anemia' and Hb < 11 g/dl 'mild anemia'.
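The hemoglobin and PCV bands above translate directly into a small classification routine. The sketch below is our own rendering (not the study's software), encoding the cut-offs exactly as stated; note that the study's PCV bands leave a small gap between 26% and 27%, which the code folds into the 'mild' band:

```python
def classify_anemia_hb(hb_g_dl: float) -> str:
    """Classify anemia severity from hemoglobin (g/dl), using the WHO
    benchmark (anemia = Hb < 11 g/dl) with the bands stated in the text.
    Bands are nested, so the most severe check must come first."""
    if hb_g_dl < 4:
        return "very severe anemia"
    if hb_g_dl < 8:
        return "severe anemia"
    if hb_g_dl < 9:
        return "moderate anemia"
    if hb_g_dl < 11:
        return "mild anemia"
    return "not anemic"


def classify_anemia_pcv(pcv_percent: float) -> str:
    """Classify anemia from packed cell volume (%) using the study's
    bands: severe < 19%, moderate 19-26%, mild 27-29%."""
    if pcv_percent < 19:
        return "severe"
    if pcv_percent <= 26:
        return "moderate"
    if pcv_percent <= 29:
        return "mild"
    return "not anemic"


print(classify_anemia_hb(8.6), classify_anemia_pcv(26.09))  # → moderate anemia mild
```

Applied to the study's reported means, an Hb of 8.60 g/dl falls in the 'moderate anemia' band, consistent with the classification discussed in the results.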
Data Analysis
Data entry and validation were performed in Excel, and statistical analysis was done using the Statistical Package for the Social Sciences (SPSS) version 17.0. Values were considered statistically significant when p-values were less than 0.05 (p<0.05). Pearson chi-square tests, t-tests and correlations were used to determine the association between hemoglobin concentrations and helminth infection as indicators of anemia.
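As an illustration of the Pearson chi-square test used here, the following sketch computes the statistic by hand for a small infection-by-age contingency table. The counts are invented for illustration only (they are not the study's data), and the critical value shown is the standard 5% cut-off for 2 degrees of freedom:

```python
def chi_square_statistic(observed):
    """Pearson chi-square statistic for an r x c contingency table,
    given as a list of rows of counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    return stat


# Hypothetical counts (invented for illustration, not the study's data):
# infected vs uninfected women across three age bands.
table = [[10, 20, 16],
         [27, 98, 111]]
stat = chi_square_statistic(table)
df = (len(table) - 1) * (len(table[0]) - 1)  # (r-1)(c-1) = 2 here
CRITICAL_5PCT_DF2 = 5.991  # chi-square critical value, df=2, alpha=0.05
print(f"chi2 = {stat:.3f}, df = {df}, significant: {stat > CRITICAL_5PCT_DF2}")
```

SPSS reports the same statistic together with an exact p-value; comparing against a tabulated critical value, as above, gives the equivalent accept/reject decision at the 5% level.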
Permission and Ethical Approval
At the onset of the study, the community and household heads were well briefed on the objectives of the study. Thereafter, they were given informed consent forms to sign for their communities and households after their contents were translated to them in local languages. The study protocol was approved by the State Health authorities.
Helminth species and levels of infections in relation to age, trimester and parity
The gastrointestinal helminth parasites observed in this study were hookworm (8.5%), Ascaris lumbricoides (5.0%) and Trichuris trichiura (0.7%), while mixed infections accounted for 2.1% (Table 1). Of the 282 pregnant women examined, 46 (16.3%) were infected with at least one parasite species. Age-specific prevalence showed that subjects in the 18-20 years age group had the highest rate of infection (27.0%) while those aged 41-45 years had the least (0%). The difference in infection between age groups was statistically significant (P<0.05, χ² = 28.759, df = 12). By trimester, pregnant women in their first trimester had the highest infection rate (20.9%) while those in their third trimester had the least (12.9%); the differences were, however, not statistically significant (P>0.05; χ² = 6.895, df = 8). Table 1 also shows the prevalence of infection by parity. Primigravidae had the highest prevalence (27.5%) while the gravida 7 group had the least (7.7%). Differences in the prevalence of helminth infections between parity groups were statistically significant (P<0.05; χ² = 32.437, df = 12).
Intensity of Infection
The intensity of infection among pregnant women (
Effect of Gastrointestinal Helminths on Anemia in Pregnancy
The contributory effect of gastrointestinal helminths on anemia is shown in Table 4. Pregnant women infected with one helminth or another had a lower mean hemoglobin (Hb) of 8.60±0.22 g/dl compared with 9.72±0.07 g/dl in the uninfected. A significant difference (t-value = 5.660, P<0.05) was observed between the Hb of infected and uninfected pregnant women. In addition, infected pregnant women had a mean PCV of 26.09±0.65%, while the uninfected had 34.54±2.96%. The mean PCV of infected pregnant women was also significantly different (t-value = 0.013, P<0.05) from that of uninfected pregnant women.
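The comparison of group means reported here is a two-sample t-test. A minimal sketch of Welch's t statistic computed from summary statistics is shown below; the standard deviations and group sizes are assumptions for illustration (the paper reports means with what appear to be standard errors, not SDs):

```python
import math


def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t statistic from summary statistics
    (unequal variances, unequal group sizes)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se


# Mean Hb of uninfected vs infected women (g/dl) as reported in the text;
# the SDs (1.0 and 1.1) and group sizes (236 and 46) are assumed values.
t = welch_t(9.72, 1.0, 236, 8.60, 1.1, 46)
print(f"t = {t:.2f}")  # a large positive t favours the uninfected group
```

With these assumed SDs the statistic comes out in the same general range as the study's reported t-value, which is what makes the reported difference significant at the 5% level.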
Correlation between helminth infections and indicators of anemia in pregnant women
The correlations between Hb, PCV and helminth infections are shown in Table 5. Hookworm infection had a moderate, highly significant negative correlation with Hb (r = -0.389, P<0.01) and PCV (r = -0.277, P<0.01). Mixed infections (hookworm and Ascaris lumbricoides) had a mild, highly significant negative correlation with Hb (r = -0.179, P<0.01) and PCV (r = -0.192, P<0.01).
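The coefficients in Table 5 are Pearson correlations; with a binary infection indicator, Pearson's r is equivalent to the point-biserial correlation. A self-contained sketch (with invented data, not the study's) is:

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)


# Infection coded 0/1 against hemoglobin (g/dl); values are invented for
# illustration. With a binary variable, Pearson r is the point-biserial r.
infected = [1, 1, 1, 0, 0, 0, 0, 0]
hb = [8.2, 8.9, 8.6, 9.8, 9.6, 10.1, 9.4, 9.9]
print(f"r = {pearson_r(infected, hb):.3f}")  # negative: infection ~ lower Hb
```

A negative r, as in the study's Table 5, indicates that infected status is associated with lower hemoglobin and PCV values.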
Discussion
The prevalence of intestinal helminth infections in the study population (16.3%) is epidemiologically significant, considering that this was an epidemiological survey involving asymptomatic subjects. It has been observed that any helminth ova or larvae present would be at very low, possibly undetectable, levels (9). The higher prevalence of hookworm infection compared with A. lumbricoides and T. trichiura infections may be attributed to the cultural practices of the subjects, especially agriculture, and to a high level of unhygienic practices. This is consistent with the report of a previous study (10). The prevalence of parasitic infections among pregnant women differed significantly (P<0.05) between age groups, indicating age dependence. Helminth infections were also found to decrease with trimester: pregnant women in their first trimester were more often infected than those in their second and third trimesters. This can be attributed to the fact that treatment of helminthiases during antenatal visits is done after the first trimester; that is, pregnant women are given anthelminthic drugs after their first trimester (11). When gestational age was related to anemia, women in their second trimester were more anemic than their counterparts in the first and third trimesters. The reason for this outcome is not apparent. However, anemia in many areas of Africa has been described as usually most severe in the second trimester of gestation, especially following a period of acute infection, e.g. malaria, in the first trimester (12,13). This study established an association between the intensity of helminth infections and lower hemoglobin (Hb): pregnant women with light infections had low hemoglobin levels, but women with heavy infections had even lower levels.
The pathogenicity of helminth infection shows that the disease manifests in three main phases, with the intestinal phase representing the most important period. A moderate hookworm infection, according to studies, will gradually produce anemia as the body's reserves of iron are used up, with the severity depending on the worm load and the dietary intake of iron (12). The burden of disease imposed on helminth-infected girls and women of childbearing age, especially when pregnant, may well define the single most important contribution of intestinal parasitic infections to the calculation of their global disease burden. This study reveals a significant difference (P<0.05) in the mean Hb and PCV of infected and uninfected pregnant women. Pregnant women infected with at least one helminth parasite presented not just a higher frequency of anemia but also significantly lower levels of hemoglobin and PCV. Ascaris lumbricoides and T. trichiura infections also showed a negative correlation with Hb and PCV among infected pregnant women, but this was not significant (P>0.05). Thus, helminth infections exacerbated anemia in this setting. The WHO has suggested that anemia is of "moderate" public health importance where its prevalence is between 20% and 39.9%, and "severe" if it occurs in 40% or more of the population. Given these results, the importance and potential impacts of intestinal helminthiases during pregnancy, such as anemia, are quite obvious. This indicates the need for periodic stool examinations during pregnancy as part of routine laboratory testing in the prenatal control of helminthiases. A single course of anthelminthic therapy, in addition to iron-folate supplementation, would significantly increase hemoglobin concentrations and improve iron status in pregnant women.
As has been stated in other studies, it is necessary to modify some preventive measures of information and education, and to give specific treatment before pregnancy, in order to improve pregnant women's health indicators. Also, anthelmintic therapy, which is inexpensive and safe during pregnancy after the first trimester, should be part of the antenatal programme, just as malaria diagnosis and treatment already is (14).
Conclusion
Pregnancy is a risk factor for intestinal helminth infection. This study established an association between the intensity of helminth infections and lower hemoglobin (Hb). There is a need for periodic stool examinations during pregnancy as part of routine laboratory testing in the prenatal control of intestinal helminth infection.
Ethical considerations
Ethical issues (including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors.
"year": 2013,
"sha1": "4d3369099f4a58b94541999881717b3d85b10b7a",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4d3369099f4a58b94541999881717b3d85b10b7a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
157501031 | pes2o/s2orc | v3-fos-license | Student food insecurity at the University of Manitoba
While rates of food insecurity among various sectors of the Canadian population are well documented, food insecurity among post-secondary students, a particularly vulnerable population, has only emerged in recent years as an area of research. Based on a survey of 548 students in the 2015/16 school year, this exploratory study examines the extent of food insecurity among undergraduate and graduate students at the University of Manitoba. Our study revealed that 35.3 percent of survey respondents faced food insecurity according to a 6-item survey: 23.5 percent of respondents experienced moderate food insecurity, while 11.8 percent were severely food insecure. Using chi-square tests and regression analysis, we compare these rates with various demographic indicators to assess which students appear to be at greater risk of food insecurity, the factors contributing to food insecurity, and its effect on their student experience, their health, and their lives in general. In contemplating funding for post-secondary institutions and increases in tuition fees, provincial governments need to consider how these decisions will affect student food insecurity.
Introduction
"Life as a student should not be this hard, education should not be this hard to obtain."1 According to the UN Food and Agriculture Organization, food security exists "when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life" (FAO, 2006). Alternatively, food insecurity at the household level is experienced when people are economically unable to purchase the sufficient quantities of food or balanced meals that they need (Davis & Tarasuk, 1994; Tarasuk, Mitchell, and Dachner, 2014). The nature and prevalence of food insecurity is a growing concern in Canada and other affluent nations. The 2012 UN Special Rapporteur on the Right to Food Mission to Canada helped shed light on various dimensions of the growing problem of food insecurity in the country (De Schutter, 2012). While evidence has suggested high and increasing rates of food insecurity among certain populations of university students (such as campus food bank users or students receiving financial aid) (Abbott, Abbott, Aird, Weyman, Lethbridge, & Lei, 2014; Farahbakhsh, Ball, Farmer, Maximova, Hanbazaza, & Willows, 2015; Meldrum and Willows, 2006; Nugent, 2011; Willows & Au, 2006), quantitative investigation of food insecurity among post-secondary students in Canada has only recently emerged. Numerous factors contribute to student food insecurity, but most notable are the increasing financial burdens faced by post-secondary students (Farahbakhsh et al., 2015; Nugent, 2011; Cummings, 2015; Silverthorn, 2016).
This exploratory article contributes to the emerging research on food insecurity at Canadian post-secondary institutions by examining the extent of, and factors related to, food insecurity affecting a population of students at the University of Manitoba. To set the context, we begin by presenting some general data on, and contributing factors of, food insecurity within the general population in Canada and Manitoba. This is followed by a brief exploration of studies that have examined food insecurity at Canadian campuses.
Food insecurity in Canada and Manitoba
According to national surveys, rates of food insecurity in Canada are rising. As of 2014, 12 percent of Canadians were affected, an increase from 7.7 percent in 2007/08 (Tarasuk, Mitchell, & Dachner, 2014). Within the young adult population, those 20-34 years old, the rate of food insecurity was 11.6 percent as of 2012/13 (Statistics Canada, 2013).
In Canada, low income is the most reliable predictor of food insecurity (Tarasuk, Mitchell & Dachner, 2014). Over one million Canadians receive social assistance, not including the 780,000 individuals on disability support. Of those Canadians whose primary source of income is social assistance, 60.9 percent are food insecure; for those receiving employment insurance or workers' compensation, the rate is 35.6 percent (Tarasuk, Mitchell & Dachner, 2014). Rates of food insecurity and food bank use are disproportionately high among certain populations, including single-parent families headed by women and households with children under the age of 18 (Tarasuk, Mitchell, & Dachner, 2014). Indigenous Canadians are also over-represented, with a rate of food insecurity among non-reserve Aboriginal populations of 25.7 percent in 2014 (Tarasuk, Mitchell, & Dachner, 2014). This statistic is likely significantly lower than rates of food insecurity on First Nations reserves and in Northern Indigenous communities, which are monitored through other measures of Indigenous food insecurity in Canada (Huet, Rosol, & Egeland, 2012; FNIGC, 2012; Egeland, Pacey, Cao, & Sobol, 2010).
In Manitoba, current statistics regarding food insecurity are lacking due to the province's omission of measures of food security from the 2014 Canadian Community Health Survey. However, in 2012 household food insecurity affected 12.1 percent of Manitobans, up from 10 percent in 2010 but down from 12.4 percent in 2007 (Tarasuk, Mitchell, & Dachner, 2014). Much like the rest of the country, low income has been identified as the primary reason for food insecurity in Manitoba (Wiebe & Distasio, 2016). In 2015, Manitoba had the highest provincial rate of food bank use in the country, distributing emergency relief to 63,791 individuals, a 58 percent increase compared to 2008/09 (Pegg, 2008). Almost 5 percent (4.93 percent) of Manitobans resorted to food banks, compared with the 2.83 percent national average (McCracken, 2016). It is important to note, however, that this under-represents food insecurity, as studies show that only one-third of food insecure Canadians seek food bank services (Farahbakhsh, 2015).
Food insecurity at Canadian universities
The prevalence and severity of food insecurity among university students across Canada is beginning to receive attention. Recently, researchers initiated a number of studies on several campuses to assess the prevalence of food insecurity among various university student populations. Selected questions from the Household Food Security Survey Module (HFSSM) of the Canadian Community Health Survey (CCHS) were the primary tool used in these studies for assessing rates of food insecurity. The results of these studies, aggregated by a national organization called Meal Exchange,2 show that the average rate of food insecurity among students at five major Canadian universities is 39.2 percent. The results of Meal Exchange's report are summarized below (Table 1). Preliminary results from unpublished studies at two additional Canadian universities show that 38.1 percent of students at Acadia University and 28.6 percent of students at the University of Saskatchewan were food insecure (Frank, Engler-Stringer, Power, & Pulsifer, 2015). The methodologies used in these two studies are closely comparable to the Canadian Community Health Survey, and therefore less comparable to the surveys tailored specifically to post-secondary students employed at the University of Manitoba and at the five institutions listed above, where some CCHS questions were used, but in slightly modified form.
Prior to the studies undertaken by Frank et al. (2015) and Silverthorn (2016), food security among Canadian post-secondary students was assessed mainly through food bank use. Since the 1990s, the number of food banks on university campuses has continued to rise; for example, as of 2011 there were over 70 food banks located on university campuses, up from 56 in 2006 (Gordon, 2011). In recent years, the University of Alberta, the University of Calgary, the University of Ottawa and Ryerson University have all cited increases in campus food bank use (Nugent, 2011; CBC News, 2012).3 The effects of food insecurity are far reaching. Food insecurity among post-secondary students in Canada has been linked to higher levels of stress and increased rates of depression and mental disorders (Stuff, Casey, Szeto, Gossett, Robbins, Simpson, Connell, & Bogle, 2004). Among students surveyed at five Canadian universities in 2016, physical and mental health were identified as the primary factors negatively affected by students' experiences of food insecurity (Silverthorn, 2016). Additionally, students identified (in descending order) their social life, grades, class participation, and extra-curricular activities as the next most negatively affected aspects of their personal and academic lives (Silverthorn, 2016). Among elementary students, food insecurity has been associated with lower math and reading scores, in addition to decreased memory (Cady, 2014). Among adult populations, food insecurity can have negative effects on individuals' sense of community and belonging (Willows, Veugelers, Raine, & Kuhle, 2011).
In addition to scoring lower on measures of mental health, food insecure individuals in Canada specifically have been cited as exhibiting nutritional inadequacies, including deficiencies in protein, vitamins A, B-6, and B-12, thiamin, riboflavin, folate, phosphorous, and zinc (Kirkpatrick & Tarasuk, 2008). Likely as a result of these deficiencies, and additional negative health repercussions, food insecurity in Canada has been associated with higher use of public health care services, including inpatient hospital care, emergency medical services, and prescription drug use. Rates of public health care usage among food insecure households are double those of food secure households (32 percent vs 16 percent), indicating a disproportionate burden on the Canadian health care system (Tarasuk, Cheng, de Oliveira, Dachner, Gundersen, & Kurdyak, 2015).
Participants
The University of Manitoba is the largest post-secondary institution in the province of Manitoba, and is located in the capital city, Winnipeg. In 2016, when the study was conducted, total enrollment was 28,804, with 3,654 graduate-level students (13 percent). Of the total student population, 54 percent were female and 46 percent were male (University of Manitoba, 2016). To engage student participants, the survey used census-style sampling and was sent out to all undergraduate and graduate students through e-mails from the University of Manitoba Student Union, various individual student groups including the Graduate Student Association, and sustainable food initiatives on campus.
Data collection
The online survey was conducted in January and February 2016, and administered via SurveyMonkey. The survey was open to all students over the age of 18. The study was approved by the University of Manitoba Research Ethics Board, and participants provided consent through the survey interface. The survey included 33 questions, and rates of student food insecurity were assessed using six questions designed for Canadian post-secondary student populations (Silverthorn, 2016). These questions are listed in Table 2. The survey also included two questions that directly asked students to rate their own food insecurity.
Other questions regarding specific demographic populations were modeled on unpublished surveys administered at Acadia University and the University of Saskatchewan. These demographic questions asked students to identify their student status, sex, age, living arrangement, primary source of income, Aboriginal and immigration status, marital status, and whether or not they paid rent or had children or dependents. Two questions about self-identified mental and physical health were asked, based on the Canadian Community Health Survey. Questions pertaining to students' experiences with, and coping mechanisms for, food insecurity were adapted from questionnaires administered at several other Canadian post-secondary institutions (Silverthorn, 2016). These questions are identified in Table 3. The survey included two questions asking students to identify what factors contribute most to their food insecurity, and the primary areas of their lives affected by it. These questions, and the selection of responses provided to survey participants, are identified in Table 4. Finally, the survey included an open-ended question that allowed students to provide additional comments pertaining to their experiences with food insecurity. The response rate, based on the University of Manitoba's student population of 28,809, was 1.9 percent, or 548 students. Rates of student food insecurity were assessed using the six questions listed in Table 2.
Table 2: Questions used to assess moderate and severe food insecurity
1. I/we worried whether my/our food would run out before I got money to buy more.
2. The food that I bought just didn't last, and I didn't have money to buy more.
3. I/we couldn't afford to eat meals with a variety of foods, or a number of different kinds of foods, according to what I/we prefer.
4. I had to sacrifice buying healthy (nutritious or diversified) foods in order to afford enough food.
5. I/we skipped meals because there wasn't enough money to buy food.
6. I/we did not eat for a whole day because there wasn't enough money for food.
Data analysis
Based on the methodology used in similar studies of student food insecurity (Silverthorn, 2016), we categorized respondents into three categories of food insecurity depending on their answers to the six questions in Table 2. Students who responded positively ("often" or "sometimes true") to 0-1 of the six questions were assessed to be food secure, students who responded positively to 2-4 of the questions were assessed as moderately food insecure, and students who responded positively to 5-6 questions were categorized as severely food insecure. Descriptive and univariate analyses were applied to the survey data. Chi-square tests of statistical significance were used to assess which demographic groups were at greater risk of food insecurity, and univariate regressions were employed to assess the odds ratios of these demographic populations experiencing food insecurity. The statistical tests were conducted based on moderate, severe, and overall rates of food insecurity. The total number of food insecure students within each population was calculated as the total number of students assessed to be either moderately or severely food insecure. The data were analyzed using SPSS software, and the findings of these analyses, as well as a discussion of results, are summarized below.
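The categorization rule described above can be expressed as a short function. The sketch below is illustrative only (the authors' actual analysis was done in SPSS); the function name and the exact response labels are assumptions for the example.

```python
def classify_food_security(responses):
    """Classify a respondent from their answers to the six Table 2 items.

    `responses` holds six answers, each "often true", "sometimes true",
    or "never true"; "often" and "sometimes" count as positive responses.
    """
    positive = sum(r in ("often true", "sometimes true") for r in responses)
    if positive <= 1:
        return "food secure"               # 0-1 positive responses
    if positive <= 4:
        return "moderately food insecure"  # 2-4 positive responses
    return "severely food insecure"        # 5-6 positive responses

print(classify_food_security(["never true"] * 6))  # food secure
print(classify_food_security(["sometimes true"] * 3 + ["never true"] * 3))  # moderately food insecure
```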
Findings
As represented in Table 3, the majority of the 548 students who completed the survey were female, younger than 25, and studying at the undergraduate level. The majority of students were also single, and lived without any children or dependents. Among respondents, just over one-third identified as having fair to poor mental health, with 24.3 percent reporting fair to poor physical health. Within this category, 19.9 percent reported having fair physical health, and 4.4 percent assessed their physical health as poor. Mental health was reported to be fair by 23.5 percent of students, while 12.3 percent assessed their mental health as poor. The majority of students were employed, and cited employment or loans as their primary source of income. Just under half of students reported living alone or with a roommate or spouse, while just over half paid rent, mortgage, or residence fees for their accommodation.
This study found that 35.3 percent of the students surveyed were food insecure: 23.5 percent of respondents faced moderate food insecurity, while 11.8 percent experienced severe food insecurity. The proportion of students assessed to be food insecure based on this study's methodology was considerably higher than the proportion of students who self-identified as food insecure based on the FAO definition of the term provided to them (13.2 percent of survey respondents).
Among those who participated in the survey, several demographic groups were found to be more food insecure than others. Almost three-quarters of Indigenous students were assessed to be food insecure. As represented in Table 4, 42.1 percent of Indigenous students experienced moderate food insecurity, and almost a third were severely food insecure. Indigenous students were between five and ten times more likely to report food insecurity than non-Indigenous students. Almost one-third of newcomers to Canada and exchange students were food insecure, and of these, over one-quarter were severely food insecure.
Students' primary source of income was also relevant to their experiences with food insecurity. Severely food insecure students were almost five times more likely to be relying on student or bank loans, and almost 60 percent of students relying on loans were food insecure. Comparably, 34.3 percent of students who were employed, and 30 percent of students whose income came from scholarships, grants, or parents, were found to be food insecure.
Students who experienced food insecurity were significantly more likely to experience poor to fair mental and physical health than those who were food secure. Among students with self-identified fair to poor mental health, almost half faced food insecurity (48.5 percent), while among students with fair to poor physical health, 53.9 percent were food insecure.
Slightly more than one-fifth of students identified financial barriers that affected their access to food (Table 4), based on the selection of answers provided to them. High cost of food was identified by one-fifth of all respondents, and by 70% of respondents who identified as food insecure, as the primary barrier affecting their access to food. In the written responses to the survey, students identified that they experienced periods when buying enough food was sacrificed to pay for tuition or textbooks, or because there was no longer enough money. Limited time was the second most cited factor. Apart from these, financial barriers such as tuition fees, housing costs, and inadequate financial support in the form of loans or grants were also significant. These responses correlate with the univariate findings that identify higher rates of food insecurity among students who live independently, and therefore bear a larger proportion of their own food and residence costs.
Both mental and physical health were found to be the areas of life most impacted by food insecurity (Figure 2). Almost two-thirds cited grades as being negatively affected, followed closely by students' extracurricular activities and social lives. In response to increased financial burdens, students use a variety of coping mechanisms to sustain themselves.
In answering the open-ended question at the end of the survey, several students remarked on their need to work in addition to taking classes in order to meet their financial obligations. One student stated, "It's sad that I cannot feed myself after work[ing] 40 hours a week." Another remarked on the effects that additional work has on their studies, stating that "something has to take a back seat and for me that is school so I have enough money to live comfortably." Another responded, "If I have to work more, then I take less courses, it's that simple in my opinion." One student commented on the time constraints caused by their multiple commitments: "studying really limits the time I have to buy groceries and cook, as well as exercise and my social life." The same student also reflected on the difficulty this creates in finding time to prepare meals: "I enjoy cooking but most days I rely on frozen dinners and quick fixes because I don't have the time to cook things, and am very stressed." Some students engaged in unconventional coping mechanisms, such as food bank use and stealing. Questions pertaining to coping mechanisms are described in Table 3. Despite the 11.8 percent of students who were severely food insecure, in total less than 4 percent of respondents, and less than 10 percent of food insecure students, identified as having used a food bank or hunger relief program in order to have enough to eat. Additionally, less than 4 percent of students identified as having engaged in unconventional coping mechanisms, such as "dumpster diving" or stealing. One student shared that they relied on credit cards to purchase food when they lacked cash, but added, "We don't want to go into a lot of debt, but we want our child to be healthy and we hope this is only temporary."
Discussion
Food insecurity is a reality for more than one-third of students surveyed at the University of Manitoba. Almost one-quarter of them experienced moderate food insecurity, while more than one in ten were severely food insecure. In comparison with other Canadian post-secondary institutions, where 39.4 percent of students experience food insecurity (Silverthorn, 2016), rates at the University of Manitoba were slightly lower. However, while the University of Manitoba has the lowest rate of moderately food insecure students, at 11.8 percent it has the second highest rate of severely food insecure students, next to Lakehead University in Northern Ontario. Several factors could explain why the overall rate of food insecurity among students at the University of Manitoba might be comparatively lower than in other provinces such as Nova Scotia and Ontario.
Manitoba boasts relatively low costs of housing and tuition, as well as different student demographics. While transportation and food costs remain slightly above national averages, average housing costs in Manitoba are slightly lower than in provinces such as Saskatchewan and Alberta. The cost of shelter was most recently assessed at $14,481 per year in Manitoba, significantly lower than the $17,160 national average, and lower still when compared to provinces such as British Columbia, Ontario, and Alberta, which average $18,497, $19,409, and $20,676 respectively (Statistics Canada, 2016a). On a national scale, post-secondary tuition fees in Manitoba are also comparatively low. In relation to the $6,373 national yearly average paid by domestic students across the country, students in Manitoba paid an average of $4,058 in the 2016/17 academic year (Statistics Canada, 2016b). Largely due to the 2011 provincial legislation tying tuition fees to inflation, Manitoban students pay the third lowest tuition fees in the country, after Quebec and Newfoundland and Labrador (Statistics Canada, 2016b).
Financial factors
Low income and financial obligations (tuition fees and loans) contributed significantly to the food insecurity of participants, despite Manitoba's low tuition and housing costs. Many rely on student loans. Although Manitoba students work fewer hours to cover the cost of a year's tuition (366 hours of work at minimum wage, compared to the national average of 570 hours), this figure still marks a 100 percent increase compared to 1975 (Moore, 2014). Studies of Canadian university student populations, and specifically campus food bank users, cite lack of financial resources, food costs, transportation costs, and high cost of living as primary causes of food bank use (Nugent, 2011; Stewin, 2013; Frank et al., 2015; Meldrum & Willows, 2006; House, Su, & Levy-Milne, 2016; Silverthorn, 2016). According to Meal Exchange's Hungry for Knowledge report (2016), students ranked food and housing costs as the largest contributors to food insecurity, followed by inadequate income supports in the form of student loans and grants, and limited facilities to prepare food (Silverthorn, 2016).
Since the 1990s, tuition has tripled in Canada, and by 2011 total payments owed by students in loans to the federal government reached $15 billion (Burley & Awad, 2015). In 2014, average student debt in Canada was $28,295 (Burley & Awad, 2015). In Ontario, students under the loan program have been reported to have annual shortfalls of $1,232 for women, and $1,712 for men (Crisp, 2015). High food bank usage has also been recorded among international students across Canada, who pay significantly higher tuition fees (Farahbakhsh et al., 2015; Stewin, 2013).
Among those students who were employed, half reported that the stresses of working had negative effects on their studies. This compares with other studies that have found that the need to work in addition to attending classes negatively affects students' academic performance (Prairie Research Associates, 2011; Motte & Schwartz, 2009). Today, the effects of balancing work and school affect a larger proportion of post-secondary students: whereas one in four students worked while attending university 35 years ago, by 2010 the proportion was just under one in two (Marshall, 2010).
Living at home with parents or guardians was protective against food insecurity, likely due to having family or others in the household pay all or most food and housing costs. One student commented, "If I was not living at home and was on my own for all expenses I would think it would be almost impossible to be food secure." Winnipeg has one of the highest proportions of young adults staying at home in the country (43.3 percent), behind Toronto and Vancouver (Statistics Canada, 2015b). Given that the majority of respondents were in their twenties or younger, the University of Manitoba's rates of food security may be buffered by a tendency among students to live at home longer.
Indigenous, newcomer and exchange students
Certain groups of students appeared to be more vulnerable to food insecurity. Indigenous students were significantly more likely to experience food insecurity than non-Indigenous students. The Canadian Community Health Survey administered in 2014 assessed food insecurity among off-reserve Aboriginal populations at 25.7 percent; however, alternative measures of Indigenous food insecurity in Canada record significantly higher rates. Indigenous Canadians are more vulnerable to risk factors for food insecurity such as extreme poverty, single motherhood, living in rental accommodation, and increased rates of dependence on social services (Willows, Veugelers, & Kuhle, 2009).
Newcomer and exchange students were also significantly more likely to experience food insecurity than other non-Indigenous students. Other campus food security studies have attributed this to high international student fees and a lack of culturally appropriate food options (Stewin, 2013). International students studying at many Canadian universities are required to pay significantly higher fees than Canadian citizens and permanent residents. For example, in the 2015/16 school year at the University of Manitoba, international students paid $2,228 more annually than domestic students for one 6-credit-hour (full-year) science course, and international students in the law program paid just under 2.5 times the domestic rate (University of Manitoba, 2016). In the comment section of our survey, one student suggested that support be provided to international students by "reduc[ing] tuition fees for international students so we can live a better, healthier life. Physiological needs are important.
Perhaps the school can donate food items on a monthly basis to needy international students." One newcomer who had been in Canada for 14 months claimed they had never experienced food insecurity; however, they stated, "I can't afford to buy extra food or stuff I want to taste with the money I get from [my] part-time job…I can only afford to buy the necessary food which satisfies me."
Effects on health
Study participants who experienced food insecurity stated that deterioration of mental and physical health was among the aspects of their lives most affected. This is consistent with observations from the general population, where chronic disease and mental disorders have been associated with food insecurity (Stuff et al., 2004; Willows et al., 2011; Tarasuk et al., 2013). Regarding mental health, a 2013 study of the general Canadian population found that among severely food insecure individuals, 47.1 percent of women and 23.4 percent of men cited anxiety or mood disorders (Tarasuk et al., 2013). One student from this study wrote that "not having enough to eat has definitely affected my concentration in class." Another commented that not having enough time to provide for their food needs "is having a big impact on my health, and my mental health. I have found myself in many periods of depression throughout my time in University." Students' experiences with food insecurity can also constrain young peoples' opportunities to engage in individual or collective non-academic, social outlets. According to our survey, students' experiences with food insecurity negatively affected both grades and extracurricular activities, where the stress of succeeding in school affected students' participation in physical extra-curricular activities and their mental health. Therefore, food insecurity could impact students' overall mental and physical health by affecting their lives in a variety of ways, such as nutritional and caloric deficiencies or negative influences on grades and concentration, while at the same time restricting students' opportunities for extra-curricular activities and participation in social outlets.
Limitations of study
The first limitation of our study is the relatively small sample size. While several statistically significant observations were made based on the population of survey respondents, the findings may not reflect the realities of all students at the University of Manitoba. Secondly, due to the use of a survey tailored to post-secondary students, comparisons to rates of food insecurity among the general population, calculated through the specific questions used in the Canadian Community Health Survey, should be made with caution. Lastly, alternative factors may have affected the accuracy of the survey results. These factors could include poor food skills, and unwillingness or disinterest in eating diverse and healthy foods. The limitations identified by this exploratory study suggest that more work needs to be done in order to quantitatively and qualitatively capture the extent of food insecurity on university campuses, as well as to ensure that further research is equipped with the appropriate tools necessary to comprehensively define and investigate the relatively new concept of food insecurity.
Conclusion
This study contributes a uniquely Manitoban perspective to the emerging area of research on post-secondary student food insecurity, reinforces the need to assess the barriers faced by students, and addresses the implications for students' health and wellbeing. The respondents reported high rates of food insecurity that appear to be having a negative impact on their overall health, and in some cases, their academic performance. This exploratory study highlights the need to conduct further research into the prevalence, nature, causes, and effects of food insecurity at the University of Manitoba. It also delineates the need for a standardized assessment method for all Canadian universities to allow for ongoing surveillance and comparison.
Student food insecurity has implications for the future of the Manitoba and Canadian work force, as well as public health care expenditure. The main predetermining factor in food insecurity is inadequate finances due to housing, transport, food, and tuition costs. This disproportionately affects certain groups, such as newcomer and Indigenous students, who experience significantly higher rates of food insecurity. It is up to university administrators, student services, and provincial policy-makers to take action to ensure affected students have the financial and other resources necessary to succeed in their studies.
The federal Liberal government's focus on post-secondary student issues was reflected in their 2016 budget through increased grant funding as an alternative to loans. However, these policies currently only impact undergraduate students; policies to benefit graduate students are yet unknown (Snider, 2016). As of now, Manitoba post-secondary students pay comparatively low prices for tuition and living expenses. However, in 2015/16 tuition rates rose in Manitoba by 1.9 percent, compared to 0 percent in provinces like Newfoundland, Alberta, and New Brunswick, where fees were frozen (Statistics Canada, 2016b). Media reports indicate that the newly elected Conservative provincial government is considering removing the cap on tuition fees for post-secondary institutions (Martin, 2016). Should this occur, student food insecurity could rise, unless loans and other financial aid to students were increased and made more available.
Figure 1 :
Contributors to food insecurity
Figure 2 :
Students identifying particular effects of their food insecurity
Table 1 :
Rates of student food insecurity at 5 major Canadian universities
Table 3 :
Questions used to assess students' coping mechanisms against food insecurity
"I/we went to a food bank, hunger relief or soup kitchen service (such as Winnipeg Harvest, Siloam Mission, Lighthouse Mission, Agape Table, etc.) because I did not have the money to buy enough food. Was this often true, sometimes true, or never true in the past 12 months?"
"I thought about going to a food bank or hunger relief program but was too embarrassed to actually go. Was this often true, sometimes true, or never true in the past 12 months?"
Note: Drafts of surveys administered by researchers from these institutions were shared through private correspondence from the researchers themselves. The services named are examples of well-known hunger relief programs and food banks in Winnipeg, Manitoba.
Table 4 :
Questions assessing students' experiences with food insecurity
Table 4 :
Frequency of study participants with various levels of food insecurity, compared with demographic characteristics
Spatiotemporal denoising of low-dose cardiac CT image sequences using RecycleGAN
Electrocardiogram (ECG)-gated multi-phase computed tomography angiography (MP-CTA) is frequently used for diagnosis of coronary artery disease. Radiation dose may become a potential concern as the scan needs to cover a wide range of cardiac phases during a heart cycle. A common method to reduce radiation is to limit the full-dose acquisition to a predefined range of phases while reducing the radiation dose for the rest. Our goal in this study is to develop a spatiotemporal deep learning method to enhance the quality of low-dose CTA images at phases acquired at reduced radiation dose. Recently, we demonstrated that a deep learning method, Cycle-Consistent generative adversarial networks (CycleGAN), could effectively denoise low-dose CT images through spatial image translation without labeled image pairs in both low-dose and full-dose image domains. As CycleGAN does not utilize the temporal information in its denoising mechanism, we propose to use RecycleGAN, which could translate a series of images ordered in time from the low-dose domain to the full-dose domain through an additional recurrent network. To evaluate RecycleGAN, we use the XCAT phantom program, a highly realistic simulation tool based on real patient data, to generate MP-CTA image sequences for 18 patients (14 for training, 2 for validation and 2 for test). Our simulation results show that RecycleGAN can achieve better denoising performance than CycleGAN based on both visual inspection and quantitative metrics. We further demonstrate the superior denoising performance of RecycleGAN using clinical MP-CTA images from 50 patients.
Introduction
To avoid risks from cardiac catheterization of invasive coronary angiographies (ICAs) in low- and intermediate-risk coronary artery disease (CAD) patients, multidetector computed tomography (MDCT) has been used for CT angiography (CTA) to noninvasively assess the presence, location, severity, and characteristics of coronary atherosclerosis (Nieman et al 2001, Carrigan et al 2009, Cademartiri et al 2010). In addition, some findings from CTA may not be detectable by ICA (Motoyama et al 2009, Sun et al 2010, Miszalski-Jamka et al 2012). The main challenge in CTA is the strong demand for high temporal resolution (to mitigate cardiac motion artifacts) and high spatial resolution (for small coronary structures), which leads to high radiation dose (Yu et al 2009). Electrocardiogram- (ECG-) gated multi-phase CTA (MP-CTA), either in a retrospective helical scan mode or a prospective axial scan mode, can provide much more clinically relevant information than single-phase CTA (SP-CTA). Not only is important heart function information lost in SP-CTA, but different parts of the coronary arteries are also better seen in different phases (Desjardins and Kazerooni 2004). Thus, MP-CTA may be preferred for its much greater diagnostic value. However, even with ECG tube current modulation (TCM), the average effective dose of an MP-CTA scan could be much higher than 10 mSv (May et al 2012) (6-24 mSv at Mayo Clinic), depending on the width of the pulse window and patient size (Weustink et al 2008). Taking the 80% of patients with negative findings into account, minimizing radiation dose becomes a major and urgent need for a broader application of MP-CTA to CAD diagnosis.
Many methods have been developed to reduce radiation dose in CT acquisition, including optimization of tube current, tube potential, and use of dedicated bowtie filters. However, x-ray dose reduction in general leads to elevated noise in reconstructed images. The noise in low-dose CT (LDCT) images can be reduced either by conventional reconstruction methods (La Rivière 2005, Wang et al 2005, Tian et al 2011, Wu et al 2017, Zhao et al 2018, He et al 2019, Wu et al 2021, Zhou et al 2021), or by emerging deep-learning-based denoising methods applied directly to images after regular reconstruction, through paired-image training (Chen et al 2017, Kang et al 2017, Wolterink et al 2017) or unpaired-image training using a cycle-consistent generative adversarial network (CycleGAN) (Kang et al 2019, You et al 2020, Gu et al 2021, Li et al 2021). In Li et al (2021), several CycleGAN variants for LDCT denoising were investigated and compared with a paired deep learning method (RED-CNN) (Chen et al 2017). However, all these deep learning denoising methods treated each CT image independently and failed to account for the temporal correspondence between images, such as that of MP-CTA image sequences.
To the best of our knowledge, CycleGAN with an identity loss (Kang et al 2019) or wavelet-assisted noise disentanglement (Gu et al 2021) was the first work to use deep-learning methods to improve low-dose MP-CTA images. Although CycleGAN can achieve translation between LDCT and full-dose CT (FDCT) without the need for paired training images, the translation is established only in the spatial domain. The temporal connections among the different cardiac phases of an MP-CTA image sequence are not utilized by CycleGAN, which may lead to sub-optimal denoising performance. On the other hand, an advanced CycleGAN model with a recurrent loss and a cycle-consistency loss over the spatial and temporal domains (a 'recycle loss'), called RecycleGAN (Bansal et al 2018), was proposed for video-to-video translation in computer vision; it utilizes both spatial and temporal information to solve the translation problem for temporally related data. Nevertheless, RecycleGAN has never been applied to denoise low-dose CT image sequences, including MP-CTA. In this work, we adapt RecycleGAN to take into consideration the temporal connection between the succeeding cardiac phases of MP-CTA images. This novel deep learning denoising method not only enjoys the advantage of CycleGAN in not requiring paired training images, but also exploits both spatial and temporal correspondence to boost denoising performance for time series of MP-CTA images. As our aim in this work is to compare the denoising performance of CycleGAN and RecycleGAN for low-dose MP-CTA images, comparisons between CycleGAN and other traditional and deep learning methods for LDCT denoising can be found in previous works, such as Li et al (2021).
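The recycle loss that distinguishes RecycleGAN from CycleGAN can be sketched numerically. The NumPy toy below is not the actual RecycleGAN implementation: the two generators are stand-in affine maps and the temporal predictor `P_B` is assumed to be simple linear extrapolation, chosen so the reader can verify that the loss vanishes when the generators are exact inverses and the predictor is exact.

```python
import numpy as np

# Toy stand-ins for the learned networks (illustrative assumptions only):
G_AB = lambda x: 2.0 * x + 1.0                       # generator: domain A -> B
G_BA = lambda y: (y - 1.0) / 2.0                     # generator: domain B -> A
P_B = lambda y_prev, y_curr: 2.0 * y_curr - y_prev   # temporal predictor in B
                                                     # (linear extrapolation)

def recycle_loss(x_seq):
    """Recycle loss: map frames A -> B, predict the next frame in B,
    map back B -> A, and compare with the true next frame in A."""
    loss = 0.0
    for t in range(1, len(x_seq) - 1):
        y_prev, y_curr = G_AB(x_seq[t - 1]), G_AB(x_seq[t])
        x_next_pred = G_BA(P_B(y_prev, y_curr))
        loss += np.mean((x_seq[t + 1] - x_next_pred) ** 2)
    return loss / (len(x_seq) - 2)

# A "cardiac phase" sequence of frames evolving linearly in time:
x_seq = [np.full((4, 4), float(t)) for t in range(5)]
print(recycle_loss(x_seq))  # -> 0.0: predictor and inverse generators are exact
```

In training, the generators and the predictor are CNNs optimized jointly with the adversarial terms; the point of the toy is only to show how the temporal prediction is routed through both domains, which is the extra constraint unavailable to plain CycleGAN.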
CycleGAN
To achieve image-to-image translation, CycleGAN (Zhu et al 2017) was proposed to learn mapping functions between two different domains without the need for paired data. Formally, given a set of images from a source domain A (e.g., low-dose CT images) and a set of images from a target domain B (e.g., full-dose CT images), the goal of CycleGAN is to learn a mapping G_AB: A → B such that the output G_AB(a) is indistinguishable from the images in domain B. The architecture of CycleGAN is composed of two generators and two adversarial discriminators (figure 1). Specifically, each generator aims to translate images from one domain to the other, while each discriminator is designed to distinguish between the real images in the target domain and the translated images from the source domain.
The objective of CycleGAN contains two terms: an adversarial loss (Goodfellow et al 2014) and a cycle-consistency loss (Zhu et al 2017). Given data distributions a ~ p_data(a) and b ~ p_data(b), the adversarial loss is designed to match the distribution of the generated images G_AB(a) to that of the target domain B as follows:

L_GAN(G_AB, D_B, A, B) = E_{b~p_data(b)}[log D_B(b)] + E_{a~p_data(a)}[log(1 − D_B(G_AB(a)))],

where G_AB aims to minimize this loss against an adversary D_B that tries to maximize it, i.e., min_{G_AB} max_{D_B} L_GAN(G_AB, D_B, A, B) (Zhu et al 2017). Similarly, for the generator G_BA, the adversarial loss is L_GAN(G_BA, D_A, B, A). To further reduce the space of possible mapping functions (Goodfellow et al 2014), a cycle-consistency loss is introduced to guarantee that the output of each cycle is close to the input to that cycle, i.e., G_BA(G_AB(a)) ≈ a and G_AB(G_BA(b)) ≈ b. The cycle-consistency loss is defined as:

L_cyc(G_AB, G_BA) = E_{a~p_data(a)}[‖G_BA(G_AB(a)) − a‖_1] + E_{b~p_data(b)}[‖G_AB(G_BA(b)) − b‖_1].

This cycle-consistency loss enforces the constraint that G_AB and G_BA be inverses of each other (Kang et al 2019).
By putting all losses together, the overall objective for CycleGAN is:

L(G_AB, G_BA, D_A, D_B) = L_GAN(G_AB, D_B, A, B) + L_GAN(G_BA, D_A, B, A) + λ L_cyc(G_AB, G_BA),

where λ controls the relative importance of the adversarial losses and the cycle-consistency loss.
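As a concrete illustration of these objectives, the sketch below implements the adversarial, cycle-consistency, and overall losses in pure Python on scalar toy "images". The generator, discriminator, and batch names are illustrative assumptions, not the paper's implementation (which operates on 2D CT images with convolutional networks).

```python
import math

# Toy stand-ins: G_AB/G_BA map scalar "images" between domains; D_B returns
# the probability that its input looks like a real domain-B sample.

def adversarial_loss(d_b, g_ab, a_samples, b_samples):
    """L_GAN(G_AB, D_B) = E_b[log D_B(b)] + E_a[log(1 - D_B(G_AB(a)))]."""
    real = sum(math.log(d_b(b)) for b in b_samples) / len(b_samples)
    fake = sum(math.log(1.0 - d_b(g_ab(a))) for a in a_samples) / len(a_samples)
    return real + fake

def cycle_consistency_loss(g_ab, g_ba, a_samples, b_samples):
    """L_cyc = E_a[|G_BA(G_AB(a)) - a|] + E_b[|G_AB(G_BA(b)) - b|] (L1 norm)."""
    fwd = sum(abs(g_ba(g_ab(a)) - a) for a in a_samples) / len(a_samples)
    bwd = sum(abs(g_ab(g_ba(b)) - b) for b in b_samples) / len(b_samples)
    return fwd + bwd

def overall_objective(d_a, d_b, g_ab, g_ba, a_samples, b_samples, lam=10.0):
    """L = L_GAN(G_AB, D_B) + L_GAN(G_BA, D_A) + lam * L_cyc."""
    return (adversarial_loss(d_b, g_ab, a_samples, b_samples)
            + adversarial_loss(d_a, g_ba, b_samples, a_samples)
            + lam * cycle_consistency_loss(g_ab, g_ba, a_samples, b_samples))

# Perfectly inverse toy generators make the cycle term vanish.
g_ab = lambda a: a + 1.0   # toy "translation": shift intensities up
g_ba = lambda b: b - 1.0   # its exact inverse
a_batch, b_batch = [0.0, 1.0, 2.0], [1.0, 2.0, 3.0]
print(cycle_consistency_loss(g_ab, g_ba, a_batch, b_batch))  # 0.0
```

With a maximally confused discriminator D(x) = 0.5, each adversarial term reduces to 2·log(0.5), the equilibrium value of the original GAN objective.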
The variants of CycleGAN (Zhu et al 2017) have been applied to various domains (Li et al 2021). However, they use only the spatial information in 2D images and do not use the temporal information to optimize the image translation model (Bansal et al 2018).

The cycle-consistency loss forces the optimization to learn a solution that is closely tied to the input. This is suitable when only spatial information is available during translation; however, for time-related image sequences such as CTA images, a model trained with only the cycle consistency may be inadequate to generate perceptually unique results. The network structure of CycleGAN used in this work is based on figures 4 and 5 in (Li et al 2021).
RecycleGAN
RecycleGAN (Bansal et al 2018) was proposed to learn a mapping between two videos from different domains. It utilizes both spatial and temporal information to solve the reconstruction problem of temporally related data. RecycleGAN shares a similar model framework with CycleGAN, except that the cycle-consistency loss is replaced by a recurrent loss and a recycle loss to make use of the temporally ordered images to learn a better mapping. The workflow of RecycleGAN is shown in figure 2.
Given unpaired but ordered images a_1, a_2, …, a_t, … ∈ A (i.e., temporally ordered low-dose MP-CTA images) and b_1, b_2, …, b_s, … ∈ B (i.e., temporally ordered full-dose MP-CTA images), a recurrent temporal predictor P_A is trained to predict the future image given the past images. The recurrent loss is defined as:

L_τ(P_A) = Σ_t ‖a_{t+1} − P_A(a_{1:t})‖²,

where a_{1:t} = (a_1, …, a_t). Then the recycle loss across domains can be defined based on this temporal prediction model as follows:

L_r(G_AB, G_BA, P_B) = Σ_t ‖a_{t+1} − G_BA(P_B(G_AB(a_{1:t})))‖²,

where G_AB(a_{1:t}) = (G_AB(a_1), G_AB(a_2), …, G_AB(a_t)). In both the forward and backward cycles, the recycle loss requires a sequence of images to map back to the initial domain. The overall loss of RecycleGAN is defined by:

L = L_GAN(G_AB, D_B, A, B) + L_GAN(G_BA, D_A, B, A) + λ_rx L_r(G_AB, G_BA, P_B) + λ_ry L_r(G_BA, G_AB, P_A) + λ_τx L_τ(P_A) + λ_τy L_τ(P_B),

where the λ's control the relative importance of the losses. We show in the experiments that the proposed method provides an effective translation from low-dose MP-CTA to full-dose MP-CTA images when learning from unpaired CT image sequences. The detailed network structure of RecycleGAN (Bansal et al 2018) can be found in the appendix.
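The recurrent and recycle losses can be sketched in the same toy setting; the predictor, generators, and the linearly increasing phase sequence below are illustrative assumptions, not the trained networks of the paper.

```python
def recurrent_loss(p, frames):
    """L_tau(P) = sum_t |x_{t+1} - P(x_{1:t})| over a temporally ordered sequence."""
    return sum(abs(frames[t + 1] - p(frames[: t + 1])) for t in range(len(frames) - 1))

def recycle_loss(g_ab, g_ba, p_b, a_frames):
    """L_r = sum_t |a_{t+1} - G_BA(P_B(G_AB(a_1), ..., G_AB(a_t)))|."""
    total = 0.0
    for t in range(len(a_frames) - 1):
        translated = [g_ab(a) for a in a_frames[: t + 1]]     # prefix mapped into domain B
        predicted_next = p_b(translated)                      # predict the next B-frame
        total += abs(g_ba(predicted_next) - a_frames[t + 1])  # map back and compare
    return total

# Toy sequence increasing by 1 per phase; linear extrapolation predicts it exactly.
frames = [0.0, 1.0, 2.0, 3.0]
extrapolate = lambda xs: xs[-1] + 1.0          # predictor in domain A
print(recurrent_loss(extrapolate, frames))     # 0.0

g_ab = lambda a: 2.0 * a                       # toy domain shift
g_ba = lambda b: b / 2.0                       # its inverse
p_b = lambda xs: xs[-1] + 2.0                  # in domain B the step size doubles
print(recycle_loss(g_ab, g_ba, p_b, frames))   # 0.0
```

Unlike the plain cycle loss, the recycle term vanishes only when the translation and the temporal predictor are mutually consistent, which is what couples the spatial and temporal domains.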
Phantom data
We used the XCAT phantom program (Segars et al 2010) based on 18 patients' data (nine females and nine males) to generate cardiac CT images (thorax, 512×512×128, voxel size of 1 mm³) at two different dose levels: full dose and low dose (20% of the full dose). The number of phases per cardiac cycle was set to eight. The 18 phantoms were divided into nine pairs, each consisting of one female and one male phantom. To assess the generalization performance of CycleGAN and RecycleGAN, we used 9-fold cross-validation (CV). For each CV, the training dataset contains seven pairs of female and male phantoms, the validation dataset contains one pair, and the testing dataset contains another pair. Table 1 shows the phantom pairs for the nine CV sets. For each CV, the network was trained using the training data and the hyperparameters were tuned using the validation data. Afterward, the optimal hyperparameters were used to train the network using both the training and validation datasets. Finally, the denoising performance was evaluated on the test dataset. To account for the temporal relationship among cardiac phases, the images of each slice are viewed as a looped video of eight image frames.
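A minimal sketch of such a pair-wise 9-fold split is shown below; the particular assignment of validation pairs is an assumption for illustration, and the actual pairing follows the paper's table 1.

```python
# Nine female/male phantom pairs; fold i tests on pair i, validates on the
# next pair (cyclically), and trains on the remaining seven pairs.
pairs = [f"pair{i}" for i in range(9)]  # placeholder names for the 9 pairs

def make_folds(pairs):
    n = len(pairs)
    folds = []
    for i in range(n):
        test = [pairs[i]]
        val = [pairs[(i + 1) % n]]
        train = [p for p in pairs if p not in test + val]
        folds.append({"train": train, "val": val, "test": test})
    return folds

folds = make_folds(pairs)
assert len(folds) == 9
assert all(len(f["train"]) == 7 for f in folds)     # seven training pairs per fold
assert {f["test"][0] for f in folds} == set(pairs)  # every pair is tested exactly once
```

Splitting by whole phantom pairs (rather than by individual slices) keeps all images of one subject on the same side of the train/test boundary, avoiding subject-level leakage.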
Patient data
We also used real patient MP-CTA images from Mayo Clinic to evaluate the performance of RecycleGAN. MP-CTA images of 50 patients were retrospectively collected and deidentified (the IRB was approved by Mayo Clinic). Intravenous iodinated contrast (Omnipaque® 350) was injected using a bolus-tracking technique, where the volume and injection rate were determined by the patient weight, followed by a 10 c.c. saline chaser.
The arterial attenuation enhancement is 200~350 HU. These cases were acquired using a routine retrospectively ECG-gated helical scanning technique on a 3rd-generation 192-slice dual-source scanner (Force, Siemens Healthcare): 0.25 s rotation time, 192×0.6 mm detector configuration, helical pitch automatically selected based on heart rate, tube potential automatically determined (CAREkV), TCM (CAREDose4D, maximum tube current (MTC) 180 mAs in the pulse window and 20% outside), and ECG pulsing at the 40%-70% phases. These parameters may vary for some patients, especially for those with an irregular heartbeat. The CTDIvol varied from patient to patient depending on the patient size, heart rate, and regularity of the heart rate (31~120 mGy, i.e. 6~24 mSv). For an irregular heart rate, the pulsing window may be extended automatically, which can dramatically increase the radiation dose. 3D volume images (512×512 in plane, 300~375 slices, isotropic 0.4 mm voxel size) at 20 phases (0%-95% windows) were reconstructed using the Siemens ADMIRE algorithm with a Qr40 kernel (ADMIRE strength setting of 3). Therefore, among the 20 phases of CTA images of each patient, roughly 6 phases are of full dose (with MTC) while the remaining 14 phases are of low dose (with 20% MTC). Due to the patient size, heartbeat irregularity, and unbalanced numbers of full-dose and low-dose slices (# of full-dose slices ≪ # of low-dose slices), we selected the full-dose and low-dose slices for training (48 patients out of 50) based on the standard deviation (STD) of a square region in the aorta (full dose <39 HU and low dose >59 HU) and the requirement of at least three consecutive phases falling into either the full-dose window or the low-dose window. To tune the model hyperparameters, the MP-CTA images of one patient were used as the validation set. The remaining patient's dataset served as the test set for performance evaluation.
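The phase-selection rule described above (STD thresholds plus a minimum of three consecutive phases) can be sketched as follows; the helper name and the toy 20-phase STD profile are illustrative, not the clinical data.

```python
def select_runs(stds, full_thr=39.0, low_thr=59.0, min_run=3):
    """Label each phase full-dose (STD < full_thr), low-dose (STD > low_thr) or
    transition, then keep only runs of >= min_run consecutive same-label phases.
    Returns a list of (label, start_phase, end_phase_inclusive)."""
    labels = []
    for s in stds:
        if s < full_thr:
            labels.append("full")
        elif s > low_thr:
            labels.append("low")
        else:
            labels.append("transition")
    runs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            if labels[start] in ("full", "low") and i - start >= min_run:
                runs.append((labels[start], start, i - 1))
            start = i
    return runs

# 20 toy phases: low-dose, a ramp-up transition, the full-dose pulse window,
# then low-dose again (values are illustrative aorta-ROI STDs in HU).
stds = [65, 64, 66, 50, 45, 48, 30, 31, 29, 33, 30, 32,
        62, 63, 61, 66, 64, 65, 63, 62]
print(select_runs(stds))
# [('low', 0, 2), ('full', 6, 11), ('low', 12, 19)]
```

Phases in the transition window (here phases 3-5) are excluded from training, matching the paper's requirement that selected phases fall cleanly into one dose category.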
Evaluation metrics
To evaluate the proposed method, the peak signal-to-noise ratio (PSNR) (Huynh-Thu and Ghanbari 2008) and the structural similarity index (SSIM) (Zhou et al 2004, Horé and Ziou 2010) are used as quantitative measurements for the XCAT phantom data. The PSNR is an expression for the ratio between the (denoised) low-dose CT image x and the corresponding full-dose CT image y as follows:

PSNR(x, y) = 10 · log10(MAX_Y² / MSE),

where MAX_Y is the maximum signal value, set to 4095 for the 12-bit CT images in our experiments. The term 'MSE' stands for mean squared error and is defined as:

MSE = (1 / (m·n)) Σ_i Σ_j (x(i, j) − y(i, j))²,

where i and j are the row and column indices of the low-dose CT image x and the corresponding full-dose CT image y, respectively, and m and n represent the number of rows and the number of columns, respectively. The PSNR measures the cumulative difference between two images: the higher the PSNR, the better the denoising performance.
In addition to PSNR, the SSIM is designed to compare the luminance, contrast, and structure differences between two images and is defined as SSIM(x, y) = l(x, y) · c(x, y) · s(x, y), where

l(x, y) = (2 μ_x μ_y + c_1) / (μ_x² + μ_y² + c_1),
c(x, y) = (2 σ_x σ_y + c_2) / (σ_x² + σ_y² + c_2),
s(x, y) = (σ_xy + c_3) / (σ_x σ_y + c_3).

For the patient data, since the ground truth was unknown, the performance was evaluated using the STD in a square region of the aorta of the CTA images of the test patient, where uniform intensity is expected. Therefore, the lower the STD, the better the denoising performance.
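The two metrics can be written directly from these definitions. The sketch below uses flattened pixel lists and a single global SSIM window (reference implementations compute SSIM over local sliding windows), with the standard constants c1 = (0.01·L)², c2 = (0.03·L)², and c3 = c2/2 as an assumption.

```python
import math

MAX_VAL = 4095.0  # maximum signal value for 12-bit CT

def psnr(x, y):
    """PSNR = 10 * log10(MAX^2 / MSE) over flattened pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    return float("inf") if mse == 0 else 10.0 * math.log10(MAX_VAL ** 2 / mse)

def ssim(x, y, k1=0.01, k2=0.03):
    """Global (single-window) SSIM = l(x,y) * c(x,y) * s(x,y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n          # variance of x
    vy = sum((b - my) ** 2 for b in y) / n          # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx, sy = math.sqrt(vx), math.sqrt(vy)
    c1, c2 = (k1 * MAX_VAL) ** 2, (k2 * MAX_VAL) ** 2
    c3 = c2 / 2.0                                   # common convention
    l = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    c = (2 * sx * sy + c2) / (vx + vy + c2)
    s = (cov + c3) / (sx * sy + c3)
    return l * c * s

full = [100.0, 200.0, 300.0, 400.0]   # toy full-dose pixels
noisy = [110.0, 190.0, 305.0, 395.0]  # toy denoised low-dose pixels
print(round(psnr(full, noisy), 2))    # 54.29
print(round(ssim(full, full), 4))     # 1.0 for identical images
```

The constants c1-c3 keep the ratios stable when the means or variances are near zero, which matters for the nearly uniform aorta ROI used in the patient evaluation.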
Hyperparameters
Hyperparameters of CycleGAN and RecycleGAN were generally kept the same as in the previous publications (Bansal et al 2018, Li et al 2021). Specifically, for CycleGAN λ was set to 10, while for RecycleGAN λ_rx was set to 0.5, λ_ry to 50, λ_τx to 1, and λ_τy to 100. The networks were trained from scratch with random weights using the Adam solver. For each model, we searched for the best learning rate in the range of 5.00×10⁻⁶ to 1.26×10⁻³ based on the highest PSNR of the validation set (a pair of female and male phantoms for the phantom data and one patient for the patient data). For the phantom data, after training on each CV data set, the best-performing model was applied to the test dataset for performance evaluation. For the patient data, the learning rate was tuned using the validation patient and the best model was applied to the test patient.
Phantom results
We compared our proposed spatiotemporal RecycleGAN method with CycleGAN using PSNR and SSIM as quantitative metrics. Figure 3 shows the PSNR changes of the validation set for different learning rates across the nine CV sets. We separated the female and male validation PSNR, as some large differences were found between the genders (see tables 2 and 3). For CycleGAN, the learning rates 2×10⁻⁵ to 3×10⁻⁴ seem to form a PSNR plateau for the validation set. For RecycleGAN, this range narrows to 3×10⁻⁵ to 3×10⁻⁴. The best validation PSNR for each CV set is listed in table 2 along with the SSIM. First, the different PSNR and SSIM performance can be clearly seen between the female and male validation phantoms. In most cases for CycleGAN, the PSNR differences are 2-6 dB except for CV7 (less than 1 dB), while the SSIM difference ranges from more than 0.01 to about 0.07. This difference is mainly caused by the fact that the learning rate was tuned based on the overall PSNR using both the female and male validation phantoms.

We show an image (Phase 5) of the test female and male data denoised by CycleGAN and RecycleGAN for CV1, CV2 and CV3 in figures 4 (female) and 5 (male), respectively. The full-dose and low-dose images are also shown for reference. Both CycleGAN and RecycleGAN effectively remove the noise in the images. RecycleGAN has less noise and is closer to the full-dose images than CycleGAN, as shown in figures 4 and 5.
In figures 6 and 7, the eight phases of the heart region are shown for the different methods along with the full-dose and low-dose references. Again, both CycleGAN and RecycleGAN effectively suppress the noise. RecycleGAN does a better job than CycleGAN at further removing the noise in the myocardium and the blood pool. RecycleGAN also achieves better contrast and structure preservation than CycleGAN.
Patient results
For the patient CTA data, some phases are acquired with full dose (at 100% MTC) and some with low dose (at 20% MTC, or in the transition between 100% and 20% MTC). In figure 10, we compared the low-dose CTA images (phases 1, 3, 5, and 19) of the test patient with the CycleGAN- and RecycleGAN-denoised images. Similar to the findings in the phantom results, both CycleGAN and RecycleGAN can effectively suppress the noise, while RecycleGAN keeps the image details much better than CycleGAN. CycleGAN also suffers from some intensity artifacts, as marked by the yellow arrows in figure 10, which are consistent with those reported in the previous study (Gu et al 2021). The ROI images are shown in figure 11; CycleGAN and RecycleGAN yield lower noise compared to the original low-dose images. Furthermore, the RecycleGAN images are the least noisy and more consistent across all phases, while CycleGAN suffers from some artificial patterns and noise bumps for phase 5. The consistency of RecycleGAN is likely due to the recurrent loss, which brings the temporal correlation into the denoising mechanism. The quantitative STD measures of the ROI for the eight low-dose and eight high-dose phases are shown in table 5. CycleGAN does a good job for most phases (bringing the noise down from 50~60 HU to 30~40 HU) except for phase 6. RecycleGAN further suppresses the noise to the range of 16~26 HU for all phases.
Discussion and conclusions
RecycleGAN is more effective than CycleGAN for denoising low-dose CT image sequences, as it uses a recurrent loss to enforce temporal consistency. In essence, it treats a 2D image series as a 3D signal (2D space + 1D time) and denoises in 3D instead of 2D. This leads to more effective and consistent noise suppression and structure preservation. In the future, the whole 3D volume image plus time may be treated as a 4D signal to see whether further improvement could be achieved. At present, the training of RecycleGAN is more time-consuming (37 h for RecycleGAN versus 18 h for CycleGAN for the phantom data on an NVIDIA A6000 GPU). The computational burden of moving from 3D to 4D may be alleviated by multi-GPU parallelism.
In this work, we focused on comparing RecycleGAN and CycleGAN with extensive phantom and patient studies (with 9-fold cross-validation for the phantom study and 50 patients for the patient study). We used CycleGAN as a baseline, which has been extensively compared with other state-of-the-art denoising methods (You et al 2020, Li et al 2021). Although a direct comparison between RecycleGAN and other methods is lacking in this work, their relative performance can be deduced from the comparison between RecycleGAN and CycleGAN.
MP-CTA can offer more diagnostic information than SP-CTA. However, the higher radiation dose is a major hurdle to adopting MP-CTA broadly for CAD diagnosis. Therefore, lowering the MP-CTA dose level to be comparable to SP-CTA would be clinically significant. RecycleGAN is an important development toward this goal. First, RecycleGAN is a software-based method and does not require aligned low-dose and full-dose images. Although hardware differences may demand further tuning of a RecycleGAN model trained on a certain type of scanner (e.g. the Siemens Force in this work), as the nature of CT images is the same, a comprehensive model could be built using data from multiple scanners and centers. Secondly, RecycleGAN showed superior performance in suppressing noise and preserving the structural details and contrast of CTA image sequences compared to CycleGAN. If a constant 20% MTC could be used for MP-CTA, the radiation dose could be lowered by ~55% (assuming a 6-phase 100% MTC pulse window out of a total of 20 phases).
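The ~55% figure follows from simple tube-current bookkeeping, sketched here under the stated assumption of 6 full-dose phases out of 20:

```python
# Relative tube-current-time product per scan, in "phase-equivalents":
# current protocol = 6 phases at 100% MTC + 14 phases at 20% MTC;
# hypothetical denoised protocol = all 20 phases at 20% MTC.
baseline = 6 * 1.00 + 14 * 0.20   # 8.8 phase-equivalents
all_low = 20 * 0.20               # 4.0 phase-equivalents
reduction = 1.0 - all_low / baseline
print(f"{reduction:.1%}")         # 54.5%, i.e. ~55% as stated in the text
```

This treats dose as proportional to the tube-current-time product with all other acquisition parameters fixed, which is the usual first-order approximation.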
Although this dose level is still higher than that of SP-CTA, further reduction strategies, such as sparse sampling, could be exploited. Using advanced deep learning or reconstruction methods to explore the lower bound of the MP-CTA dose level without compromising the diagnostic outcomes is worth further investigation.
For the patient MP-CTA cases used in this study, ECG-gated tube current modulation was turned on with the pulsing window between 40% and 70% of the cardiac phases. The tube current outside the pulsing window was reduced to 20% of the full tube current. Therefore, this study focused on reducing the noise of low-dose images acquired outside the pulsing window. One previous study investigated CycleGAN denoising of extreme low-dose (high-noise) CT (Gu et al 2021). At 4% of the full dose, although the baseline CycleGAN method (Kang et al 2019) introduces some artificial features, the CycleGAN-denoised images still improved the signal-to-noise ratio (SNR) and the radiologist reading rates over the original LDCT images. To address the performance deterioration of CycleGAN, wavelet-assisted noise disentanglement (WAND) (Gu et al 2021) was introduced to extract high-frequency sub-band images (including both noise and edge information) before CycleGAN training. Their results showed that WAND was effective in suppressing high noise and avoiding artifacts. In figure 10, we also discovered artifacts in the CycleGAN images similar to those reported in (Gu et al 2021), which were successfully removed in the RecycleGAN images. This demonstrates that the spatiotemporal training in RecycleGAN may be an alternative way to correct the inconsistent translation of CycleGAN. Nevertheless, we believe that WAND can be deployed similarly with RecycleGAN, i.e. adding high-frequency sub-band image extraction before RecycleGAN training, when its denoising performance is significantly degraded by substantially elevated noise. This will be a topic for future investigation.
In summary, we developed a spatiotemporal deep learning denoising method, RecycleGAN, for low-dose cardiac CT image sequences. Compared to the state-of-the-art spatial-domain denoising method, CycleGAN, RecycleGAN utilizes the temporal relationship of several consecutive phases through a recurrent loss to further improve the denoising performance. Note that RecycleGAN still enjoys the advantage of CycleGAN of not needing aligned low-noise and high-noise images. Both the phantom and patient studies show that RecycleGAN outperforms CycleGAN in quantitative metrics and image quality for CT image sequences.
It is envisioned that RecycleGAN could be used to significantly lower the MP-CTA imaging dose by effectively removing the image noise. More clinically relevant evaluations will be conducted in future work.
The generator structure of RecycleGAN.
The predictor structure of RecycleGAN.
The discriminator structure of RecycleGAN.
The first term l(x, y) measures the closeness of the mean luminances μ_x and μ_y. The contrast c(x, y) is measured by the standard deviations σ_x and σ_y. The structure similarity s(x, y) is measured by the correlation between images x and y; σ_xy is the covariance between the two images. The constants c_1, c_2 and c_3 are used to stabilize the division operations (Zhou et al 2004, Horé and Ziou 2010). A higher SSIM value indicates a closer resemblance between the two images.
Figure 1 .
Figure 1. The workflow of CycleGAN. In the forward cycle (blue line), an image a from domain A is translated to domain B by generator G_AB, expressed as b̂ = G_AB(a). Then, b̂ is translated back to domain A, expressed as â = G_BA(G_AB(a)). The backward cycle (green line) has similar operations, where image b in domain B is mapped to domain A as â = G_BA(b) and then mapped back to domain B as b̂ = G_AB(G_BA(b)).
Figure 2 .
Figure 2. The workflow of RecycleGAN. In the forward cycle (blue line), an image a_t at time t from domain A is translated to domain B by generator G_AB, expressed as b̂_t = G_AB(a_t). Then, a temporal predictor P_B is applied on b̂_{1:t} to predict a future image b̂_{t+1}, and b̂_{t+1} is translated back to domain A, expressed as â_{t+1} = G_BA(P_B(G_AB(a_{1:t}))). The backward cycle (green line) has similar operations, where image b_s in domain B is mapped to domain A as â_s = G_BA(b_s) and then mapped back to domain B with a temporal predictor P_A, expressed as b̂_{s+1} = G_AB(P_A(G_BA(b_{1:s}))).
Figure 8 .
Figure 8. Twenty phases of the test patient CTA images (display window [−1000, 950] HU). The black box in the aorta is used as a region of interest (ROI) to calculate the standard deviation (STD) of the intensity to represent the noise level.
Figure 11 .
Figure 11. Images of the test patient in a region of interest denoised by different methods (top row: original low-dose images; middle row: CycleGAN; bottom row: RecycleGAN). Display window [76, 676] HU.
To keep the underlying data similar, we selected 16.2 thousand low-dose images and 15.8 thousand full-dose images for CycleGAN training, while we selected 15.8 thousand low-dose frames and 15.2 thousand full-dose frames for RecycleGAN training.The difference was caused by the requirement of three consecutive phases for RecycleGAN training, which was not satisfied by all CycleGAN training images.
Although the differences are also observed for the RecycleGAN metrics, they are notably smaller. RecycleGAN outperforms CycleGAN in almost all cases, except for the CV7 male SSIM (marked in bold blue in table 2). After taking the average values (± standard deviation) over the nine CV sets, the PSNR and SSIM for CycleGAN are 41.23 ± 2.16 dB and 0.9462 ± 0.0241 for the female validation data, and 41.13 ± 1.62 dB and 0.9526 ± 0.0188 for the male validation data. The corresponding numbers for RecycleGAN are 41.71 ± 2.07 dB and 0.9523 ± 0.0224 for the female validation data, and 42.10 ± 1.17 dB and 0.9600 ± 0.0108 for the male validation data. RecycleGAN achieves not only greater average values but also smaller variances than CycleGAN. The best models were then applied to the test dataset, and the PSNR and SSIM results are shown in table 3. Findings similar to the best validation metrics are observed, although the number of cases where RecycleGAN is worse than CycleGAN increases from one to two. RecycleGAN still outperforms CycleGAN in most cases, except for the PSNR of CV1 male and CV8 female (marked in bold blue in table 3). The PSNR and SSIM for CycleGAN are 40.36 ± 2.23 dB and 0.9431 ± 0.0250 for the female test data, and 40.91 ± 2.16 dB and 0.9501 ± 0.0208 for the male test data. The corresponding numbers for RecycleGAN are 40.84 ± 2.05 dB and 0.9512 ± 0.0215 for the female test data, and 41.43 ± 2.11 dB and 0.9572 ± 0.0178 for the male test data. The test results demonstrate again that RecycleGAN leads to better denoising performance than CycleGAN.
Phases 8-13 in this test patient should be in the 100% MTC window (full dose), while the others should be in the 20% MTC window (low dose) or the transition window. For clarity, four phases of each category are shown in figure 8, i.e. phases 1, 3, 5, and 19 for low dose with high noise and phases 8, 10, 12, and 14 for full dose with low noise. Note that phase 5 shows less noise than the other low-dose phases, as the MTC was ramped up during phases 4-6. The black box in the aorta in figure 8 is used as the region of interest (ROI) to calculate the standard deviation (STD) of the intensity to represent the noise level, and the magnified views of the ROI are shown in figure 9. The noise texture can be seen more clearly, and the top row (low-dose images) is much noisier than the bottom row (full-dose images). The STD values in HU for the eight low-dose phases and eight high-dose phases are listed in table 4, where the low-dose STD values are greater than 45 HU and the full-dose STD values are less than 40 HU. It is worth noting that these differ from the thresholds used for the selection of low-dose and full-dose training data (full dose <39 HU and low dose >59 HU). The transition phases (4-6) were included to see how effectively CycleGAN and RecycleGAN can denoise different noise levels in the test data.
Table 2 .
The best validation metrics for CycleGAN and RecycleGAN.
Table 3 .
Quantitative metrics for the test data for CycleGAN and RecycleGAN.
Table 4 .
The standard deviation (STD) values in HU of low-dose and full-dose ROI.
Table 5 .
The standard deviation (STD) values in HU in ROI for CycleGAN and RecycleGAN.
Biomed Phys Eng Express. Author manuscript; available in PMC 2023 October 23.
"year": 2023,
"sha1": "dc76ea2c245052ebfa16b082e9c332fdf057f94f",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/2057-1976/acf223/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5faf979edaee511b7d5425adc6ad55d79866b885",
"s2fieldsofstudy": [
"Medicine",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
215803742 | pes2o/s2orc | v3-fos-license | Enhancing the antigenicity and immunogenicity of monomeric forms of hepatitis C virus E2 for use as a preventive vaccine
The E2 glycoprotein of hepatitis C virus (HCV) is the major target of broadly neutralizing antibodies (bNAbs) that are critical for the efficacy of a prophylactic HCV vaccine. We previously showed that a cell culture-derived, disulfide-linked high-molecular-weight (HMW) form of the E2 receptor-binding domain lacking three variable regions, Δ123-HMW, elicits broad neutralizing activity against the seven major genotypes of HCV. A limitation to the use of this antigen is that it is produced only at low yields and does not have a homogeneous composition. Here, we employed a sequential reduction and oxidation strategy to efficiently refold two high-yielding monomeric E2 species, Δ123 and a disulfide-minimized version (Δ123A7), into disulfide-linked HMW-like species (Δ123r and Δ123A7r). These proteins exhibited normal reactivity to bNAbs with continuous epitopes on the neutralizing face of E2, but reduced reactivity to conformation-dependent bNAbs and nonneutralizing antibodies (non-NAbs) compared with the corresponding monomeric species. Δ123r and Δ123A7r recapitulated the immunogenic properties of cell culture-derived Δ123-HMW in guinea pigs. The refolded antigens elicited antibodies that neutralized homologous and heterologous HCV genotypes, blocked the interaction between E2 and its cellular receptor CD81, and targeted the AS412, AS434, and AR3 domains. Of note, antibodies directed to epitopes overlapping with those of non-NAbs were absent. The approach to E2 antigen engineering outlined here provides an avenue for the development of preventive HCV vaccine candidates that induce bNAbs at higher yield and lower cost.
with the virus, which causes progressive liver disease, including cirrhosis and cancer, that can ultimately be fatal or treatable only by liver transplant. Treatment with direct acting antivirals mediates high levels of viral clearance but does not prevent reinfection, and the fact that many infected individuals are unaware of their HCV-positive status leads to ongoing viral transmission. Modeling suggests that timely HCV elimination would be facilitated by the combined actions of direct acting antivirals and a yet to be developed preventive vaccine (1, 2).
HCV is an enveloped, positive-sense, single-stranded RNA virus. The viral surface glycoprotein E2 mediates attachment to target cell receptors, including the major receptor CD81, and is the main target for neutralizing antibodies (NAbs). Crystallographic data show that soluble E2 has a globular structure with a central immunoglobulin β-sandwich flanked by front and back layers (3)(4)(5). E2 has two broad antigenic regions: (i) a neutralizing face comprised of the front layer and CD81 binding loop targeted by NAbs and (ii) a nonneutralizing face comprised of sections of the back layer and immunoglobulin β-sandwich targeted by nonneutralizing antibodies (non-NAbs).
Spontaneous viral clearance, which occurs in ~30% of infected individuals, has been correlated with the early development of NAbs that have broad reactivity against multiple HCV isolates (bNAbs) and broadly reactive cell-mediated immunity (CMI) (6, 7). Furthermore, passively infused monoclonal bNAbs or polyclonal antibodies derived from HCV-infected humans can provide protection from challenge in small-animal models of HCV infection (8-12). A number of vaccine development approaches based on the elicitation of bNAbs and/or CMI, including recombinant protein, virus-like particles, and vaccine vectors, have been assessed in animal models or phase I and II clinical trials. The responses elicited have in most cases shown limited cross-genotype reactivity (reviewed in Refs. 7 and 13), and no HCV vaccine candidate aimed at developing bNAbs has advanced beyond a phase I clinical trial.
The development of a broadly protective HCV vaccine has been challenging for a number of reasons. Hepatitis C has extremely high sequence variability due to the lack of proofreading function of the virally encoded RNA-dependent RNA polymerase. As a result, HCV circulates as eight divergent genotypes with median within- and between-genotype amino acid sequence divergences of 23 and 33%, respectively (14). A prophylactic vaccine must therefore provide broad protection against the global pool of circulating viruses. Other viral defense mechanisms that blunt the immune response to E2 include the high number of attached glycans, some of which surround the CD81-binding site and have been demonstrated to shield bNAb epitopes (15, 16). E2 contains three variable domains. Hypervariable region 1 (HVR1) is a target of type-specific NAbs but plays a nonessential role in viral entry and so rapidly develops escape mutations that further diversify the viral sequence pool (17)(18)(19)(20). A further role for HVR1 is to maintain E2 in a conformation that is resistant to neutralization (21)(22)(23). Hypervariable region 2 (HVR2) and the intergenotypic variable region also play roles in reducing accessibility of the CD81-binding site and NAb epitopes (24). Together, these factors present a challenge to vaccine development.
We previously reported on a recombinant, soluble version of the E2 glycoprotein from which HVR1, HVR2, and the intergenotypic variable region were removed from the receptor-binding domain (RBD) (Δ123, Fig. 1). Like its WT RBD counterpart, the proteins expressed in mammalian cell culture include monomeric species as well as heterogeneous disulfide-linked forms. Enhanced cross-genotype neutralizing responses were preferentially generated in guinea pigs vaccinated with a high-molecular-weight form (Δ123-HMW) (25). Distinguishing Δ123-HMW from monomeric Δ123 was an occluded nonneutralizing surface and the preferential generation of antibodies that overlap with AS412, AS434, and AR3. A limitation to the use of cell culture-derived Δ123-HMW is that it comprises less than 5% of the total Δ123 yield and contains impurities, both significant problems in terms of cost and ease of purification for scaled-up vaccine production.
In this study, we used sequential reduction and oxidation to drive disulfide-bond rearrangement in order to refold monomeric E2 into an HMW-like form. We applied such refolding to RBD, Δ123, and their variants in which 7 cysteine residues were mutated to alanine (Δ123A7 and RBDA7; Fig. 1), which leads to a potentially simplified intramolecular disulfide-bonding pattern and a relatively homogeneous monomeric profile (26). We succeeded in refolding up to 70% of Δ123 and Δ123A7 monomers into assembled HMW-like forms (Δ123r and Δ123A7r) and compared the biophysical and antigenic properties of the assembled and cell culture-derived HMW forms. In addition, the immunogenicity was assessed in guinea pigs. Δ123r and Δ123A7r largely recapitulated the immunogenic properties of cell culture-derived Δ123-HMW and present a new avenue for the production of vaccine candidates with enhanced immunogenicity for HCV.
Soluble E2 monomers can be refolded into higher-molecular-weight forms
The formation of Δ123-HMW during expression in 293-F cells is driven by the formation of intermolecular disulfide bonds. However, this multimeric form generally represented less than 5% of the total purified glycoprotein and contains impurities (Table 1). We sought to improve the efficiency of production and homogeneity of HMW through limited reduction of intramolecular disulfide bonds in E2 monomers followed by slow oxidation to promote the assembly of higher-order species through the formation of intermolecular bonds, while preserving the immunogenicity of the molecule and its potential utility as a vaccine candidate. Affinity-purified Δ123 and Δ123A7 showed strikingly different size-exclusion chromatography (SEC) profiles. Δ123 consisted of a range of species with peaks at 46-, 60-, 70-, and 79-ml volume (4, 31.5, 16.5, and 48% of the total, respectively) (Fig. 2A), corresponding to the previously described HMW1, HMW2, dimer, and monomer species (25). In contrast, Δ123A7 was almost entirely monomeric (Fig. 2B). SEC fractions corresponding to monomeric Δ123 and Δ123A7 (indicated by the gray shading in Fig. 2, A and B) were pooled and concentrated, and monomeric status was confirmed by analytical SEC (Fig. S1, A and B). In both reducing and nonreducing SDS-PAGE, the monomeric forms of Δ123 and Δ123A7 migrated to positions consistent with their expected monomer glycoprotein size of ~47 kDa (25), confirming the lack of stable intermolecular disulfide bonds (Fig. 2I). Yields of ~20-40 and 10-15 mg of purified monomeric protein per liter of tissue culture supernatant were obtained for cells stably transfected with Δ123 and transiently transfected with Δ123A7, respectively. SEC of the monomers after DTT-induced refolding showed that both Δ123 and Δ123A7 efficiently assembled HMW species with ~60-70% of the total in this form (Fig. 2 (C and D, respectively) and Table 1). The HMW peak for Δ123A7 eluted slightly earlier than that of Δ123 (49 ml compared with 53 ml). Fractions corresponding to the refolded species (indicated by the hatched shading in Fig. 2, C and D) were pooled and used for further analyses. Similar to Δ123 and Δ123A7, RBD eluted as a range of species with a distinct monomeric peak at 75 ml, whereas RBDA7 was almost entirely monomeric (Fig. 2, E and F). The SEC fractions corresponding to monomeric RBD and RBDA7 (Fig. 2, E and F, gray shading) were pooled and concentrated, and monomeric status was confirmed by analytical SEC (Fig. S1, C and D). These species ran to positions consistent with their expected monomer glycoprotein size of ~55 kDa in SDS-PAGE (Fig. 2I). When monomeric RBD and RBDA7 were subjected to DTT-induced refolding, assembled HMW was formed less efficiently, with a lower percentage of the total refolding and a smaller size of the HMW species generated (Fig. 2 (G and H, respectively) and Table 1). This indicated that the presence of one or more of the HVRs inhibited DTT-induced refolding; hence, RBD and RBDA7 were not analyzed further.

EDITORS' PICK: Multimerization of HCV E2 enhances immunogenicity

Figure 1. In Δ123 and Δ123A7, N-terminal truncation removed hypervariable region 1, whereas hypervariable region 2 and the intergenotypic variable region were replaced with short linkers (amino acids GSSG). The positions of Cys residues are indicated with residue numbers above the schematic. In RBDA7 and Δ123A7, Cys residues at positions 452, 486, 569, 581, 585, 597, and 652 were mutated to alanine.
Biophysical characterization of refolded Δ123 and Δ123A7
Biophysical techniques were used to examine the size of refolded Δ123r and Δ123A7r. We previously used SEC-multiangle light scattering (SEC-MALS) analysis to show that monomeric Δ123 was 47 kDa and cell culture-derived Δ123-HMW was ~2,400 kDa, whereas a smaller species, HMW2, was 240 kDa (25). SEC-MALS analysis of assembled Δ123r and Δ123A7r proteins revealed that they were polydispersed with a wide molar mass range, with both having a weight-average molar mass of 409 kDa (Table 2). This was ~9-fold higher than monomeric Δ123, but smaller than that previously reported for cell culture-derived Δ123-HMW (25).
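The weight-average molar mass reported by SEC-MALS is the concentration-weighted mean over chromatogram slices. A minimal sketch of that calculation, with invented slice values (not data from the paper):

```python
# Weight-average molar mass (Mw) across SEC-MALS slices: a minimal sketch.
# The slice concentrations and per-slice molar masses below are illustrative
# placeholders, not measurements from this study.
def weight_average_molar_mass(concentrations, molar_masses):
    """Mw = sum(c_i * M_i) / sum(c_i), summed over chromatogram slices."""
    numerator = sum(c * m for c, m in zip(concentrations, molar_masses))
    denominator = sum(concentrations)
    return numerator / denominator

# Example: a polydisperse peak spanning ~200-700 kDa slices.
c = [0.1, 0.3, 0.4, 0.2]          # relative slice concentrations
m = [200e3, 350e3, 450e3, 700e3]  # per-slice molar mass (Da)
print(weight_average_molar_mass(c, m))  # single Mw for the peak, in Da
```

A polydispersed peak therefore reports one Mw even when the underlying species differ widely in size, which is why the 409-kDa value is compatible with a broad mass range.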
We next examined the thermal stability of the E2 antigens using differential scanning fluorimetry (DSF). The traces obtained for monomeric Δ123 and Δ123A7 (Fig. 3, A and B) indicated moderate differences in thermal stability, with melting temperature (Tm) values of 77 and 71°C, respectively. This suggested that the lower number of cysteine residues, and consequently the reduced number of disulfide bonds, in Δ123A7 reduced thermal stability compared with Δ123. We were unsuccessful in obtaining Tm values for the assembled glycoproteins using DSF, probably due to excess uptake of dye prior to heating. We therefore utilized indirect methods to assess the thermal stability of these molecules. A modification of blue native PAGE (BN-PAGE) was used to assess the resistance to dissociation of the multimeric structure of the assembled antigens by heating to temperatures ranging from room temperature (RT) to 100°C for 5 min, either in the absence or presence of the reducing agent DTT, prior to BN-PAGE. In the absence of pretreatment, both Δ123r and Δ123A7r migrated to ~720 kDa. The multimeric structure of the antigens was largely (Δ123r, Fig. 3C) or completely (Δ123A7r, Fig. 3D) resistant to heating to 100°C in the absence of reducing agent, consistent with the cross-linking of subunits by nonlabile disulfide bonds. The addition of 0.2 mM DTT during heating to 60°C and above caused progressive dissociation of Δ123r HMW-like multimers, with a mean HMW band intensity at ~720 kDa of 0.88 at 60°C reducing to 0.27 and 0.30 at 90 and 100°C, respectively, relative to DTT treatment at RT (Fig. 3, C and E). In contrast, Δ123A7r HMW-like multimers were more resistant to dissociation at the same temperatures and DTT concentration, with mean HMW band intensities at ~720 kDa of 0.85 and 0.81 at 90 and 100°C, respectively, relative to RT (Fig. 3, D and E).
Thermal stability of specific epitopes was analyzed using the conformation-dependent nonneutralizing MAb14 (24) in a direct ELISA modified by the additional step of heating the antigens at the indicated temperature in carbonate buffer for 30 min prior to coating the plates. MAb14 was used because it binds equally to Δ123, Δ123A7, and the refolded versions of these antigens (Table 3). Results are shown as MAb14 binding to treated antigen relative to untreated antigen. The MAb14 epitope was largely resistant to thermal disruption up to 90°C, with 100°C treatment reducing the binding of monomeric proteins marginally more than that of the refolded versions (Fig. 3F). The bNAb HC84.27 was also used in this assay because it is well-characterized, binds the neutralizing face of E2, has a discontinuous epitope (27), and showed adequate binding to all of the antigens assessed (Table 3). HC84.27 binding was more sensitive to thermal disruption than MAb14, with binding markedly reduced by treatment at temperatures of 80°C or above for all antigens assessed (Fig. 3G). The HC84.27 epitope was more resistant to heat treatment up to 70°C within the Δ123A7 monomer compared with the other antigens.
Antigenic comparison of refolded and monomeric E2
Monomeric and assembled forms of Δ123 and Δ123A7 and cell culture-derived Δ123-HMW were compared for their reactivity with a panel of E2-specific mAbs by direct ELISA (Fig. S2), with the -fold difference in binding compared with Δ123 monomer shown in Table 3. Compared with the other antigens, Δ123r and Δ123A7r showed markedly reduced reactivity to bNAb HC11 (domain B) and the AR3 bNAbs AR3A and AR3D, with Δ123r also showing markedly reduced reactivity to AR3C and HC-1 (domain B). These data suggest that a subset of conformation-dependent epitopes are occluded, or their structure is altered, on the neutralizing face of E2 in the assembled glycoproteins. By contrast, the reactivity of Δ123r and Δ123A7r was similar to cell culture-derived Δ123-HMW and monomeric antigen forms for bNAbs with linear epitopes localizing to the … and Δ123A7r showed markedly reduced binding to the non-NAbs 2A12, CBH4G, and AR1A compared with the corresponding monomers. Cell culture-derived Δ123-HMW also had markedly reduced binding to 2A12 and CBH4G, suggesting occlusion of the nonneutralizing face of E2 in the cell culture-derived HMW and assembled glycoproteins. The H52 mAb was an exception in that binding to Δ123r was strongly enhanced, recapitulating the enhanced binding of this mAb to the cell
culture-derived Δ123-HMW. This antibody is sensitive to mutation at Cys652 (footnote 3), and as a consequence, H52 binding to monomeric Δ123A7 and Δ123A7r was either markedly or moderately reduced, respectively, compared with monomeric Δ123. We sought to confirm the direct ELISA binding data by using biolayer interferometry (BLI) to measure the reactivity of the analyte-phase E2 antigens to a subset of mAbs (HCV1 (AS412), AR3C (AR3), and 2A12 (domain A)). In these experiments, the multimeric forms of antigen did not have measurable off rates in most cases, presumably through avidity effects, precluding determination of KD values (Fig. S3 and Table S1). The KD values obtained for Δ123 and Δ123A7 monomers for the three antibodies were broadly similar, supporting the direct ELISA data that showed similar binding levels for these antigen/antibody combinations. BLI sensorgrams of Δ123r/AR3C and Δ123A7r/AR3C and of all multimeric antigens to 2A12 showed minimal binding, consistent with the minimal binding seen by direct ELISA.
We next examined the ability of the different E2 species to bind to the plate-bound large extracellular loop (LEL) of CD81 in a capture ELISA. As reported previously, cell culture-derived Δ123-HMW showed an ~2–3-fold reduction in LEL binding compared with Δ123 monomers (25). Assembled Δ123r and Δ123A7r did not bind to CD81 LEL at levels significantly above background (Fig. 4, A and C). The binding of CD81 LEL to Δ123A7 monomers was reduced by approximately 1 log compared with Δ123 monomers, suggesting that the mutational loss of 7 cysteine residues reduces CD81 binding capacity. Binding of the anti-His6 mAb was used to confirm equal loading of E2 antigens in a direct ELISA (Fig. 4B). CD81 binding was also assessed by coating plates with E2 antigen and measuring the capture of CD81 LEL (data not shown). This experiment showed similar relative binding of the different antigen species to CD81 LEL, including the loss of reactivity of Δ123r and Δ123A7r.
Assembled forms of E2 induce strong E2-specific antibody responses
To assess the immunogenicity of the assembled Δ123r and Δ123A7r proteins, guinea pigs were immunized four times with the proteins in the MF59-analog adjuvant AddaVax™. The E2-specific titers of the sera of guinea pigs vaccinated with Δ123-HMW (n = 8, group 1), Δ123r (n = 8, group 2), Δ123A7r (n = 8, group 3), Δ123 monomers (n = 4, group 4), Δ123A7 monomers (n = 4, group 5), and negative controls (n = 3, group 6) toward the monomeric forms of Δ123 or RBD were determined by direct ELISA (Fig. 5, A and B, respectively). Antibody titers were robust and similar for all immune groups toward both antigens, generally ranging from 10^4 to 10^5. Within the Δ123-HMW, Δ123r, and Δ123A7r groups, where animal numbers were sufficient to support statistical analysis, there were no significant differences between the groups (p > 0.05), and within-group means had a narrow range between 10^4.2 and 10^4.6.
To examine whether antibodies able to recognize epitopes I, II, and III were generated, the corresponding avidin-bound biotinylated peptides were used to capture specific antibodies present in the immune sera (Fig. 5, C–E, respectively). All animals generated measurable antibodies specific to these regions, with the single exception of one serum from the Δ123A7 monomer group against epitope I. There were generally similar titers of antibodies elicited in the Δ123-HMW, Δ123r, and Δ123A7r immune groups, with a trend toward lower titers in animals that received monomeric immunogens, particularly Δ123A7 monomers. Within the Δ123-HMW, Δ123r, and Δ123A7r groups, the only significant difference was that the mean titer against epitope III for the Δ123A7r-vaccinated group was narrowly significantly lower (p = 0.0486) than for the Δ123r-vaccinated group.
Vaccine-induced antibodies compete with CD81 LEL and mAbs for binding to E2
To examine whether the E2-vaccinated groups generated antibodies able to prevent the interaction between the homologous genotype 1a RBD and CD81, an ELISA was performed in which RBD and immune sera were mixed in solution and incubated prior to addition to plate-bound CD81 LEL. The immune sera from all E2-vaccinated animals competed with the interaction between CD81 and the homologous G1a RBD antigen, with similar titers elicited between groups (Fig. 6A), despite significant occlusion of the CD81-binding surface in the case of Δ123r and Δ123A7r. Antibodies able to block the interaction between heterologous genotype 2a RBD and CD81 LEL were also present in all sera, albeit at lower titers (Fig. 6B). There were no statistically significant differences between the groups assessed for either interaction.
We also examined the specificity of the immune serum by employing a competitive ELISA using a subset of the bNAbs and non-NAbs that were used to assess the antigenicity of the E2 molecules. Immune sera of all vaccinated animals were able to compete with bNAbs HCV1 (AS412), AR3C (AR3), and HC84.27 (AS434) for interaction with the homologous RBD. Where group sizes allowed statistical comparison (Δ123-HMW, Δ123r, and Δ123A7r), there were no statistically significant differences between the groups. There was a trend toward higher titers in the Δ123-HMW, Δ123r, and Δ123 monomer groups compared with the Δ123A7r and Δ123A7 monomer groups (Fig. 7, A–C). In contrast, sera from animals vaccinated with monomeric Δ123 and Δ123A7 had higher titers of antibodies able to compete with binding of the non-NAbs 2A12 and CBH4G compared with sera from animals vaccinated with the cell culture-derived or assembled HMW forms (Fig. 7, D and E). In fact, no CBH4G-competing antibodies were observed for any of the animals in the groups vaccinated with Δ123-HMW, Δ123r, or Δ123A7r. Overall, these results show that the assembled immunogens elicit antibodies that overlap with bNAb epitopes located in antigenic regions AS412, AS434, and AR3, even when the antigenic reactivity to these mAbs was markedly reduced (in the case of AS434 and AR3) and the immunogenicity of non-NAb epitopes was significantly decreased.
3 H. Drummer, unpublished observation.
Assembled forms of E2 induce neutralizing antibodies
Neutralization assays were performed on 1:40 dilutions of all sera against both homologous genotype 1a using pseudotyped retroviral particles (HCVpp) and heterologous cell culture-derived virus (HCVcc) containing the structural regions of genotypes 2a, 3a, and 5a (Fig. 8). Where group sizes allowed statistical comparison (Δ123-HMW, Δ123r, and Δ123A7r), no statistically significant differences were found; however, we noted several trends in these data. The strongest levels of neutralization were detected toward the homologous G1a HCVpp.
[Fig. 3 legend, continued: … 2 mM DTT at the indicated temperatures relative to RT + 0.2 mM DTT, which was assigned a value of 1.0 (E). ELISA reactivity of MAb14 (F) and HC84.27 (G) after pretreatment of E2 antigens at the indicated temperatures for 30 min prior to coating. Titers were obtained by interpolating fitted curves at 25-fold above background (given by reactivity to BSA) and were expressed relative to that of RT, which was assigned a value of 1.0. Error bars in E represent the S.D. of two independent experiments, and error bars in F and G represent the S.D. of three independent experiments. Note that in one case each for Δ123 monomers and Δ123A7 monomers, the threshold for titration to HC84.27 after treatment at 100°C was not met, in which case they were assigned a titer of the highest concentration of antibody used (3,160 ng/ml).]
We next sought to determine whether neutralizing activity correlated with other ELISA binding or inhibitory titer parameters, combining the immune sera across all vaccination groups (Table 4 and Fig. S4). Most parameters had a statistically significant positive correlation with H77 neutralization, the exceptions being the epitope III binding titer, inhibition of the G2a RBD/CD81 interaction, and inhibition of the interaction between RBD and the non-NAbs 2A12 and CBH4G. The strongest positive correlations with neutralization were observed for the inhibition of binding of the bNAbs HCV1 (AS412), AR3C (AR3), and HC84.27 (AS434) and of CD81 LEL to H77c RBD, and for the direct binding titers to epitopes I (AS412) and II (AS434) (p < 0.005, r > 0.5 for these parameters). This suggests that HMW and assembled HMW forms of E2 are able to elicit antibodies targeting multiple neutralization domains, including AS412, AS434, and AR3, while reducing the generation of potentially deleterious non-NAbs.
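The correlation analysis above pairs one serum parameter with % neutralization across all pooled animals and reports r and p. A minimal sketch using a rank-based (Spearman) correlation; the paper does not state which correlation statistic was used, and the data below are invented placeholders, not study values:

```python
# Hedged sketch of correlating a binding/inhibition parameter with
# neutralization across pooled immune sera. Values are illustrative only.
from scipy.stats import spearmanr

bnab_inhibition_titer = [1.2, 2.5, 1.8, 3.0, 2.2, 1.5, 2.8, 2.0]  # log10 ID50
h77_neutralization    = [20., 75., 50., 85., 60., 30., 90., 55.]  # % at 1:40

# Rank-based correlation is robust to the nonlinear dose scales involved.
r, p = spearmanr(bnab_inhibition_titer, h77_neutralization)
print(f"Spearman r = {r:.2f}, p = {p:.4g}")
```

With real data, a parameter would be called significantly correlated under the paper's threshold when p < 0.005 and r > 0.5.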
Discussion
Here, we report on efforts to synthetically produce a disulfide-linked HMW multimer of the HCV E2 glycoprotein using sequential reduction and oxidation to drive intermolecular disulfide bond formation. This was prompted by our previous finding that an HMW-Δ123 multimer, which was spontaneously formed during expression in 293-F cells, showed superior immunogenicity compared with monomeric E2 but was expressed at very low levels. Stably transfected 293-F cell clones yielded 20–40 mg of Δ123 monomer/liter of tissue culture supernatant, of which ~60–70% could be assembled into the HMW form by sequential reduction and oxidation. This compares with the less than 5% of the total yield for cell culture-derived Δ123-HMW. The refolding strategy was efficient at producing HMW multimers of Δ123 and Δ123A7 but not RBD or RBDA7, suggesting that the presence of the HVRs sterically interferes with intermolecular disulfide bond formation and/or maintenance of assembled multimers. To efficiently form HMW complexes, Δ123 and Δ123A7 monomers would have undergone extensive intramolecular disulfide bond breakage and then formation of intermolecular disulfide bonds when sequentially reduced and oxidized during the assembly process. The disruption of intramolecular disulfide bonds within the monomeric antigens and/or the formation of novel intramolecular disulfide bonds that were not present prior to refolding would be expected to broaden the range of conformational states adopted by Δ123r and Δ123A7r. This may account for the observed reduced reactivity of the assembled forms to a number of conformation-dependent bNAbs and non-NAbs and to soluble CD81. In contrast, reactivities to linear-epitope mAbs were similar when assembled and monomeric forms were compared. Despite this apparent global skewing toward the presentation of linear epitopes, Δ123r and Δ123A7r elicited antibodies that competed with the interaction between RBD and CD81 and with the conformational bNAbs tested that targeted the AR3 and AS434 epitopes, to a similar extent as sera generated by monomeric E2. Importantly, antibodies raised against assembled Δ123r and Δ123A7r and cell culture-derived Δ123-HMW either did not compete with the two non-NAbs assessed or did so less potently than antibodies present in sera raised against monomeric Δ123 and Δ123A7. The occlusion of non-NAb epitopes may refocus the immune system toward NAb targets.
[Table 3 legend: Antigenicity of monomeric and assembled Δ123 and Δ123A7 and cell culture-derived Δ123-HMW measured by direct ELISA. Numbers show the fraction of antibody reactivity compared with that of monomeric Δ123, which was assigned a reactivity of 1.0 for all antibodies. Indicated are relative binding of <0.5 (yellow shading), <0.1 (red shading), and >2.0 (green shading).]
This concept is well-established in HIV vaccine development, where mutations have been designed to stabilize the HIV Env trimer and occlude non-NAb epitopes, with some success in eliciting NAb responses in small animals (28, 29).
[Fig. 5 legend, continued: Half-log serial dilutions of sera were performed, and curves were fitted by nonlinear regression. Titers were obtained by interpolation using a value of 25-fold above background (defined by signal in the absence of sera) for the Δ123 and RBD antigens and 20-fold above background (defined as above) for peptides I, II, and III. The dashed line shows the lower detection limit of the assay (1:100 dilution).]
[Fig. 6 legend, continued: Half-log serial dilutions of sera and a constant concentration (0.5 μg/ml) of H77c RBD (A) and JFH-1 RBD (B) were mixed, incubated for 1 h, and then added to plate-bound CD81 LEL in a competitive ELISA. E2 antigen was detected using the anti-His6 tag antibody. Curves were fitted by nonlinear regression, and ID50 values were interpolated using binding in the absence of guinea pig sera as 100% binding. Data are shown as the log10 ID50 of individual guinea pig sera. The dashed line shows the lower detection limit of the assay (1:10 dilution).]
It has been shown that functional E2 on the surface of viral particles exists as a noncovalently linked heterodimer with E1 (30) (and references therein). One study also found that virus-associated E1 forms homotrimers (31), suggesting a form comprised of a trimer of heterodimers. The potential higher-order quaternary structure of these proteins on viral particles is less well-characterized. One study found that virion-associated E1 and E2 formed disulfide-linked HMW complexes of greater than 440 kDa, with evidence to suggest that this form was functional (32). It is possible that the HMW form of E2 in our study retains a subset of conformations present in virus-associated E2. More efficient antigen uptake and presentation of larger molecules by antigen-presenting cells and/or more avid binding to B-cell receptors may also have played a role in the enhanced immunogenicity of HMW E2. This is well-recognized for large-sized antigen platforms, such as virus-like particles or liposomes, or the coupling of small antigens to larger carrier molecules, but has been less well-studied for single-component multimeric complexes. Larger multimeric forms of the lipopolysaccharide O antigen of Francisella tularensis and the meningococcal capsular polysaccharide of Neisseria meningitidis have been shown to mediate enhanced immunogenicity compared with smaller forms of the same antigen (33, 34).
[Fig. 7 legend, continued: …, and CBH4G (100 ng/ml) (E) were incubated with plate-bound RBD in a competitive ELISA. mAb binding was detected using a horseradish peroxidase-conjugated secondary antibody specific for human antibody. Curves were fitted by nonlinear regression, and ID50 values were interpolated using binding in the absence of guinea pig sera as 100% binding. Data are shown as the log10 ID50 of guinea pig sera. The dashed line shows the lower detection limit of the assay (1:10 dilution).]
[Fig. 8 legend, continued: Individual data points are the mean of within-assay triplicate measurements, and bars represent within-group means. Where negative neutralization values were obtained, they were assigned a value of 0. The dotted line represents the mean level of nonspecific neutralization of three control sera from guinea pigs vaccinated with adjuvant alone.]
A number of viral glycoproteins have a high degree of structural plasticity in some protein domains, and this is believed to play a role in viral immune evasion. It has been shown that stabilizing the HIV-1 envelope protein structure by glutaraldehyde-mediated cross-linking selectively enhances the humoral immune responses to key neutralizing epitopes (35). Structural plasticity has also been reported for HCV E2 (36, 37), despite E2 being highly stable overall, as evidenced by a high melting temperature. This is exemplified by the 412–423 N-terminal region, which displays high sequence conservation, contributes to the CD81-interactive region, and is a target epitope of bNAbs. Crystal-derived structures of a number of bNAb-derived Fabs in complex with peptides corresponding to this region of E2 showed that the peptide variously adopted either a β-hairpin (21, 38–41), extended (42), or short antiparallel β-sheet/extended coil (43) conformation. DSF analysis showed that Δ123 and, to a lesser extent, Δ123A7 monomers had high overall thermostability, which has previously been reported for similar E2 constructs using calorimetry (37). The post-heating ELISA binding data indicated that the MAb14 epitope was generally resistant to disruption. Assembled versions of E2 were slightly more heat-resistant than the corresponding monomers, which may be an advantage in terms of immune recognition of normally labile epitopes. In comparison, the HC84.27 epitope was more heat-sensitive, with the Δ123A7 monomers showing more heat resistance than Δ123 monomers or either assembled HMW form. It therefore appears that thermal resistance can be epitope-dependent. It is likely that reformation of individual epitopes occurs after heat treatment in the ELISA experiments, whereas this would not occur during DSF, where protein unfolding is measured continuously in real time.
If Δ123A7 was able to refold more efficiently through a pathway that was simplified by the reduced number of disulfide bonds, this may explain why it had a lower Tm in DSF but showed higher heat stability by ELISA at the HC84.27 epitope. In BN-PAGE experiments, multimers of Δ123A7r showed greater resistance to dissociation than Δ123r when samples were heated in the presence of DTT prior to electrophoresis, in contrast to the DSF data showing that monomeric Δ123A7 was less heat-stable than Δ123 monomers. A simplified assembly may have allowed Δ123A7r to form more compact multimers that were more resistant to dissociation by heating and the reducing agent DTT than Δ123r.
When used to immunize guinea pigs, the HMW forms (cell culture-derived HMW, Δ123r, and Δ123A7r) elicited RBD- and Δ123-binding titers similar to those of the monomeric forms of these glycoproteins. When titers against three peptides corresponding to CD81-interactive regions/NAb targets (epitopes I–III) were compared, there was a trend for Δ123 monomers, and especially Δ123A7 monomers, to generate lower titers than the HMW forms. These data are consistent with the maintenance of reactivity of HMW to antibodies with linear epitopes seen in the antigenicity studies (Table 3 and Fig. S2) and suggest a possible bias toward the induction of antibodies with linear epitopes. They are also consistent with the reduced antibody/HMW binding off rates observed with BLI, which may translate to more avid B-cell receptor binding and prolonged signaling. The individual immune sera generated by all antigens consistently and robustly neutralized the homologous G1a H77c pseudotyped viral particles. Heterologous G5a was more frequently neutralized by sera generated by the HMW forms, with 14 of 24 (58%) sera showing at least 50% neutralization compared with 2 of 8 (25%) sera of monomer-vaccinated animals. G2a and G3a were generally less consistently neutralized, with 50% or fewer of the sera reaching the 50% neutralization level in most immunization groups. G1a neutralization was also consistently positively correlated with most binding parameters, including competition with bNAbs, but not non-NAbs, for binding to RBD monomers. Not unexpectedly, given the less consistent neutralization seen for G2a, G3a, and G5a, few significant positive correlations between the neutralization of these viruses and binding parameters were observed (data not shown).
Overall, we found that assembled versions of Δ123, when used in combination with AddaVax™, were as effective as cell culture-derived HMW at generating antibodies that bind intact E2 or peptides corresponding to neutralizing targets, block the interaction between E2 and CD81 or bNAbs (but not non-NAbs), and neutralize virus. The HMW assembly strategy described here was very efficient at producing disulfide-linked HMW at levels compatible with vaccine production, in contrast to the very low yields of cell culture-derived HMW. This novel approach shows utility for the production of HCV vaccine candidates and may have broader vaccine applicability where the advantages associated with larger antigen size are sought.
Recombinant protein expression and purification
The soluble HCV E2 ectodomain comprising amino acids 384–661 (RBD) (H77c polyprotein numbering used here and throughout), the Δ123 E2 core domain in which the three HVRs were either removed (residues 384–408) or replaced with GSSG linkers (residues 461–485 and 570–580), and modified versions of these glycoproteins bearing seven cysteine-to-alanine mutations (A7: C452A, C486A, C569A, C581A, C585A, C597A, and C652A) (Fig. 1) were expressed in Freestyle 293-F cells (293-F; Thermo Fisher Scientific) as described previously (25, 26). Δ123 was produced using a stably transfected cell clone, whereas Δ123A7, RBD, and RBDA7 were produced in cells transiently transfected using 293fectin (Thermo Fisher Scientific) according to the manufacturer's recommendations. All versions were purified from tissue culture supernatant by affinity chromatography using Talon resin (Clontech, Mountain View, CA) via the C-terminal His6 tag following the manufacturer's guidelines. Eluates were concentrated and buffer-exchanged into PBS adjusted to pH 6.8 (PBS 6.8) and subjected to SEC using a Superdex 200 16/600 column (GE Healthcare, Uppsala, Sweden). Analytical SEC to confirm the isolation of monomeric E2 was performed using a Superdex 200 10/300 column (GE Healthcare). CD81 LEL was expressed and purified as a dimer in Escherichia coli as described previously (44).
Assembly of HMW-like E2 proteins
E2 monomers were buffer-exchanged from PBS 6.8 to 50 mM carbonate-bicarbonate buffer, pH 9.6, at a final E2 concentration of 1 mg/ml. DTT was added to a final concentration of 0.6 mM, followed by incubation at 37°C for 30 min. The DTT concentration was then adjusted to 1.2 mM followed by further incubation at 37°C for 30 min. PBS 6.8 equaling 50% of the reaction volume was added followed by incubation at RT for 15 min to allow for slow disulfide bond reformation. This step was repeated twice with the amount of PBS 6.8 added equaling 50% of the original reaction volume on each occasion. The reaction buffer was then fully exchanged back into PBS 6.8 and concentrated prior to SEC.
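The assembly protocol above raises DTT to 1.2 mM and then slowly back-dilutes it with three PBS 6.8 additions, each equal to 50% of the original reaction volume. The resulting DTT (and protein) concentrations follow directly from conservation of moles; a sketch of that arithmetic, with an illustrative starting volume:

```python
# Sketch of the dilution arithmetic in the assembly protocol: after DTT is
# adjusted to 1.2 mM, PBS 6.8 is added three times, each addition equal to
# 50% of the ORIGINAL reaction volume. The 10-ml starting volume is an
# illustrative assumption, not a value stated in the protocol.
def stepwise_dilution(conc_mM, v0_ml, additions_ml):
    """Yield the concentration after each successive addition of DTT-free buffer."""
    moles = conc_mM * v0_ml  # amount of solute (mM * ml, arbitrary units)
    vol = v0_ml
    for add in additions_ml:
        vol += add
        yield moles / vol

v0 = 10.0  # illustrative reaction volume (ml)
dtt_steps = list(stepwise_dilution(1.2, v0, [0.5 * v0] * 3))
print([round(c, 2) for c in dtt_steps])  # DTT in mM after each PBS addition
```

So by the final oxidation step the reaction sits at 2.5× its original volume, with DTT at 0.48 mM and the E2 itself diluted from 1 to 0.4 mg/ml, before the buffer is exchanged fully back into PBS 6.8.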
PAGE
A modification of the BN-PAGE method was performed in the presence of the indicated concentration of the reducing agent DTT and/or with sample heating at the indicated temperature prior to electrophoresis. Native PAGE 4–16% BisTris gels (Invitrogen) were used following the manufacturer's instructions. The indicated E2 antigens (4 μg) and NativeMark protein standards (Thermo Fisher Scientific) were adjusted to 1× sample buffer (4× sample buffer: 200 mM BisTris, 64.2 mM HCl, 200 mM NaCl, 40% (w/v) glycerol, 0.004% (w/v) Ponceau S) prior to loading on the gel. The gel was fixed in 50% ethanol and 2% phosphoric acid; stained in 8.5% phosphoric acid, 10% ammonium sulfate, 20% methanol, and 0.12% Coomassie blue G-250 dye; and imaged using a LI-COR Odyssey IR imager and version 3.0 software. Band intensity was quantified with Image Lab version 6 software (Bio-Rad). Denaturing SDS-PAGE of the indicated E2 monomeric antigens (4 μg) and Precision Plus protein standards (Bio-Rad) was performed using standard conditions either in the absence or presence of the reducing agent β-mercaptoethanol. Gels (12% Tris/glycine) were stained with Coomassie dye and imaged as above.
DSF
The thermal stability of E2 antigens was tested by diluting 10 μg of protein into a 25-μl volume with 5× SYPRO Orange Protein Gel Stain (Thermo Fisher Scientific) in duplicate. The samples were then heated in an Mx300 qPCR System (Agilent Technologies) using the Stratagene MX PRO program in 0.5°C increments, starting at 25°C and ending at 95°C, for 1 min per temperature step. Fluorescence was read at the end of each increment in triplicate. Excitation was at 492 nm, and emission was at 610 nm. The Tm (in °C) was determined as the minimum of the negative first derivative of the melting curve.
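The Tm readout described above (minimum of the negative first derivative of the melt curve) can be sketched numerically. The sigmoidal melt curve below is synthetic, with an assumed midpoint of 71°C chosen only to mirror the Δ123A7 value; it is not real DSF data:

```python
# Tm extraction as described in the DSF method: Tm is taken as the minimum
# of -dF/dT over the fluorescence melt curve. The curve here is a synthetic
# logistic unfolding transition centered at 71 C (an assumed value).
import numpy as np

temps = np.arange(25.0, 95.5, 0.5)                     # 0.5 C increments, 25-95 C
fluor = 1.0 / (1.0 + np.exp(-(temps - 71.0) / 2.0))    # synthetic melt curve

neg_dF = -np.gradient(fluor, temps)   # negative first derivative of fluorescence
tm = temps[np.argmin(neg_dF)]         # steepest fluorescence rise = Tm
print(f"Tm = {tm:.1f} C")
```

Because SYPRO Orange fluorescence rises on unfolding, the steepest increase in F(T) is the most negative point of -dF/dT, which is why the minimum (rather than maximum) of the negative derivative marks the transition midpoint.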
Immunizations
Guinea pigs (outbred tricolor) that were matched for gender, weight, and age were immunized subcutaneously with 100 μg of E2 protein in PBS 6.8 in a 1:1 (v/v) mix with AddaVax™ adjuvant (InvivoGen, San Diego, CA) four times at 3-week intervals. A negative control group was immunized as above with a 1:1 (v/v) mix of PBS 6.8 and adjuvant. Two weeks after the final dose, blood was collected by terminal cardiac puncture and allowed to clot for serum preparation. Sera were stored at 4°C, with heat inactivation at 56°C for 30 min prior to use in the case of the neutralization assays. Animals were housed and all procedures were performed at the Preclinical, Imaging, and Research Laboratories, South Australian Health and Medical Research Institute (Gilles Plains, Australia). All animal experiments were performed in accordance with the eighth edition of the Australian Code for the Care and Use of Animals for Scientific Purposes and were approved by the SAHMRI Animal Ethics Committee, project number SAM210.
ELISA
Direct ELISA-The relative reactivity of E2 antigens to mAbs was assessed by ELISA as described previously (24) except that E2 (250 ng/well) was directly coated onto the plastic surface. Half-log serial dilutions of mAbs were incubated for 1 h and detected using horseradish peroxidase-labeled antibody (Dako, Glostrup, Denmark) against the appropriate primary antibody species. Color reactions were measured with a Multiskan Ascent plate reader (Thermo Electron, Waltham, MA). mAb binding to different antigens was compared by fitting curves with nonlinear regression using Prism version 7 software, and titers were obtained by interpolation of optical density (OD) values 20-fold above that of background, as defined by binding to BSA. Binding was then expressed as -fold difference compared with monomeric ⌬123. The relative reactivity of guinea pig serum antibodies to the indicated E2 antigens was also determined by direct ELISA as described above. A cut-off OD value of 25-fold above background, as defined by signal in the absence of sera, was used to determine the dilution titer for each individual guinea pig serum.
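The endpoint-titer readout above reports the serum dilution at which the fitted OD curve crosses a cutoff of 25-fold above background. A simplified sketch of that interpolation, using log-linear interpolation between measured points in place of the nonlinear regression used in the paper; all OD and dilution values are invented:

```python
# Sketch of an ELISA endpoint-titer readout: find the dilution at which OD
# crosses 25x background. Log-linear interpolation stands in for the paper's
# fitted-curve interpolation; data values are illustrative placeholders.
import numpy as np

dilutions = np.array([1e2, 10**2.5, 1e3, 10**3.5, 1e4, 10**4.5, 1e5])
od        = np.array([2.0, 1.8, 1.4, 0.9, 0.45, 0.15, 0.05])  # decreasing OD
background = 0.01
cutoff = 25 * background  # 0.25 OD

# np.interp needs increasing x, so reverse the decreasing OD series.
log_titer = np.interp(cutoff, od[::-1], np.log10(dilutions)[::-1])
print(f"endpoint titer = 10^{log_titer:.2f}")
```

Working in log10(dilution) matches how serial half-log dilutions are spaced, so linear interpolation on that axis is a reasonable stand-in between adjacent points.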
Capture ELISA—To determine the relative reactivity of E2 antigens to CD81, ELISA plates were coated with CD81 LEL, blocked, and incubated with serial dilutions of E2 antigens for 2 h. The amount of E2 antigen captured was measured using an anti-His6 mAb. Reactivity of guinea pig sera to peptides based on H77c sequences for epitope I (408KQNIQLINTNGSWHINSTALN428), epitope II (430NESLNTGWLAGLFYQHKFNSSG451), and epitope III (523GAPTYSWGANDTDVFVLNNTRPPLGNW549) was also determined by capture ELISA. Plate-bound avidin was used to capture the biotinylated peptide (1 μg/ml for 1 h), followed by the addition of serial dilutions of guinea pig sera and subsequent steps as outlined under "Direct ELISA." In this case, a cut-off OD value of 20-fold above background (defined by the signal in the absence of sera) was used to determine the titer.
Competitive ELISA—The ability of antibodies within immune sera to compete with mAbs or CD81 LEL for binding to monomeric RBD was measured in antibody-competition or E2-CD81 inhibition assays as described previously (25). Inhibitory titers were expressed as the reciprocal dilution of immune serum that reduces the binding reaction being competed by 50% (inhibitory dilution 50, ID50), with binding in the absence of sera taken as 100%.
BLI
BLI-based measurements were determined using an Octet RED System (ForteBio, Fremont, CA). Antibodies were diluted in 1× kinetic buffer to 10 μg/ml and immobilized onto anti-human IgG Fc capture biosensors (ForteBio). Kinetics assays were carried out at 30 °C using standard kinetics acquisition rate settings (5. Fitting curves were constructed using ForteBio Data Analysis 10.0 software with a 1:1 binding model, and double-reference subtraction was used for correction.
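The 1:1 (Langmuir) binding model used to fit the sensorgrams has a simple closed-form association phase. The sketch below states that generic model; it is not ForteBio's fitting code, and the rate constants and concentration in the example are invented for illustration.

```python
import math

def one_to_one_association(t, conc, ka, kd, rmax):
    """Association-phase response of the 1:1 binding model:
    R(t) = Req * (1 - exp(-kobs * t)),
    with kobs = ka*conc + kd and Req = rmax * conc / (conc + KD),
    where KD = kd / ka."""
    kobs = ka * conc + kd
    req = rmax * conc / (conc + kd / ka)
    return req * (1.0 - math.exp(-kobs * t))

# with analyte held exactly at its KD (conc = kd/ka), the plateau is rmax/2
r = one_to_one_association(t=10_000.0, conc=1e-8, ka=1e5, kd=1e-3, rmax=100.0)
```

Fitting software estimates ka and kd from curves like this at several analyte concentrations and reports KD = kd/ka.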
Neutralization assays
HCV neutralization assays were performed as described previously (53). Briefly, HEK293T cells were co-transfected with a 1:1 (w/w) ratio of pE1E2H77c and pNL4-3.LUC.R-E- to produce HCV H77pp (54, 55). 1:40 dilutions of guinea pig sera were added to H77pp and incubated for 1 h at 37 °C before addition to Huh7.5 cells. After incubation for 4 h, the inocula were removed, and cells were incubated in fresh media for 72 h. Following lysis in cell culture lysis buffer (Promega, Madison, WI), luciferase activity in clarified lysates was measured using a luciferase substrate (Promega) on a CLARIOstar microplate reader fitted with luminescence optics (BMG Lab Technologies). Infectious cell culture-derived genotype 2a (J6), 3a (S52), and 5a (SA13) HCVcc were produced by transfecting Huh7.5s with in vitro-transcribed RNA by electroporation as described previously (25). NAb assays were performed by mixing HCVcc with 1:40 dilutions of guinea pig sera as described above, with incubation for 42 h after removal of the inocula. Luciferase activity in cell lysates was measured using Renilla luciferase substrate (Promega).
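Luciferase readouts like these are conventionally reduced to percent neutralization against a virus-plus-no-serum control. The formula below is that standard reduction, not something specified in the text, and the RLU values in the example are hypothetical.

```python
def percent_neutralization(rlu_serum, rlu_virus_only, rlu_cells_only=0.0):
    """Percent neutralization from raw luminescence (RLU):
    100 * (1 - background-corrected serum signal / background-corrected
    virus-only signal)."""
    return 100.0 * (1.0 - (rlu_serum - rlu_cells_only)
                    / (rlu_virus_only - rlu_cells_only))

pn = percent_neutralization(rlu_serum=2_050, rlu_virus_only=10_050,
                            rlu_cells_only=50)
```

Here an 80% drop in background-corrected signal corresponds to 80% neutralization.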
Statistics
Statistical between-group comparisons of guinea pig sera were performed where group size was sufficient (n = 8; Δ123-HMW, Δ123r, and Δ123A7r groups). Curves were fitted by nonlinear regression using one-site-specific binding with a Hill slope. Data were statistically compared using the nonparametric Kruskal-Wallis test with Dunn's multiple comparisons. Correlations between parameters were tested using the nonparametric Spearman test on combined data from the sera of all E2-vaccinated animals. For both tests, a p value of <0.05 was considered significant. All statistical analyses were performed using Prism version 7 software.
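The Spearman test used for the correlation analyses is simply the Pearson correlation computed on rank-transformed data. A minimal sketch of that idea (with no tie correction, unlike Prism's implementation) is:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation, assuming no tied values:
    rank-transform each variable, then take the Pearson correlation."""
    def to_ranks(a):
        order = np.argsort(a)
        ranks = np.empty(len(a))
        ranks[order] = np.arange(1, len(a) + 1)
        return ranks
    rx = to_ranks(np.asarray(x, dtype=float))
    ry = to_ranks(np.asarray(y, dtype=float))
    return float(np.corrcoef(rx, ry)[0, 1])

rho = spearman_rho([1, 2, 3, 4, 5], [1, 4, 9, 16, 25])  # monotone data
```

Any strictly monotone relationship gives rho = 1 regardless of curve shape, which is why a rank test suits titer data that span orders of magnitude.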
Data availability
Data will be shared upon request to the corresponding author, Heidi Drummer.
Extrinsic orbital angular momentum of entangled photon-pairs in spontaneous parametric down-conversion
Starting from the standard Hamiltonian describing the optical non-linear process of spontaneous parametric down-conversion, we theoretically show that the generated entangled photon-pairs carry non-negligible orbital angular momentum in the degrees of freedom of relative movement in the type-II cases due to spatial symmetry breaking. We also show that the orbital angular momentum carried by photon-pairs in these degrees of freedom escapes detection in the traditional measurement scheme, which demands development of new techniques for further experimental investigations.
I. INTRODUCTION
The nonlinear optical process of spontaneous parametric down-conversion (SPDC) serves as an important toolkit to produce entangled photon-pairs. The entanglement may involve energy, linear momentum, and angular momentum. Recently, it was demonstrated that photon-pairs generated in SPDC processes are entangled in another physical variable, orbital angular momentum (OAM), in which the states of high-dimensional entanglement [1] and hyper-entanglement [2] can be prepared.
Although OAM of light is not a true angular momentum [3,4], it is a measurable physical quantity under paraxial approximation. Therefore, the total angular momentum of light may be broken into three separate measurable parts: spin angular momentum, intrinsic OAM, and extrinsic OAM [5]. The spin of light is determined by the polarization of the light beam, the intrinsic OAM of light is associated with the transverse phase fronts of light beams [6], and extrinsic OAM of light is related to the relative movement of the center of the light beam with respect to some external point in space. As one shall see below, all previous investigations on OAM entanglement generated in SPDC processes are limited to intrinsic OAM of light. However, we show in this paper that the down-converted photon-pairs in type-II SPDC processes carry non-negligible extrinsic OAM.
The extrinsic OAM carried by photon-pairs in SPDC processes is key to understanding the relation of spatial symmetry to the OAM conservation rule in SPDC processes [7]. Moreover, our study may open a new path to exploiting the orbital angular momentum of light in practical applications. In section II, we introduce the states of light carrying OAM and derive the mathematical descriptions for intrinsic and extrinsic OAM of light. Section III calculates the traditional entangled states of light in OAM created in SPDC processes, which are states entangled in intrinsic OAM. We further show that, as a result of azimuthal symmetry breaking, photon-pairs created in type-II SPDC processes carry non-negligible extrinsic OAM, which is beyond the detection scope of the traditional scheme [1,8]. Finally, brief discussions are given in section IV, followed by a summarizing conclusion.
*Electronic address: sfeng@ece.northwestern.edu
II. OAM OF LIGHT
When the conservation of a vector is concerned, one usually breaks the vector into components along, for example, the x, y, and z-axes. Conservation of the vector means that all of these components are conserved; if one of the components is not conserved, the vector is said to be non-conserved. The same argument applies to the cases of OAM (non-)conservation in SPDC processes. To examine whether the OAM is conserved along the pump propagation direction (the z-direction) in SPDC processes, one needs to calculate the state of the down-converted light beams that carry OAM, the z-components of which are, say, $L_z$. An elegant approach exploiting orbital Poincaré spheres to study this general case was provided in [9], where the z-component of the OAM carried by photons is, however, not guaranteed to be $n\hbar$ ($n$ is an integer). We quantize the studied field along the z-axis such that the z-component of the OAM of light is $n\hbar$ ($n$ is any integer). In quantum theory, the eigenstate of the OAM operator $\hat{L}_z$ can be found by solving the eigenvalue equation
$$\hat{L}_z|\psi\rangle = \alpha|\psi\rangle,$$
which leads to a solution for a one-photon field in free space as follows [4],
$$|\psi_1(t)\rangle = \sum_{\mathbf{k}} g(\mathbf{k},t)\, e^{il\phi_k}\, \hat{a}^\dagger(\mathbf{k})|0\rangle. \qquad (1)$$
Here $\alpha = l\hbar$ is the z-component of the OAM carried by the photon ($l$ is any integer), and $\phi_k$ is the azimuthal angle of the wave vector $\mathbf{k}$. The $g(\mathbf{k},t)$ is a function independent of the azimuthal angle $\phi_k$ and can therefore be written in the form $g(p_\rho,k_z,t)$, where $p_\rho = \sqrt{k^2 - k_z^2}$ is the amplitude of the transverse component $\mathbf{p}_\rho$ of $\mathbf{k}$. The $\hat{a}^\dagger(\mathbf{k})$ is the photon creation operator. We note that the freedom of the field polarization is neglected, and we emphasize that the $l$ in Eq. (1) has a physical meaning (OAM along the z-axis) essentially different from what is meant by the same notation in [10,11,12], where $l$ represents the OAM carried by photons along the axes dictated by the light-beam central vectors.
The one-photon detection amplitude is defined as $\varphi_1^l(\mathbf{r}) \equiv \langle 0|\hat{E}^{(+)}(\mathbf{r})|\psi_1(t)\rangle$, where $\hat{E}^{(+)}(\mathbf{r}) = \sum_{\mathbf{k}} C_k\, \hat{a}(\mathbf{k})\, e^{i(\mathbf{k}\cdot\mathbf{r}-\omega_k t)}$, $\hat{a}(\mathbf{k})$ is the annihilation operator, and $C_k$ is a coefficient dependent on $k = |\mathbf{k}|$. For the one-photon field in the eigenstate of the operator $\hat{L}_z$ it reads
$$\varphi_1^l(\mathbf{r}) = \sum_{\mathbf{k}} C_k\, g(\mathbf{k},t)\, e^{il\phi_k}\, e^{i(\mathbf{k}\cdot\mathbf{r}-\omega_k t)}.$$
With the beam invariants $\mathbf{p}_\rho = \mathbf{k} - k_z\hat{\mathbf{z}}$ and $\omega$ introduced [13], in a plane ($z = z_0$) transverse to the z-axis, the one-photon detection amplitude is then
$$\varphi_1^l(\mathbf{q}_\rho) = \int d^2p_\rho\, h(p_\rho,t)\, e^{il\phi_p}\, e^{i\mathbf{p}_\rho\cdot\mathbf{q}_\rho},$$
where $h(p_\rho,t) = \sum_\omega C_k\, g(p_\rho,k_z,t)\, e^{i(k_z z_0 - \omega t)}$, in which $k_z = \sqrt{(\omega/c)^2 - p_\rho^2}$, $\mathbf{q}_\rho = \mathbf{r} - z\hat{\mathbf{z}}$, and $\phi_p = \phi_k$. We point out that the total OAM, the z-component of which is $L_z = l\hbar$ ($l$ is any non-zero integer), carried by a photon will be arbitrarily fractional if the photon propagates along an arbitrary direction. An extensive theoretical study of fractional OAM can be found in [14].
Similarly, the eigenstate of the OAM operator $\hat{L}_z$ for two one-photon fields satisfies the same eigenvalue equation, which leads to a solution in free space of the form
$$|\psi_2(t)\rangle = \sum_{\mathbf{k}_s,\mathbf{k}_i} g(\mathbf{k}_s,\mathbf{k}_i,t)\, e^{i(l_s\phi_{k_s}+l_i\phi_{k_i})}\, \hat{a}^\dagger(\mathbf{k}_s)\,\hat{a}^\dagger(\mathbf{k}_i)|0\rangle.$$
Here $\alpha = (l_s+l_i)\hbar$ is the z-component of the OAM carried by the photons ($l_{s,i}$ are any integers) in both fields, and $\phi_{k_{s,i}}$ are the azimuthal angles of the wave vectors $\mathbf{k}_{s,i}$. In planes ($z_s = z_{s,0}$, $z_i = z_{i,0}$) transverse to the z-axis, the two-photon detection amplitude, Eq. (4), takes a form analogous to the one-photon case, carrying the phase factor $e^{i(l_s\phi_s+l_i\phi_i)}$, where $\phi_{s,i} = \phi_{k_{s,i}}$ and $h_{s,i}(p_{s,i},t)$ are functions independent of the azimuthal angles $\phi_{s,i}$. As we shall show below, the traditional scheme [1,15] measures the OAM of the down-converted beams in SPDC processes in the degrees of freedom of center-of-momentum movement, described by $\mathbf{q}_+ = \mathbf{q}_s + \mathbf{q}_i$ and $\mathbf{p}_+ = \mathbf{p}_s + \mathbf{p}_i$. It is therefore useful to rewrite Eq. (4) in terms of the joint variables $\mathbf{q}_+$, $\mathbf{p}_+$, $\mathbf{q}_- = \mathbf{q}_s - \mathbf{q}_i$, and $\mathbf{p}_- = \mathbf{p}_s - \mathbf{p}_i$, giving Eq. (5), where $\phi_\pm$ are the azimuthal angles of the vectors $\mathbf{p}_\pm$ and we exploited $p_\pm e^{i\phi_\pm} = p_s e^{i\phi_s} \pm p_i e^{i\phi_i}$. Eq. (5) is similar to Eq. (4), in which $e^{il_s\phi_s}$ and $e^{il_i\phi_i}$ are associated with the z-components of the OAM of each photon in the two one-photon fields. Following the same rule, one can associate the term $e^{il_+\phi_+} \equiv e^{i(m+n_s+n_i)\phi_+}$ in Eq. (5) with the z-component of the intrinsic part of the OAM ($l_+\hbar$ for two photons) carried by the two photons in the degrees of freedom of center-of-momentum movement (described by $\mathbf{p}_+$ and $\mathbf{q}_+$), and connect $e^{il_-\phi_-} \equiv e^{i[(l_s+l_i)-(m+n_s+n_i)]\phi_-}$ to the extrinsic part of the OAM ($l_-\hbar$) of the two photons in the degrees of freedom of relative movement (described by $\mathbf{p}_-$ and $\mathbf{q}_-$) of one photon with respect to the other. Obviously, $l_+ + l_- = l_s + l_i$ always holds, which shows that the total OAM of the two photons can always be decomposed into two separate parts, the intrinsic part and the extrinsic part, in two independent sets of degrees of freedom of joint movement, proving the statements given in the introduction.
In principle, neither the intrinsic OAM nor the extrinsic OAM is negligible when one considers the total OAM of two beams.
III. EXTRINSIC OAM OF LIGHT IN TYPE-II SPDC PROCESSES
For a type-I SPDC process, where the rotational symmetry around the pump direction holds [1], the state vector of the down-converted light is calculated in [4] to first-order approximation. Here we consider the general case, in which the rotational symmetry may be broken in the SPDC process. If one assumes a classical pump beam and two linearly polarized down-converted modes (signal and idler) that are initially empty, the Hamiltonian governing a (type-I or type-II) SPDC process in the interaction picture, Eq. (6), is an integral over the nonlinear interaction volume $V_I$ involving $E(\mathbf{r},t)$, the electric field associated with the pump beam [4,16,17]. Subscripts s and i denote signal and idler, respectively; $\hat{a}^\dagger(\mathbf{k}_s)$ and $\hat{a}^\dagger(\mathbf{k}_i)$ are the creation operators for the down-converted modes, and their wave vectors $\mathbf{k}_{s,i}$ are evaluated inside the medium. The coefficient $C$ involves $\chi$, the second-order nonlinearity tensor of the interaction medium, and $\mathbf{e}$, the unit vector of linear polarization of the electric field.
We consider a Laguerre-Gaussian (LG) pump beam propagating along $\hat{\mathbf{z}}$ with the principal component polarized along $\hat{\mathbf{x}}$, written in cylindrical coordinates [4,18] as Eq. (7). Here $z_R$ is the Rayleigh length, $w(z) = w_0\sqrt{1+z^2/z_R^2}$, and $w_0$ is the beam radius at the waist $z = 0$. $l$ is the winding number of the pump mode and $p$ the number of radial nodes. The subscript P refers to the pump beam, and $q(z) = z - iz_R$. Plugging Eq. (7) into Eq. (6) gives the interaction Hamiltonian, Eq. (8), expressed with the beam invariants [13,19] that are constant along beam propagation: the angular frequencies $\omega_{s,i}$ and the transverse components $\mathbf{p}_{s,i}$ of the wave vectors $\mathbf{k}_{s,i}$.
Under the assumption that the average radius of the beam is small compared to the transverse section of the nonlinear medium, and that the medium length $l_c$ is much smaller than the Rayleigh range $z_R$ of the pump beam, we obtain Eqs. (9)-(10) [4,20]. We note that the spatial symmetry of the Hamiltonian (8) is primarily dictated by the term $W(\Delta k_z)$ through Eq. (9). The two-photon wave function of the down-converted light then reads [4], to first-order approximation, Eq. (11), where $\Delta\omega = \omega_s + \omega_i - \omega_P$ and $T(\Delta\omega) = \exp[i\Delta\omega(t - t_{\mathrm{int}}/2)]\,\sin(\Delta\omega t_{\mathrm{int}}/2)/(\Delta\omega/2)$, with $t_{\mathrm{int}}$ being the interaction time. Using Eq. (11), we find the two-photon detection amplitude of the down-converted beams, where $C_k = \sqrt{\hbar\omega_k/(2\varepsilon_0 V)}$ and $V$ is the quantization volume. Using Eqs. (9)-(10) and converting the sums over $\mathbf{p}_s$, $\mathbf{p}_i$, $\omega_s$, $\omega_i$ into integrals, we obtain the transverse profile of $\varphi_2(\mathbf{r}_s,\mathbf{r}_i)$ in the transverse planes $z_s = z_{0,s}$ and $z_i = z_{0,i}$, Eq. (12).
For the sake of simplicity, we assume $p_s \approx p_i$ [13] and $z_{0,s} = z_{0,i} \equiv z'_0$, $t_s = t_i \equiv t$, $\bar\omega_s = \bar\omega_i \equiv \bar\omega$ (the frequency-degenerate case), where $\bar\omega_{s,i}$ are the central angular frequencies of the down-converted beams. Usually, $T(\Delta\omega)$ can be approximated by a delta function $\delta(\Delta\omega)$ times $t_{\mathrm{int}}$ and, under the paraxial approximation, the phase factor $\phi_{s,i} \equiv k_{z,s}z_{0,s} - \omega_s t_s + k_{z,i}z_{0,i} - \omega_i t_i$ may be considered constant over the integration range of the angular frequency, with the frequency-matching constraint $\omega_s + \omega_i = \omega_P$ applied. Then, with the joint variables $\mathbf{p}_\pm$ and $\mathbf{q}_\pm$, Eq. (12) can be rewritten as Eqs. (13)-(14), with the global phase term $e^{i\omega_P(z'_0/c - t)}$ dropped. The dependence of $D(\omega_s)$ on $\mathbf{p}_\pm$ is considered weak and is neglected in our analysis.
In terms of the beam invariants, we evaluate the phase mismatch $\Delta k_z$ to first-order approximation [13], Eq. (15), where the signal is assumed to be the e-beam. Here $\bar\nu$, $D$, $K$, and $N$ are parameters that depend on the nonlinear medium and are defined in Ref. [13]. In the last step, it is assumed that $|\mathbf{p}_+| \ll |\mathbf{p}_-|$, which is usually valid in non-collinear configurations except in the very rare case where the detected down-converted photon-pairs nearly co-propagate with the pump. In this case, the dependence of $W(\Delta k_z)$ on $\mathbf{p}_+$ in Eq. (13) is negligible. At this point, one might argue that an important and widely employed experimental configuration is non-collinear phase matching in which the x-components of $\mathbf{p}_+$ and $\mathbf{p}_-$ are comparable, that is, where the two down-converted light cones cross each other. We now show that the $\mathbf{p}_+\cdot\hat{\mathbf{x}}$ term can be neglected even if $\mathbf{p}_+\cdot\hat{\mathbf{x}}$ is comparable to $\mathbf{p}_-\cdot\hat{\mathbf{x}}$.
At the crossings of the two down-conversion cones, both $\mathbf{p}_\pm\cdot\hat{\mathbf{x}}$ may be comparable to each other because both are close to zero relative to the cases where the two cones do not cross; both terms are therefore small compared to the $|\mathbf{p}_-|$ term and could be omitted in Eq. (15). To prove this quantitatively, we take the experimental example of Ref. [21], where the effective phase-matching angle is $\theta_{pm} = 49.63°$. One easily obtains $K = 14.38\,\mu\mathrm{m}^{-1}$ and $N = -0.068$ using the formulas in [13]. According to the numerical estimation in [13], $|\mathbf{p}_-|$ is of order $1\,\mu\mathrm{m}^{-1}$ at the crossings, and since $|\mathbf{p}_+\cdot\hat{\mathbf{x}}| \approx |\mathbf{p}_-\cdot\hat{\mathbf{x}}|$ there, both the $\mathbf{p}_+\cdot\hat{\mathbf{x}}$ and $\mathbf{p}_-\cdot\hat{\mathbf{x}}$ terms are negligible at the crossings. Mathematically, it does no harm to keep one negligible term and drop the other in Eq. (15); choosing to keep the $\mathbf{p}_-\cdot\hat{\mathbf{x}}$ term makes Eq. (15) general enough to cover all cases, whether or not $\mathbf{p}_\pm\cdot\hat{\mathbf{x}}$ are comparable to each other.
Accordingly, the two-photon detection amplitude $\varphi_2(\mathbf{q}_+,\mathbf{q}_-)$ can be written as a product of two separate terms, Eqs. (16)-(19). Eq. (16) can be generalized to the type-I case, and its Fourier transform reads as Eq. (20) [22] if one denotes $F_2(\mathbf{p}_+,\mathbf{p}_-)$ as the Fourier transform of $\varphi_2(\mathbf{q}_+,\mathbf{q}_-)$. Eq. (16), or Eq. (20), reveals the following physics. The movement of the down-converted photon-pairs in the transverse plane is de-coupled into two independent sets of degrees of freedom: one for the center-of-momentum movement, described by the joint variables $\{\mathbf{q}_+,\mathbf{p}_+\}$, and the other for the relative movement of each photon with respect to its twin, delineated by $\{\mathbf{q}_-,\mathbf{p}_-\}$. In the degrees of center-of-momentum-movement freedom, the photon-pairs carry intrinsic OAM (denoted $l_+\hbar$ per pair) always equal to that of the pump photon, i.e., $l_+ = l$, as stated by Eq. (14), giving rise to entanglement in intrinsic OAM in these degrees of freedom. In the degrees of relative-movement freedom, as attested by Eqs. (18)-(19) and shown below in detail, the extrinsic OAM ($l_-\hbar$ per pair) carried by the photon-pairs depends on the azimuthal symmetry of $F_-(\mathbf{p}_-)$, dictated by $W[\Delta k_z(\omega_s,\mathbf{p}_-)]$. Mathematically, one can always expand Eq. (19) in the form of a Fourier series, Eq. (21), where each non-zero term of order $m$ contributes extrinsic OAM $m\hbar$ per photon-pair, in analogy with Eq. (5). In a type-II SPDC process, $W[\Delta k_z(\omega_s,\mathbf{p}_-)]$ (and hence the Hamiltonian $\hat{H}_{\mathrm{int}}$) lacks azimuthal symmetry under usual experimental conditions, $l_c \geq 0.5$ mm [13] [Fig. 1(a)]. As stated by Eq. (19), $F_-(\mathbf{p}_-)$ must then be a function of the azimuthal angle $\phi_-$, which requires that at least one higher-order term in Eq. (21) be non-zero. Each of these non-zero higher-order terms ($l_- = m \neq 0$) contributes non-negligible extrinsic OAM to the down-converted photon-pairs in the degrees of relative-movement freedom [Fig. 1(b)].
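The Fourier-series argument above can be illustrated numerically: an azimuthally symmetric amplitude has only the m = 0 coefficient, while any azimuthal asymmetry necessarily populates higher orders, each carrying extrinsic OAM mℏ per pair. The profile functions below are toy stand-ins chosen for illustration, not the actual phase-mismatch function W[Δk_z] of a type-II crystal.

```python
import numpy as np

def azimuthal_weights(f, m_max=4, n_phi=4096):
    """Return |c_m|^2 for the azimuthal Fourier coefficients
    c_m = (1/2pi) * integral of f(phi) * exp(-i m phi) d(phi),
    computed as a mean over a uniform phi grid."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    vals = f(phi)
    return {m: abs(np.mean(vals * np.exp(-1j * m * phi))) ** 2
            for m in range(-m_max, m_max + 1)}

# azimuthally symmetric profile ("type-I-like"): only m = 0 survives
w_sym = azimuthal_weights(lambda p: np.ones_like(p))
# azimuthally asymmetric profile ("type-II-like"): higher orders appear
w_asym = azimuthal_weights(lambda p: np.exp(0.5 * np.cos(p)))
```

For the symmetric profile every weight with m ≠ 0 vanishes, while the asymmetric profile spreads weight into m = ±1, ±2, ..., mirroring how azimuthal symmetry breaking generates non-zero extrinsic-OAM components.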
On the contrary, the phase mismatch in type-I SPDC processes is, to first-order approximation, azimuthally invariant [13] (non-collinear configurations are assumed). Thereby, in the Fourier expansion of Eq. (21), only the zero-order term survives, with all higher-order terms null. In other words, the down-converted photon-pairs generated in type-I SPDC processes carry, in principle, no extrinsic OAM, owing to the azimuthal symmetry of the non-linear process.
IV. DISCUSSIONS
In the foregoing section, we showed theoretically that the down-converted beams created in type-II SPDC processes carry non-negligible extrinsic OAM in the degrees of freedom of relative movement. Owing to a misreading of the thin-medium approximation, the existence of this extrinsic portion of the OAM in type-II SPDC processes has been theoretically overlooked for years [10,11].
In previous theoretical studies [10,11], $W(\Delta k_z)\,l_c^{-1}$, denoted $\Delta(\mathbf{p}_-)$ in [10], was either approximated by unity [11] or considered a very broad function that cuts out modes with large transverse wave vectors [10]. These treatments of $W(\Delta k_z)\,l_c^{-1}$ are valid only in the thin-medium approximation, which demands a medium thickness of order 10 μm, as shown in Fig. 2. In that case, $W(\Delta k_z)\,l_c^{-1}$ is approximately azimuthally symmetric even for type-II SPDC processes. In practice, however, all experiments involving SPDC processes are far beyond the validity of the thin-medium approximation, and $W(\Delta k_z)\,l_c^{-1}$ does not necessarily possess azimuthal symmetry [Fig. 1(a)]. Concerning experimental investigations, one might wonder why the extrinsic OAM of the down-converted beams in type-II SPDC processes has never been observed. The reason is that the existing OAM measurement techniques [1,15] are not suitable for measuring the extrinsic part of the OAM of the down-converted beams. To see this, let us first examine how the traditional scheme measures intrinsic OAM.
One can expand Eq. (14), which describes the center-of-momentum movement of the down-converted photon-pairs (each photon-pair carries intrinsic OAM $l\hbar$, i.e., $l_+ = l$), where $p''_s e^{i\phi''_s} = p_s e^{i\phi_s} - p_0 e^{i\phi_0}$ and $p''_i e^{i\phi''_i} = p_i e^{i\phi_i} + p_0 e^{i\phi_0}$ represent new vectors centered at $\pm\mathbf{p}_0$ (or $\pm p_0 e^{i\phi_0}$, with $\phi_0$ the azimuthal angle of $\mathbf{p}_0$), respectively, in the transverse planes. So, in the degrees of center-of-momentum-movement freedom, if the signal beam is projected into a mode centered at $\mathbf{p}_0$ carrying OAM $l_s\hbar$ ($m = l_s$) per photon, its twin beam is simultaneously projected into another mode, centered at $-\mathbf{p}_0$, carrying OAM $(l - l_s)\hbar$ per photon, which can be measured by a phase mask centered at $-\mathbf{p}_0$ combined with an SMF [1,8], as illustrated in Fig. 3(a).
[Fig. 3 caption: Illustrating how the extrinsic OAM carried by photon-pairs escapes detection in the traditional OAM measurement scheme. In the degrees of relative-movement freedom, detection of one photon by one set of detection devices centered at $\mathbf{p}_0$ (donut-like marker) projects its twin into a mode (dashed circle) also centered at $\mathbf{p}_0$, which is outside the detection scope of the other set of devices centered at $-\mathbf{p}_0$ (blue cross).]
Similarly, in the degrees of relative-movement freedom, one can expand Eq. (21) (assuming each photon-pair carries extrinsic OAM $l'\hbar$, i.e., $l_- = l'$, in these degrees of freedom due to symmetry breaking), where $p'_{s,i} e^{i\phi'_{s,i}} = p_{s,i} e^{i\phi_{s,i}} - p_0 e^{i\phi_0}$ represent new vectors, both centered at $\mathbf{p}_0$, in the transverse planes. In the degrees of relative-movement freedom, if the signal beam is projected into a mode centered at $\mathbf{p}_0$ carrying OAM $l_s\hbar$ ($m = l_s$) per photon, its twin beam is simultaneously projected into another mode also centered at $\mathbf{p}_0$, carrying OAM $(l' - l_s)\hbar$ per photon, which nevertheless cannot be measured by a phase mask centered at $-\mathbf{p}_0$ combined with an SMF (Fig. 3b), since that arrangement measures only the intrinsic part of the OAM in the degrees of center-of-momentum-movement freedom in the traditional scheme [1,8].
In conclusion, we have theoretically shown the existence of non-negligible extrinsic OAM carried by the down-converted beams generated in type-II SPDC processes in the degrees of freedom of relative movement, due to azimuthal symmetry breaking. We have explained how this extrinsic OAM escapes detection in the traditional OAM measurement scheme. New OAM measurement techniques therefore need to be developed if the extrinsic OAM is to be studied experimentally in SPDC processes.
Zn-Promoted C–H Reductive Elimination and H2 Activation via a Dual Unsaturated Heterobimetallic Ru–Zn Intermediate
Reaction of [Ru(PPh3)3HCl] with LiCH2TMS, MgMe2, and ZnMe2 proceeds with chloride abstraction and alkane elimination to form the bis-cyclometalated derivatives [Ru(PPh3)(C6H4PPh2)2H][M′] where [M′] = [Li(THF)2]+ (1), [MgMe(THF)2]+ (3), and [ZnMe]+ (4), respectively. In the presence of 12-crown-4, the reaction with LiCH2TMS yields [Ru(PPh3)(C6H4PPh2)2H][Li(12-crown-4)2] (2). These four complexes demonstrate increasing interaction between M′ and the hydride ligand in the [Ru(PPh3)(C6H4PPh2)2H]− anion following the trend 2 (no interaction) < 1 < 3 < 4 both in the solid-state and solution. Zn species 4 is present as three isomers in solution including square-pyramidal [Ru(PPh3)2(C6H4PPh2)(ZnMe)] (5), that is formed via C–H reductive elimination and features unsaturated Ru and Zn centers and an axial Z-type [ZnMe]+ ligand. A [ZnMe]+ adduct of 5, [Ru(PPh3)2(C6H4PPh2)(ZnMe)2][BArF4] (6) can be trapped and structurally characterized. 4 reacts with H2 at −40 °C to form [Ru(PPh3)3(H)3(ZnMe)], 8-Zn, and contrasts the analogous reactions of 1, 2, and 3 that all require heating to 60 °C. This marked difference in reactivity reflects the ability of Zn to promote a rate-limiting C–H reductive elimination step, and calculations attribute this to a significant stabilization of 5 via Ru → Zn donation. 4 therefore acts as a latent source of 5 and this operational “dual unsaturation” highlights the ability of Zn to promote reductive elimination in these heterobimetallic systems. Calculations also highlight the ability of the heterobimetallic systems to stabilize developing protic character of the transferring hydrogen in the rate-limiting C–H reductive elimination transition states.
[Ru(PPh3)(C6H4PPh2)2H][Li(THF)2] (1).
To an agitated suspension of … (t, 2JPP = 20 Hz, C6H4PPh2 (cis to RuH)), −27.9 (t, 2JPP = 20 Hz, C6H4PPh2 (trans to RuH)). These data are summarized in Table S4 and Figures S9–S15. The reaction was monitored further at 273 K and finally at 298 K; these data are summarized in Table S5. To visualize the color change over the course of the reaction more clearly, a repeat run was carried out in a J. Young's resealable ampule. The ampule was charged with a magnetic stir bar and a THF (3 mL) solution of 3 (56 mg, 0.054 mmol) under 1 atm of H2 at 263 K (ice/NaCl). The reaction mixture changed from red to colorless upon stirring but became red again once stirring was halted; this process was reproducible over several minutes (see the accompanying ESI video file). Ultimately, complete conversion of 4 to a mixture of fac-8-Zn and mer-8-Zn was confirmed by analysis of an aliquot of the solution by 1H and 31P NMR spectroscopy. [d] fac-8-Zn was observed in less than 1 mol % quantity.
S-4 Crystallographic Details
Data for 1, 3, and 4THF were collected using an Agilent SuperNova instrument (Cu-Kα radiation), while those for 2, 4THF/4ClTHF, 4, and 6 were obtained using an Agilent Xcalibur diffractometer and a Mo-Kα source. All experiments were conducted at 150 K, with the exception of that for compound 3 (vide infra). Using Olex2,7 all structures were solved with the olex2.solve8 structure solution program and subsequently refined using the SHELXL program.9 While the refinements were largely unremarkable, some points merit note, as follows.
The hydride ligand in 1 was located and refined without restraints. There is a little smearing of the electron density in the region of the THF ligands. However, efforts to model same were abandoned, on the basis that a stable disorder model could not be achieved without the inclusion of extensive restraints.
The asymmetric unit in 2 contains one cation, one anion and three molecules of benzene.
The hydride ligand in the former was located and refined subject to being a distance of 1.6 Å from Ru1. The cation was (surprisingly) ordered. There is evidence for some disorder in the guest benzene based on C83, but this was not modeled. The highest, residual, electron-density maximum is located at a chemically insignificant distance from the transition metal.
C60-C62 were modeled for 60:40 disorder in the structure of 3. Distance restraints were used in the disordered region. H1 was located and refined freely. Data were collected at 200 K, as the crystal was seen to crack and degrade at 150 K -possibly due to a phase transition.
In 4THF, the asymmetric unit comprises one molecule of the ruthenium-zinc complex and two regions of solvent. The hydride ligand (H1) was located and refined freely. Each of the two solvent regions contains one molecule of THF, with the moieties based on O2 and O3 being disordered in 70:30 and 65:35 ratios, respectively. Distance and ADP restraints were included in disordered regions to assist convergence. The assignment of the oxygen atoms in the solvent entities is somewhat tentative due to the smearing of the electron density in these regions.
The asymmetric unit in 4THF/4ClTHF contains one molecule of a ruthenium-zinc complex and two regions of solvent. The methyl ligand attached to Zn1 in the former was seen to be disordered in a 65:35 ratio with a chloride ligand, which means that the gross crystal contains two distinct compounds. The hydride ligand (H1) was located and refined freely. Each of the two solvent regions contains one molecule of THF, with both solvent molecules being disordered in a 65:35 ratio. Distance and ADP restraints were included in disordered regions to assist convergence. The assignment of the oxygen atoms in the solvent entities is somewhat tentative due to the smearing of the electron density in these regions.
The hydride ligand (H1) in the structure of 4 was located and refined without restraints.
One phenyl ring (attached to P3) was modeled to take account of 55:45 disorder. The component parts therein were treated as rigid hexagons in the final least-squares, and some soft ADP restraints were also included for partial-occupancy carbon atoms. The asymmetric unit in 6 comprises one cation and one anion. The fluorine atoms attached to C82 were modeled to take account of 60:40 disorder in the final least-squares, while refinement of those bonded to C86 …
[Figure S32: 13C NMR spectrum (THF + THF-d8). Figure S37: 13C NMR spectrum.]
Computational Details
DFT calculations were run with Gaussian 09 (Revision D.01).10 Ru, Mg, Zn, and P centers were described with the Stuttgart RECPs and associated basis sets,11 and 6-31G** basis sets were used for all other atoms.12 A set of d-orbital polarization functions was also added to P (ζd = 0.387),13 and together this combination is termed BS1. Optimizations employed the BP86 functional14 (Table S8). The effect of dispersion was also considered with Grimme's D3 parameter set with Becke-Johnson damping24 (Figure S49).
Cold Atom Optical Lattices as Quantum Analog Simulators for Aperiodic One-Dimensional Localization Without Disorder
Cold atom optical lattices allow for the study of quantum localization and mobility edges in a disorder-free environment. We predict the existence of an Anderson-like insulator with sharp mobility edges in a one-dimensional nearly-periodic optical lattice. We show that the mobility edge manifests itself as the early onset of pinning in center of mass dipole oscillations in the presence of a magnetic trap which should be observable in optical lattices.
Optical lattices incorporating ultracold atomic condensates are rapidly becoming ideal quantum systems for studying various model Hamiltonians developed earlier for studying solid-state phenomena. This is primarily due to the extraordinary level of precise tunability that experimentalists have achieved in controlling the parameters (e.g. hopping, interaction and lattice periodicity) of the optical lattice, which makes it possible for the cold atom optical lattice to operate as an ideal quantum analog simulator for various many-body condensed matter Hamiltonians. By contrast, ideal model Hamiltonians (e.g. Hubbard and Anderson models) often poorly describe solid-state systems since experimental control over complex condensed matter systems is, in general, quite limited. In addition solid-state systems are invariably contaminated by unknown disorder, defects, and impurities whose effects are not easy to incorporate in model Hamiltonians. The cold atom optical lattices are therefore becoming increasingly important in advancing our knowledge about the quantum phase diagram and crossover in model many-body Hamiltonians of intrinsic interest. Examples include: the Bose-Hubbard model [1], the Tonks-Girardeau gas [2], and the BEC-BCS crossover [3].
In addition to studying strong correlation effects (e.g. the superfluid-Mott insulator transition in the Bose-Hubbard model) in many-body Hamiltonians, cold atom optical lattices also offer ideal systems for studying quantum transport phenomena including ballistic quantum transport [4,5,6] and quantum localization [7,8,9,10]. The latter may be more generally classified as metal-insulator transition phenomena with a direct relationship to the solid-state. The distinction between a "metal" (i.e. a system with finite resistivity at zero temperature) and an "insulator" (i.e. a system with infinite zero temperature resistivity) is purely quantum. Broadly speaking, there are four classes of metal-insulator transitions in quantum lattice systems: Metal-band insulator transition in an ordered periodic lattice arising from the chemical potential moving into energy band gaps; interaction induced metal-insulator transition as in the Mott transition; disorder induced quantum localization (i.e. Anderson localization [11]); and quantum localization in aperiodic (but deterministic) potentials in disorder-free lattice systems.
In this paper, we establish that very general aspects of the metal-insulator transition phenomena (in the disorder-free environment) can be directly experimentally studied in aperiodic cold atom optical lattices with the tuning of experimental parameters leading to the observation of both band and quantum (Anderson-like) localization in the same system but in different parameter regimes. Such an experimental study of localization or insulating transitions in deterministic aperiodic systems is impossible in solid state lattice systems since disorder (which leads to direct Anderson localization) is invariably present in solid state systems, overwhelming any subtle localization effects arising from deterministic aperiodic potentials. In particular, all states are localized in one-dimensional systems in the presence of any disorder whereas one-dimensional aperiodic potentials allow for the existence of extended quantum eigenstates. This makes one-dimensional optical lattice systems particularly interesting from the perspective of localization studies in deterministic aperiodic potentials since such studies in the corresponding one-dimensional solid-state systems are essentially impossible due to disorder effects. We therefore consider aperiodic quantum localization in one-dimensional optical lattices, conclusively establishing the feasibility of studying this unusual phenomenon in cold atom optical lattices.
The single-particle quantum localization problem in a deterministic quasiperiodic potential (i.e. two lattice potentials with mutually incommensurate periods) has a long history [12,13]. In particular, localization properties have been extensively studied in the Harper (or, equivalently, Aubry) model which has an intriguing self-dual point where the eigenstates form a multifractal Cantor set spectra and are neither localized nor extended. Away from the dual point conventional wisdom dictates that all states, as a function of the chemical potential, are either all extended or all localized, depending on the mutual strengths of the potential and hopping terms. Such Harper model type quasiperiodic potentials therefore do not allow for the existence of a mobility edge separating extended states (above the mobility edge) from localized states (below the mobility edge) which is the hallmark of the Anderson localization transition in three-dimensional disordered systems. Central to our work is the conclusive theoretical demonstration of a class of one-dimensional optical lattice systems where the deterministic lattice potential does allow for the existence of a mobility edge in one dimension [14], which cannot happen through Anderson localization with disorder. This class of models distinguishes itself from other models discussed in the context of optical lattices [15] through the formation of a metal-insulator mobility edge rather than a metal-band edge. We find that: 1) Direct numerical simulation and an analytic WKB approximation provide conclusive evidence for a rare metal-insulator mobility edge in a one-dimensional model, the nearly periodic Harper model. 2) Transport measurements in suitably designed, one-dimensional optical lattices can exhibit the mobility edge.
We consider spinless fermions (or equivalently hard-core bosons sufficiently near the Tonks-Girardeau regime) in the lowest band of a one-dimensional, tight-binding lattice with external potentials: u_{n+1} + u_{n-1} + [V_n + V_D f_n + Ω n^2] u_n = E u_n, (1) where the amplitudes u_n multiply the Wannier states at sites n in the real-space wavefunction Ψ(x) = Σ_n u_n w(x − n). We work in units of the hopping matrix element, t = 1, and lattice spacing of the primary lattice defining the tight-binding problem, a = 1, unless otherwise noted. The statistics of spinless fermions implicitly allow for an arbitrary on-site interaction in the above single-band model. In the absence of an external potential the solutions form extended states, u_n = u_0 exp(inφ), with band energies E = 2 cos(φ), for 0 ≤ φ ≤ π. The band edges lie at E = ±2. In the presence of an oscillatory modulation of strength V, much weaker than the primary lattice, we can ignore modifications to the hopping. In this limit we impose a secondary lattice: V_n = V [cos(2παn) − 1]. For α irrational the additional potential establishes an incommensurate pseudorandom model, the Harper model (for Ω = 0 and V_D = 0). The potential V_D f_n adds disorder where f_n is a random number satisfying 0 ≤ f_n ≤ 1 for each site. The confinement potential, Ω n^2, applies to optical lattice systems.
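To make the model concrete, the tight-binding eigenproblem can be diagonalized directly for a small lattice. The snippet below is a minimal sketch, not the authors' code: it assumes the shifted-cosine convention V_n = V[cos(2παn) − 1] (so that the band edges sit at −2 − 2V and +2, consistent with the values quoted for Fig. 1), and the function name and trap centering at the middle site are our own choices.

```python
import numpy as np

def harper_hamiltonian(N, V, alpha, V_D=0.0, Omega=0.0, seed=0):
    """Dense matrix for the tight-binding model in units t = 1, a = 1.

    On-site term: V_n = V*(cos(2*pi*alpha*n) - 1) (an assumed constant shift
    of the usual Harper potential so the band edges sit at -2 - 2V and +2),
    plus optional disorder V_D*f_n and a trap Omega*(n - N//2)**2.
    """
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    diag = V * (np.cos(2 * np.pi * alpha * n) - 1.0)
    diag = diag + V_D * rng.random(N) + Omega * (n - N // 2) ** 2
    off = np.ones(N - 1)  # hopping sign chosen so free states have E = 2 cos(phi)
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# nearly-periodic example: alpha = m + eps with m = 1, eps = 0.005
H = harper_hamiltonian(N=500, V=0.5, alpha=1.005)
E = np.linalg.eigvalsh(H)
print(E.min(), E.max())  # spectrum contained in [-2 - 2V, 2] = [-3, 2]
```

By Gershgorin's theorem the spectrum of this matrix is guaranteed to lie in [−2 − 2V, 2], matching the shifted band edges discussed in the text.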
According to the Aubry-Andre conjecture [16] the Harper model exhibits a metal-insulator transition at the self-dual point V = 2. For V < 2 all states are extended while for V > 2 all states localize (the states at V = 2 are critical). The localized states are characterized by a nonzero Lyapunov exponent (inverse localization length), γ(E), where u_n(E) ∼ exp(−γn), and gaps in the energy spectra. While exceptions to the Aubry-Andre conjecture have been rigorously proven for specific values of α [17], we discuss here an additional and experimentally relevant counter example defined by: α = m ± ε, N^{-1} ≪ ε ≪ 1, (2) for integer m and a lattice of N sites. In the limit N → ∞ the secondary lattice defines a slowly varying, nearly-periodic potential. A similar, slowly varying potential has been considered in the context of one-dimensional localization in quasiperiodic systems [18]. In the limit defined by Eq. (2) the eigenstates of Eq. (1) with Ω = 0 and V_D = 0 display Anderson-like localization where we expect to find only extended states. To see this consider γ defined in the limit, N → ∞ [19]: γ(E) = lim_{N→∞} (1/N) Σ_n ln|u_{n+1}/u_n| = lim_{N→∞} (1/N) Σ_j ln|E − E_j|. (3) The first equality allows us to use the transfer matrix method to calculate γ for large system sizes. The solid line in the top panel of Fig. 1 plots the Lyapunov exponent versus energy for N = 10^7 and V = 0.5. The additional potential, V_n, shifts the lower band edge to E = −2 − 2V while leaving the upper band edge at E = 2. We see extended states in the center of the band, −2 < E < 2 − 2V, with γ = 0, as expected from the Aubry-Andre conjecture. However, near the band edges, −2 − 2V < E < −2 and 2 − 2V < E < 2, the states localize, γ > 0. The points E = −2 and 2 − 2V define mobility edges which are unexpected in one dimension but found in three-dimensional models with disorder. The localization is, in this sense, Anderson-like. We find that, for N = 10^7, the mobility edges persist for rational and irrational values of ε from 10^-5 to 10^-2. We conjecture that in the limit of Eq. (2) irrational numbers are approximated by rational numbers up to a number much smaller than N^{-1}. For N → ∞, the spectra can contain an infinite number of infinitely small gaps and therefore localized states, eliminating the distinction between an incommensurate and commensurate system [12,17].
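The transfer-matrix evaluation of the Lyapunov exponent can be sketched with the standard two-term recursion u_{n+1} = (E − V_n)u_n − u_{n−1}. The code below uses the same assumed shifted-cosine V_n as above, with illustrative parameter values: γ ≈ 0 inside the extended window −2 < E < 2 − 2V and γ > 0 above the upper mobility edge.

```python
import math

def lyapunov(E, N, V, alpha):
    """gamma(E) = (1/N) * sum_n ln|u_{n+1}/u_n| via the transfer-matrix
    recursion u_{n+1} = (E - V_n) u_n - u_{n-1}, with per-step rescaling
    to avoid floating-point overflow.
    Assumes V_n = V*(cos(2*pi*alpha*n) - 1)."""
    u_prev, u = 0.0, 1.0
    log_norm = 0.0
    for n in range(1, N + 1):
        V_n = V * (math.cos(2 * math.pi * alpha * n) - 1.0)
        u_next = (E - V_n) * u - u_prev
        s = abs(u_next) + abs(u)      # 1-norm of the two-component vector
        u_prev, u = u / s, u_next / s
        log_norm += math.log(s)
    return log_norm / N

gamma_mid = lyapunov(E=0.0, N=100_000, V=0.5, alpha=1.005)   # inside -2 < E < 1
gamma_edge = lyapunov(E=1.8, N=100_000, V=0.5, alpha=1.005)  # above 2 - 2V = 1
print(gamma_mid, gamma_edge)  # ~0 versus clearly positive
```

The per-step rescaling is the usual numerical trick: the product of the rescale factors carries the exponential growth, while the stored two-component vector stays of order one.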
The unexpected insulating behavior coincides with a devil's staircase-like structure in part of the energy spectrum [12,14]. The second equality in Eq. (3) shows that a degeneracy at E_j supports non-zero γ(E_j). The lower panel of Fig. 1 displays this staircase structure in the spectrum. We can understand the insulating states in a "semiclassical" approximation where ε plays the role of ℏ. We analyze the behavior of each regime as a function of energy. At low energies, E < −2, the slowly varying potential confines low energy states near the potential minima defined by V_n. Very little tunneling between minima forces localization. Intermediate energies, −2 < E < 2 − 2V, see a smaller barrier between minima allowing for extended states and, therefore, the first mobility edge at E = −2. A second mobility edge forms at E = 2 − 2V when states localize at the secondary lattice maxima. At first this seems counterintuitive but can be understood in a WKB approximation based on the slowly varying nature of V_n. A similar analysis was performed for a different model in Ref. [18]. Our results show that the high energy states, E > 2 − 2V, moving energetically above the lattice slow when passing secondary lattice maxima to force localization. We have checked that our analysis based on the WKB approximation reproduces the solid line in the upper panel of Fig. 1. As an additional check we can, in a continuum approximation [12,20], define a position variable, n → x, and a difference operator, u_{n+1} + u_{n−1} → 2 cos(p) u(x) (with p ≡ i∂/∂x), to give the semiclassical Hamiltonian: H_CL = −2 cos(p) + V(x), with the replacement V_n → V(x). The phase trajectories of H_CL produce extended and localized states (and therefore mobility edges) in the regimes obtained in Fig. 1.
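The phase-space criterion behind this picture can be made explicit: H_CL = −2cos(p) + V(x) admits a running (extended) trajectory at energy E only if a real momentum exists at every x, i.e. |E − V(x)| ≤ 2 along the whole secondary-lattice period. A small sketch, assuming as above that V(x) spans [−2V, 0], recovers both mobility edges:

```python
import numpy as np

def is_extended(E, V, num=10_000):
    """Semiclassical test for H_CL = -2*cos(p) + V(x): a trajectory runs
    through the whole lattice only if -2*cos(p) = E - V(x) has a real
    solution p at every x, i.e. |E - V(x)| <= 2 everywhere.
    V(x) = V*(cos(x) - 1) spans [-2V, 0] (our assumed convention)."""
    x = np.linspace(0.0, 2 * np.pi, num)
    return bool(np.all(np.abs(E - V * (np.cos(x) - 1.0)) <= 2.0))

V = 0.5
energies = np.linspace(-3.5, 2.5, 1201)
extended = [E for E in energies if is_extended(E, V)]
print(min(extended), max(extended))  # -> -2.0 and 2 - 2V = 1.0: the mobility edges
```

Energies below −2 fail the criterion at the potential maxima (states trapped in wells), and energies above 2 − 2V fail it at the maxima of the secondary lattice, in line with the WKB discussion above.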
We now discuss the possibility of observing this unique type of localization. In the solid state a necessary correction to the Harper model includes disorder where we add to V_n a potential of the form: V_D f_n. For V = 0 (and Ω = 0) this defines the one-dimensional Anderson model where we expect all states to localize for arbitrary V_D. However, for V_D ≠ 0, the states (otherwise extended in the V_D = 0 case) localize only weakly, which could allow some remnant of a mobility edge. The dot-dashed line in the upper panel of Fig. 1 plots γ for the same parameters as the solid line but with V_D = 1.0. We find that a finite amount of disorder obscures the position of the remnant-mobility edges while localizing all states.
In what follows we consider an essentially disorder-free manifestation of Eq. (1): one-dimensional, cold atom optical lattices. The interference of appropriately detuned lasers of wavelength λ = 2a can give rise to our tight-binding lattice with a sufficiently strong lattice height V_L. To create a secondary modulating potential, V_n, consider an additional pair of lasers at angles θ and π − θ to the primary lattice with wavelength λ′ and amplitude V′_L. The additional lasers interfere to modulate the energy of the nth site by: V′_L cos^2(πn(λ/λ′) cos θ). For small angles we can retrieve, up to an overall constant, our nearly-periodic Harper model with m = λ/λ′ an integer and ε ≈ −λθ^2/2λ′. For realistic parameters: V_L = 5E_R, V′_L = 0.1E_R, θ = 5°, and λ = λ′ (where E_R is the photon recoil energy), we find t ≈ 0.065E_R, V ≈ 0.055E_R, and ε ≈ 0.004, yielding the appropriate parameter regime. Furthermore, we find that, in the limit of Eq. (2), fluctuations in the relative phase do not alter the position of the mobility edge. We now include an important modification to the model which accounts for realistic finite size effects.
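The small-angle relation can be checked numerically. Under this geometry the secondary-lattice incommensuration is α = (λ/λ′)cosθ (our reading of the interference condition, consistent with the quoted expansion), so ε = α − m ≈ −λθ²/2λ′:

```python
import math

def incommensuration(lam, lam_prime, theta_deg):
    """alpha = (lam/lam') * cos(theta) for a secondary lattice formed by beams
    at angles theta and pi - theta to the primary lattice (assumed geometry,
    consistent with eps ~ -lam*theta^2 / (2*lam'))."""
    return (lam / lam_prime) * math.cos(math.radians(theta_deg))

alpha = incommensuration(lam=1.0, lam_prime=1.0, theta_deg=5.0)  # lambda = lambda'
m = round(alpha)
eps = alpha - m
print(m, eps)  # m = 1 and |eps| ~ 0.004, as quoted in the text
```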
A crucial addition to the Harper model in optical lattices is the parabolic confinement: Ω n^2, which leads to a finite particle number. We find that weak confinement leaves the mobility edges intact. To see this consider the local Lyapunov exponent: γ_L(E_j) = (2N_CL + 1)^{-1} Σ_{n=−N_CL}^{N_CL} ln|u_{n+1}/u_n|, where the semiclassical limits of the parabolic trap define the number of states participating in transport, 2N_CL + 1. The classical turning points give N_CL = 2|x_CL(E)| and Eq. (2) becomes: α = m ± ε, N_CL^{-1} ≪ ε ≪ 1. For Ω ∼ 10^-5 we find 2N_CL ∼ 10^3. In the limit Ω → 0 we retrieve the usual Lyapunov exponent, γ_L → γ. Fig. 2 plots the local Lyapunov exponent as a function of energy for N = 10^7, ε = 0.005, V = 0.5, V_D = 0, and Ω = 10^-5. The mobility edges remain even with a reduced number of states comprising the system. The inset shows the normalized density profile as a function of site number for three different chemical potentials, µ. At zero temperature we include states with E ≤ µ. For µ = 0.5 (dashed-dotted line) we find extended states with some modulation due to V_n. For µ = 1.5 (dashed line) we have crossed the mobility edge and the density pins to unity at some lattice sites. Here the formation of a mesoscopic version of the Anderson-like gapless insulator fixes the density. For µ = 3.0 we enter the band insulator regime which fixes a large fraction of the states at integer density.
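The density pinning in the inset of Fig. 2 can be reproduced in miniature by filling the trapped eigenstates up to µ. The sketch below uses a smaller N than in the text and the same assumed shifted-cosine V_n as before:

```python
import numpy as np

def density_profile(mu, N=2001, V=0.5, alpha=1.005, Omega=1e-5):
    """Zero-temperature density of trapped spinless fermions: occupy every
    eigenstate of the chain (with trap Omega*n^2 and shifted-cosine V_n,
    no disorder) whose energy lies at or below the chemical potential mu."""
    n = np.arange(N) - N // 2
    diag = V * (np.cos(2 * np.pi * alpha * n) - 1.0) + Omega * n**2
    H = np.diag(diag) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    E, U = np.linalg.eigh(H)
    return (U[:, E <= mu] ** 2).sum(axis=1)

rho_ext = density_profile(mu=0.5)  # below the upper mobility edge 2 - 2V = 1
rho_loc = density_profile(mu=1.5)  # above it: density pins to unity at some sites
print(rho_ext.max(), rho_loc.max())
```

Below the mobility edge the density stays everywhere below unity with weak modulation; above it, plateaus of unit filling appear in the wells of the secondary lattice.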
Dipole oscillations in harmonically confined atomic gases serve as a direct probe of localization [7,8]. A small shift in the center of mass results in harmonic oscillations in the absence of an external lattice. The presence of one or more weak lattices allows for weakly localized states which can suppress oscillations and lead to an effective under-damping of the center of mass motion. The addition of strongly localized states can, in the absence of dissipation, eventually pin the center of mass to effectively over-damp the center of mass oscillations. Strong experimental and theoretical evidence supports the possibility that band localization has indeed been observed in fermionic, one-dimensional optical lattices [7]. Similar evidence also suggests such behavior for strongly interacting bosons [8].
We now study the onset of the gapless Anderson-like insulator and its effect on center of mass oscillations. Consider the center of mass to be displaced ∆ lattice sites at some initial time T = 0. For extended states, the center of mass position, X(T), averages to zero for long times while localized states should pin the center of mass position, X ∼ ∆. The center of mass position can, for some parameters, demonstrate complex, damping-like behavior as a function of time, making a damping constant ill-defined. To extract a simple quantity to be compared with experiment we calculate the long time average of the center of mass position, <X>_∞, as a function of chemical potential by diagonalizing Eq. (1) with a parabolic potential, Ω = 10^-5, for N = 3000, ∆ = −3, and V_D = 0. As an intermediate step we require degenerate eigenstates (localized at the edges) to simultaneously diagonalize the parity operator since our system possesses reflection symmetry about the origin. The dashed line in Fig. 3 plots <X>_∞ as a function of chemical potential in the absence of a secondary lattice, V = 0. For µ < 2 the extended states perform several oscillations about the trap center but over long times average to zero displacement. Above the band edge (labeled B.E.), for µ > 2, localized states near the edge pin the center of mass near ∆. For µ ≳ 3 the system never leaves its initial position.
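The long-time average used here follows from dephasing: expanding an initial orbital in the eigenbasis of the Hamiltonian, the off-diagonal terms of <X(T)> average to zero for a nondegenerate spectrum, leaving <X>_∞ = Σ_j |c_j|² <j|X|j> (degenerate pairs require the parity bookkeeping described in the text). Below is a small self-contained check of that formula against brute-force time averaging on a toy Harper-plus-trap chain; all parameters, and the symmetry-breaking phase that avoids degeneracies, are our own illustrative choices.

```python
import numpy as np

# toy chain: Harper-like on-site term with a phase (to break parity) plus a trap
N = 41
n = np.arange(N) - N // 2
diag = 0.5 * (np.cos(2 * np.pi * 0.777 * n + 1.0) - 1.0) + 0.01 * n**2
H = np.diag(diag) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
X = np.diag(n.astype(float))                  # position (site index) operator

psi0 = np.exp(-0.5 * ((n - 3.0) / 2.0) ** 2)  # wavepacket displaced by 3 sites
psi0 /= np.linalg.norm(psi0)

E, U = np.linalg.eigh(H)
c = U.T @ psi0                                # eigenbasis amplitudes (real here)

# dephasing formula: <X>_inf = sum_j |c_j|^2 <j|X|j>
X_inf = float((c**2 * np.diag(U.T @ X @ U)).sum())

# brute force: average <psi(T)|X|psi(T)> over many random late times
rng = np.random.default_rng(1)
vals = []
for T in rng.uniform(0.0, 2e4, size=4000):
    psiT = U @ (np.exp(-1j * E * T) * c)
    vals.append(float(np.real(np.conj(psiT) @ X @ psiT)))
print(X_inf, np.mean(vals))                   # the two averages agree
```

Summing this single-orbital average over all occupied orbitals of the displaced gas gives the many-fermion <X>_∞ plotted in Fig. 3.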
A second weaker lattice causes a mobility edge to form energetically below the band edge. The solid line in Fig. 3 plots the same as the dashed line but with a second lattice, V_n, with V = 0.5 for chemical potentials near the upper mobility edge (labeled M.E.). <X>_∞ remains zero where we expect extended states but pins near ∆ for µ > 2 − 2V. The mesoscopic version of the gapless insulator results in the early onset of pinning in the regime 2 − 2V < µ < 2, in the limit of Eq. (2). Furthermore, the localized states with the additional lattice, V = 0.5, also display weak periodicity in <X>_∞ as a function of µ. These oscillations correspond to the chemical potential passing through peaks and valleys in the corrugated confinement potential.
Fluctuations in the lattice depth can soften the otherwise sharp mobility edge. The quantity of interest, 2V/t, can fluctuate wildly with only moderate changes in V_L at extremely large lattice depths. To see this consider an approximate expression in terms of the hopping extracted from an analysis of the related Mathieu problem: V/t ≈ (√π V/4)(V_L/E_R)^{-3/4} exp(2√(V_L/E_R)). A relative error in V and V_L, R_V and R_VL respectively, propagates to a relative error in 2V/t: [R_V^2 + R_VL^2 (3/4 − √(V_L/E_R))^2]^{1/2}. We have checked that this formula is quantitatively accurate for V_L ≳ 5E_R by comparing with error derived numerically from the exact tunnelling. We find that for R_V = R_VL = 5% the relative error in 2V/t remains below 20% for V_L < 20E_R.
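The quoted bound is easy to evaluate: with d ln(V/t)/d ln V_L = −(3/4 − √(V_L/E_R)) from the Mathieu estimate, the two independent relative errors add in quadrature.

```python
import math

def rel_error_2V_t(R_V, R_VL, VL_over_ER):
    """Propagated relative error in 2V/t, using
    V/t ~ (sqrt(pi)*V/4) * (V_L/E_R)**(-3/4) * exp(2*sqrt(V_L/E_R)),
    so d ln(V/t)/d ln V_L = -(3/4 - sqrt(V_L/E_R))."""
    return math.sqrt(R_V**2 + R_VL**2 * (0.75 - math.sqrt(VL_over_ER))**2)

# 5% errors in V and V_L keep 2V/t within 20% for V_L < 20 E_R, as stated
errs = [rel_error_2V_t(0.05, 0.05, v) for v in (5, 10, 15, 20)]
print([round(e, 3) for e in errs])  # -> [0.09, 0.131, 0.164, 0.193]
```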
We note that additional time dependence in the model discussed here possesses other applications. We take H_CL as a good approximation to the nearly-periodic Harper model in the limit of Eq. (2). In the presence of a pulsed secondary lattice: V ∝ Σ_j δ(T − jT_0), where for integer j the secondary lattice oscillates with period T_0, we simulate the kicked Harper model via H_CL. The kicked Harper model exhibits chaotic behavior with the "classical" to quantum crossover controlled by ε.
We have explicitly demonstrated the existence of a mobility edge (and the associated, unusual metal-insulator transition in a deterministic disorder-free environment) in suitably designed aperiodic cold atom optical lattice systems. The deterministic aperiodic background potential in these optical lattices leads to exotic and nontrivial energy eigenstates dependent on the relationship between irrational numbers and their rational approximations. The ensuing quantum localization occurs in the absence of disorder and therefore distinguishes itself from Anderson localization which, in the solid state, masks the presence of mobility edges formed from quasiperiodic potentials in one dimension.
We thank K. Park and G. Pupillo for valuable discussions. This work is supported by NSA-LPS and ARO-ARDA. | 2017-10-11T04:00:08.799Z | 2005-06-16T00:00:00.000 | {
"year": 2005,
"sha1": "166cd8b1a6a91858bf45de7bb9f81d78871837c2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0506415",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "45ab9056febc28737b32a5c051daa657f0d0051b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
59122940 | pes2o/s2orc | v3-fos-license | Individual Satisfaction and Tax Morale: The Perspective of Different Profession in Indonesia
Introduction
This research examines individual satisfaction and taxpayers' willingness to pay tax. The first objective of this research is to prove that individual satisfaction has a significant influence on tax morale in Indonesia. Individual satisfaction was proxied with financial condition and level of happiness, as used in Torgler (2004a). The second objective is to identify the dominant factors that affect individual satisfaction and tax morale from the perspective of tax consultants and contractors. This research suspects that the taxpayer's profession affects which factors dominate in influencing individual satisfaction and tax morale. The factors expected to affect satisfaction and tax morale are religious commitment, trust in government agencies, the democratic system, and the perception of other individuals. The tax consultant profession was chosen because tax consultants often face ethical dilemmas in exercising their professionalism and integrity, under pressure from clients and the government, while at the same time facing a competitive environment as professionals. They fully understand tax regulations, laws, and fiscal policies, but on the other hand they are tied to a professional code of conduct. The professional organization of Indonesian tax consultants, IKPI, has a code of conduct (Indonesia Tax Consultant Organization, 2014) whose articles 4a, 4b, 5a, and 5b state: "Upholding integrity, dignity, and honor: by maintaining public trust, being honest and lay on the line without sacrificing the recipients of services; Be professional: always use moral judgment in delivering the services, always act within service framework and respect public and government trust." Finn et al. (1988), who performed a survey of public accountants in the USA, found that tax issues are the toughest matters of ethics, such as clients' pressure to change information in the tax return. Leung and Cooper (1995) found that among the ethical dilemmas most frequently faced by accountants are clients' proposals for tax evasion. Because of these heightened ethical dilemmas, tax consultants hold stronger views toward government agencies, democratic systems, and individual satisfaction, which ultimately affects which factors dominate in influencing satisfaction and tax morale. In Indonesia before December 2014, tax consultant licenses were issued by the professional organization, the Indonesia Tax Consultant Organization (IKPI). The rule changed with the issuance of Regulation of the Minister of Finance No. 111/PMK.03/2014 on Tax Consultants, which made licensing the authority of the Director General of Taxes. This regulation was issued to discipline the tax consultant profession; one of its rules obliges tax consultants to submit their financial statements to the Director General of Taxes on a regular basis. Since the strengthening of tax enforcement in 2014 in Indonesia, tax consulting has become a profession much sought after by individual and corporate taxpayers, so the profession has experienced a significant increase in income, especially in big cities such as Surabaya.
In contrast with tax consultants, contractors generally do not fully understand tax rules, laws, and policies. They are also not bound by a tax code of ethics, nor do they face ethical tax dilemmas in their work. They focus only on growing profit and cash flow so that their business can be sustained. For contractors, taxes are part of operating expenses that have to be paid for the company to survive. Contractors were chosen because, since the end of 2012, property values in big cities of Indonesia, including Surabaya, have increased significantly. Infrastructure development in Surabaya helped to increase property prices in Surabaya and surrounding areas. As a result, contractors' income increased significantly, resulting in higher tax payments. Income tax for contractors in Indonesia uses gross income as the tax base, so an increase in the gross income of the business directly affects the amount of tax payable. This makes contractors the most appropriate respondents in this study for comparison with tax consultants.
The city of Surabaya was chosen for this study because in 2014 Surabaya was the city with the highest tax revenue in Indonesia, around Rp 8.9 trillion per year, an increase of 27% over the previous year (Wibowo, 2014). The large amount of investment entering the city of Surabaya caused a significant tax increase. In addition, tax consultants and construction entrepreneurs in Surabaya are experiencing significant increases in income.
The first contribution of this research is that it is the first attempt to compare tax morale from the perspective of two different professions, namely tax consultants and contractors. Reviewing the mindsets of these two different professions can provide a deeper understanding of tax morale from two different points of view. The second contribution is to provide an understanding of the influence of individual satisfaction, trust in government, the democratic system, and religious commitment on tax morale in Indonesia, from the perspectives of both tax consultants and contractors. The improvement of quality of life, an Indonesian government program since 2014, and transparency through e-government are expected to increase the tax morale of taxpayers in Indonesia, especially tax consultants and contractors. The third contribution is to provide an understanding of the dominant factors that affect the individual satisfaction and tax morale of both professions, so that the government can focus its efforts on increasing tax morale in Indonesia.
Individual Satisfaction and Tax Morale
An intrinsic factor alleged to influence tax morale in Indonesia is individual satisfaction. Indonesia's human development index from 1990 to 2015 increased by 30.5%, from 0.528 to 0.689 (United Nation Development Program, 2016). Based on data released by the World Health Organization (World Life Expectancy, 2015), life expectancy in Indonesia is 67 years for men and 71.2 years for women, ranking 113th in the world. Since 2014 the Indonesian government has focused on human development through four programs: "Smart Indonesia" (Indonesia Pintar), "Healthy Indonesia" (Indonesia Sehat), "Working Indonesia" (Indonesia Kerja), and "Prosperous Indonesia" (Indonesia Sejahtera) (Office Staf President, 2015). "Smart Indonesia" facilitates twelve years of free education for Indonesians. "Healthy Indonesia" provides health services without levies. "Working Indonesia" is the distribution of agricultural land to farmers and unemployed workers. The last, "Prosperous Indonesia", is achieved through the provision of subsidized housing and social security. The government's continuous improvement of its quality of life programs is expected to play a significant role in increasing tax morale and tax compliance in Indonesia. Torgler (2003, 2004a), researching Latin America and Asia, found that financial satisfaction and happiness affect tax morale. Gintis et al. (2008) state that humans are fundamentally moral beings: they derive pleasure from ethical acts but feel guilty when committing unethical acts. From an ethical point of view, paying taxes raises happiness, and doing so repeatedly reinforces the feeling of happiness. Sá, Martins, and Gomes (2015) conducted a study in Portugal proving that individual satisfaction affects tax compliance.
H1: Individual satisfaction has a significant influence on tax morale
2.2 Religious Commitment, Individual Satisfaction, and Tax Morale
Mohdali and Pope (2014), who researched religious commitment, found that intrapersonal religious commitment has a stronger effect than interpersonal religious commitment. Intrapersonal religious commitment is an individual commitment derived from individual beliefs and attitudes, while interpersonal religious commitment is a commitment derived from individual activism in religious organizations or communities (Pope and Mohdali, 2010). Okulicz-Kozaryn (2010) proved that religiosity makes people happy, especially in a religious nation like Indonesia. This happiness comes from the religious community, which fulfills the "need to belong". Following Mohdali and Pope (2014), who proxied religious commitment with two dimensions (intrapersonal and interpersonal), our research uses religious activity and individual belief to measure religious commitment.
Religious commitment is the level of understanding of religious beliefs and values and their application in the daily life of the taxpayer (Worthington et al., 2003). The principle of religion is to teach the community to act in accordance with prevailing norms and laws and to emphasize sanctions as a consequence of violations of social norms and applicable law. Mohdali and Pope (2014), in research conducted on taxpayers in Malaysia, found that religious commitment positively influences voluntary tax compliance. The culture of Malaysian society, which tends to hold strong religious values, proves to have a positive impact on individual tax compliance. Research on the motivation to donate in Kuala Lumpur, Malaysia, found that religious belief is a moderator that strengthens the motivation to donate (Teah, Lwin, & Cheah, 2014). The reason is that it is essential for individuals with strong religious beliefs to provide a signal in the form of action, where the action is in line with the religious doctrine they believe. Multicultural and multiracial Malaysia takes a holistic approach to religious belief rather than to a specific religion (Loch et al., 2010). Uyar et al. (2015) prove that religiosity has a positive influence on ethical awareness. The main concept of all religions is to prevent people from immoral acts and to always uphold the truth, be fair, and not deceive, which ultimately leads people to follow the law (Uyar et al., 2015). The power of religiosity, capable of mobilizing people to obey taxes and of forming ethical individuals, is extraordinary and can be used by the state to increase tax payments. Indonesia, with six official religions, is a society with a strong culture of religiosity, like Malaysia. If the religious commitment factor proves to be dominant, then the government needs to approach the religious side by providing tax-related understanding through religious education. Little of this has been done by the government, although Indonesia has 48,354 Islamic-based primary and secondary schools (Indonesia Central Bureau of Statistics, 2015), not including those of other religions. The hypotheses of this research are:
H2: Religious commitment has a significant influence on individual satisfaction
H3: Religious commitment has a significant influence on tax morale
Sá, Martins, and Gomes (2015), in a study conducted in Portugal, show that there is no relation between religiosity and tax morale. Welch et al. (2007), McGee (2012), and Jalili (2012) show results where religiosity does not affect tax morale. This is because, in certain countries, religiosity is seen as ethical or unethical depending on how far the state carries out all or part of the applicable religious law. If a country has a view of ethical or unethical acts that differs from the applicable religious law, then society does not rely on that religious law (as a standard of ethical or unethical behavior).
2.3 Trust in Government Institutions, Individual Satisfaction, and Tax Morale
Trust in government institutions is closely related to political stability, government effectiveness, regulatory quality, rules and policies, and corruption. If rules and policies are not formalized, the players spend a lot of time arguing about rules and policies, leaving less time for competing in productive activities (Ensley & Munger, 2001, in Torgler, 2004b). High trust in government means government institutions can create conditions that are safe, stable, free from violence, and just, and that support individual satisfaction. In this research, the taxpayer satisfaction indicator is measured by happiness and financial condition.
Government institutions have an enormous role in improving tax compliance, especially in Indonesia, where the government is struggling to improve the image of government institutions, communication between government and society, and the transparency of state finances. These three efforts are implemented by eradicating corruption and simplifying licensing procedures across government agencies, multiplying call centers for public complaints, and building information technology for e-government. The government's credibility as a manager of tax funds plays a significant role in improving a country's tax compliance. Picur and Riahi-Belkaoui (2006), in cross-country research on 30 developed and developing countries, found that preventing corruption and bloated bureaucracies increases tax morale. Their research found that tax compliance is highest in countries with strong control of corruption and a small bureaucracy (Picur & Riahi-Belkaoui, 2006). It is vital to protect citizens from corruption and bureaucracy, because these shape perceptions of a country's corruption and of the credibility of its bureaucracy, and can ultimately threaten the country's economic development. According to Torgler, Schaffner and Macintyre (2007), if the interests of taxpayers are represented by government institutions and they enjoy good public facilities, then taxpayer compliance will increase. Responsive government results from a strong relationship between tax payments and the availability of good public facilities (Bird et al., 2004). Under a government with low credibility, party politicians, legislators, and administrative staff hold great power, which lowers tax compliance. In a country where corruption occurs systematically and financial transparency is low, it cannot be assumed that paying taxes is an ethical and normative obligation. In such conditions, a person who does not pay taxes does not violate social norms; taxpayers may even feel cheated (Torgler, Schaffner & Macintyre, 2007). This result also agrees with Ibrahim, Musah and Abdul-Hanan (2015), who found that trust in government is a crucial factor driving tax morale. Research by Teah, Lwin and Cheah (2014) on confidence in donor institutions also shows that the donation venue affects the motivation to donate: individual trust in international charities is higher than in local charities, because international charities are considered more efficient and better at distributing funds to the needy. This supports the view that, in the case of tax payments, the credibility of state institutions as tax managers contributes substantially to tax compliance. State institutions perceived as clean, transparent, and efficient will motivate taxpayers to be more compliant in paying taxes. H4: Trust in the government agency has a significant influence on individual satisfaction. H5: Trust in the government agency has a significant influence on tax morale.

2.4 Democratic System, Trust in Government, Individual Satisfaction, and Tax Morale

The democratic system affects tax morale. Research by Torgler (2005) in Switzerland found direct democratic rights to have a strong influence on tax morale. A country that values the opinions of its citizens gets more support from them (Prinz, 2002, in Torgler, 2005). A government committed to running a direct democracy forces itself to hold back its power and signals that taxpayers are responsible individuals. According to Torgler (2005), direct democracy also means that the government does not ignore taxpayers or treat them as uncomprehending voters. Democracy in Indonesia is assessed using five indicators: accountability, rotation of power, open political recruitment, general elections, and the fulfillment of basic rights (Gaffar, 2005). These indicators are used because the taxes we discuss are limited to income taxes managed by the central government. So this research tries to assess
the perceptions of respondents about the democratic system in the central government. The government signals that taxpayer preferences are noticed and implemented in government processes. The higher the level of taxpayer participation in the decision-making process, the stronger the social contract based on trust, and the higher the tax morale of taxpayers (Torgler, 2005).
In a country with a democratic government, citizens may exercise their right to vote for, or decline to elect, a leader. Indonesia is a democratic country in which citizens choose their leaders at the central level, including the president, so the higher the taxpayer's assessment of the government's democratic system, the higher the trust in government institutions. Based on this explanation, the research hypotheses are: H6: The democratic system has a significant influence on individual satisfaction. H7: The democratic system has a significant influence on tax morale. H8: The democratic system has a significant influence on trust in government.
Perception of Others, Individual Satisfaction, and Tax Morale
According to Reisinger and Turner (1997), Indonesian culture is highly collectivistic and group-oriented. It places more emphasis on people, human relations, and the family. Friendship is defined by stable, long-term relationships, so for Indonesian people, community, togetherness, and sociability are seen as very important in social life. Indonesian personal relationships are inclusive, and solitude is perceived negatively (Reisinger and Turner, 1997). That is why perceptions of those around them, including friends, family, and colleagues, strongly influence the views of taxpayers in Indonesia, including their tax morale. Some taxpayers share the views of other individuals, while others disagree with them, and this affects individual satisfaction. This is a common problem for people entering a community: they are required to obey the will of the group, and this affects individual satisfaction. Indonesia is therefore a good setting in which to observe the effect of others' perceptions on individual satisfaction and tax morale. H9: The perception of other taxpayers has a significant influence on individual satisfaction. H10: The perception of other taxpayers has a significant influence on tax morale.
Religious Commitment and Trust in Government
Poppe (1996) stated that religious leaders can influence how church members view the government. He also noted that trust in the government is more likely when church members are religiously active. His research showed that individuals with a high level of religious behaviour are more likely than others to trust the government (Poppe, 1996). The research hypothesis is: H11: Religious commitment has a significant influence on trust in government.
Research Methodology
This research modifies the model used by Sá, Martins and Gomes (2015) to suit conditions in Indonesia; indicators and relationships among variables have been adjusted accordingly. This study also compares answers from two groups of respondents, i.e., tax consultants and contractors.
Method of Collecting Data
In this study, the data were obtained from a direct survey of taxpayers using a questionnaire. Data collection was done in two ways: by email, or by meeting the taxpayers directly to provide the questionnaire and conduct interviews. The statistical method used for the validity and reliability tests, goodness of fit, and hypothesis tests is Partial Least Squares SEM, with the help of the WarpPLS program. This research does not use a t-test to compare the answers of tax consultants and contractors, because a few questions in the questionnaire are specific to each respondent group; instead, we used descriptive analysis. Table 1 shows the variable indicators in this study. The measurement scale for the democratic system, religiosity, environmental perception, trust in government institutions, and tax morale variables uses a 4-point scale: 1 "Strongly Disagree", 2 "Disagree", 3 "Agree", 4 "Strongly Agree". Interpretation of respondents' answers is divided into the following 3 groups:
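The original grouping table is not reproduced in this copy. As an illustration only, the mapping of mean scores on the 4-point scale onto three interpretation groups might look like the sketch below; the equal-width cut-offs, group labels, and function name are assumptions, not the study's actual boundaries:

```python
def interpret_mean_score(mean_score, low=1.0, high=4.0, labels=("low", "medium", "high")):
    """Map a mean Likert score onto one of three interpretation groups.

    The equal-width intervals over the 1-4 scale are an assumption made for
    illustration; the study's own grouping table defines the real cut-offs.
    """
    width = (high - low) / len(labels)  # (4 - 1) / 3 = 1.0 per group
    for i, label in enumerate(labels):
        if mean_score <= low + (i + 1) * width or i == len(labels) - 1:
            return label

# Example: a mean answer of 3.4 on the 1-4 scale falls in the top interval.
print(interpret_mean_score(3.4))  # high
```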
Sample Determination
The population in this study consists of two groups: taxpayers who work as tax consultants, and contractors. The sampling technique used in this research is purposive sampling with the following criteria: a. Tax consultants: 1. Individual taxpayers who work as tax consultants; 2. Members of the IKPI organization; 3. Have worked as a tax consultant for at least one year. b. Contractors: 1. Individual taxpayers who own a construction business; 2. Have an office as a business representative; 3. Have worked as a contractor for at least one year.
The minimum sample size is calculated as ten times the number of variables in the research model. There are 6 variables in this study, giving a minimum sample size of 10 × 6 = 60 respondents. The number of respondents in this study was 60 tax consultants and 78 contractors.
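The rule of thumb above (ten observations per model variable) is simple arithmetic; a minimal sketch, with the function name chosen here for illustration:

```python
def minimum_sample_size(n_variables, ratio=10):
    """Rule-of-thumb minimum sample size: ten respondents per model variable."""
    return ratio * n_variables

# The study has 6 variables, so at least 60 respondents are needed;
# both groups (60 tax consultants, 78 contractors) meet this threshold.
n_min = minimum_sample_size(6)
print(n_min)  # 60
assert all(n >= n_min for n in (60, 78))
```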
Description of Respondents
The respondents' profiles, including age, gender, religion, marital status, last education, and annual income, are described in Table 3 below. Both groups of respondents have a high religious commitment, in terms of both intrapersonal and interpersonal religious commitment. The average answer on the trust in government variables is high, but close to the lower limit of 2.31. Respondents' answers on the democratic system variable also fall into the high category. Dependence on the perception of other taxpayers is in the high category for the contractors but in the low category for the tax consultants; this happens because tax consultants are more aware of tax rules and policies and are therefore more independent than the construction entrepreneurs. Regarding individual satisfaction, tax consultants and contractors fall into the very high and high categories, respectively, showing that the satisfaction level of tax consultants is higher than that of contractors. For tax morale, the responses of both groups are high, with tax consultants slightly higher than contractors. The goodness-of-fit results for the tax consultant and contractor models show that both meet the criteria, so both models are compatible with and supported by the data. Effect size criteria: weak (0.02), medium (0.15), and strong (0.35).
Partial Least Squares Analysis
Tables 7 and 8 can be interpreted as follows. The results of the tax consultant model show that taxpayer satisfaction and the democratic system are the strongest predictors of tax morale, with coefficients of 0.329 and 0.289. For the contractors, religious commitment and the democratic system are the strongest predictors of tax morale, with coefficients of 0.371 and 0.241. According to the tax consultants, religious commitment has the strongest influence on taxpayer satisfaction, whereas according to the contractors, the democratic system has the most powerful influence. Among the predictors of trust in government, both groups of respondents agree that the democratic system has the strongest influence. Table 9 shows that trust in government is the only variable that proved to be a mediating variable between the democratic system and taxpayer satisfaction in the tax consultant model; this relationship is not significant in the contractor model.
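In PLS path models, an indirect (mediated) effect such as democratic system → trust in government → taxpayer satisfaction is commonly quantified as the product of the two path coefficients. The sketch below illustrates that arithmetic; the coefficient values are hypothetical placeholders, not values from Tables 7-9:

```python
def indirect_effect(a, b):
    """Indirect (mediated) effect as the product of two path coefficients:
    a: predictor -> mediator, b: mediator -> outcome."""
    return a * b

# Hypothetical path coefficients (placeholders, not the study's estimates):
a = 0.50  # democratic system -> trust in government
b = 0.30  # trust in government -> taxpayer satisfaction
print(indirect_effect(a, b))  # 0.15
```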
The cross-loading results of both models show that the outer model meets convergent validity.
Similarly, the composite reliability coefficients of both models meet the criteria (Table 10), as do the Cronbach's alpha coefficients of the tax consultant and contractor models (Table 11). Discriminant validity is met for all variables in both models. The collinearity test among the predictor variables also meets the criterion VIF < 10 (Table 12).
Discussion
Based on the test results, tax consultant respondents consider the dominant factor affecting tax morale to be individual satisfaction. This contrasts with the perception of the contractors, for whom individual satisfaction does not affect tax morale. The difference in perception between these two groups of respondents can be traced to several causes. First, construction business income tends to be more volatile than income from tax consulting services. Although this research was done when the financial condition of contractors was improving, the fluctuations in business income mean that, for them, satisfaction is not related to tax morale. The buffer-stock theory of Carroll (1992) holds that consumers withhold assets as reserves against unpredictable future fluctuations in earnings. For tax morale, when business income is more volatile and the business climate is unstable (e.g., government rules and policies change rapidly), tax morale tends to be lower, because asset reserves must be prepared for when business declines. When this study was conducted, the Indonesian government had suspended the professional licensing of tax consultants for two years, so no new tax consultant licenses were issued during that time. Together with the implementation of stricter tax law enforcement, this made demand for tax consulting services higher than in other periods, so tax consulting services tended to be more stable than construction. The second cause is that tax consultants hold the perception that taxpayers in a better financial condition should pay bigger taxes. Contractors, in contrast, argue that greater income comes from working harder, which makes paying taxes feel heavier. The research of Kawulusan and Tjondro (2016) in Indonesia, which examined the influence of how hard taxpayers work on their perception of tax evasion, showed that taxpayers who assess themselves as working very hard increasingly agree with tax evasion.
This study found that the democratic system affects tax morale with a medium effect size for both tax consultants and contractors. According to Torgler (2005), the government thereby signals that taxpayer preferences are noticed and implemented in the governance process. The higher the level of taxpayer participation in the decision-making process, the stronger the social contract based on trust, and the higher the tax morale of taxpayers (Torgler, 2005).
According to Torgler (2004b), the active role of democratic citizens helps the state to monitor and control politicians, thereby reducing the information gap and the inflexibility of government power. The democratic system can represent the opinion of taxpayers to the government of the state, and a democratic system of government opens opportunities for citizens to express their views (Torgler, 2004a). This explanation is also supported by the average responses of both groups of respondents, which are in the high category for the democratic system factor in Indonesia (Table 1). Trust in the government proved not to affect tax morale for either group of respondents. The average answer for the trust in government variable tends to be low for both tax consultants and contractors, yet both groups' answers on tax morale tend to be high, as can be seen in the table. In Indonesia, trust in institutional government does not affect tax morale because it is common practice in Indonesian society for taxpayers to be "just paying taxes", without considering whether the amount paid reflects their actual circumstances; the only important thing is that they have already paid. What happened when tax officers checked the payments? In past years bribery was common, and the amounts were not high. We think society needs more time to change its mindset and pay the real taxes. This result is also supported by a survey conducted by the OECD (2013) in Indonesia asking "do you agree if you can see government expenditures, even if they result in tax increases?". The average answer was 3.7 on a scale of 1-5, with a substantial share of answers in the 2-3 range, showing that some respondents disagree with a tax increase even when government expenditure becomes more transparent. The culture of "just paying taxes" is stronger than the perceived importance of transparency and trust in institutional government. Trust in institutional government does not affect tax morale in countries where tax rules have not been applied strictly for long periods of time.
Mass tax enforcement only began with the signing of a cooperation agreement between the Directorate General of Taxation and the Indonesian National Police in 2014. According to Joulfaian (2009), several conditions cause tax evasion to develop: (1) bribing tax officers has become a general habit, and (2) there is insufficient reward for the ability of tax officers to detect tax evasion by taxpayers. However, Akdede (2006) states that if the required bribe is large, the taxpayer will choose to pay taxes rather than evade them. This means there are ways to increase the cost of a bribe: (1) increasing the probability of detection by higher-level inspectors, and (2) increasing the penalties for both parties involved in bribery (Akdede, 2006).
The perception of others proved to affect tax morale, but with a weak effect size for both tax consultants and contractors. This suggests that people in big cities no longer rely on information from other taxpayers; they are more aware of the importance of tax matters and prefer to seek information from tax experts. In addition, easy access to information in large cities helps reduce the influence of this variable on tax morale.
An interesting fact in this research is that religious commitment is the dominant factor affecting tax morale from the perception of the contractors. This means they voluntarily pay taxes because they obey social norms, uphold the truth, act fairly, and do not cheat; in this case, the power of religiosity is able to move contractors to pay taxes obediently. This result is supported by the research of Mohdali and Pope (2014) in Malaysia, which found that religious commitment has a positive effect on tax morale. It is also consistent with the research of Torgler (2007), which showed a negative correlation between religious membership and crime, and with Adam Smith's argument (Smith, 2010) in his Theory of Moral Sentiments that religiosity acts as a mechanism driving the human moral self. In contrast, this study found that for tax consultants, religious commitment does not affect tax morale.
According to the tax consultants, the dominant factor affecting individual satisfaction is religious commitment, with an effect size of 0.36, which falls into the strong category. From these results we can conclude that the religious commitment of tax consultants has a stronger influence on individual satisfaction than on tax morale. Okulicz-Kozaryn (2010) found that religiosity promotes social capital, which is predicted to produce high individual satisfaction. Religiosity accommodates the "need to belong" that people feel, thus increasing the individual taxpayer's satisfaction.
Conclusions and Recommendations
Tax consultants and contractors have different views on the factors affecting tax morale. From the tax consultants' perspective, tax morale is dominantly influenced by individual satisfaction and the democratic system, while from the contractors' point of view, religious commitment and the democratic system tend to be more dominant. The different business characteristics of these two groups help explain their different perceptions.
The democratic system proved to affect tax morale with a medium level of influence, according to both tax consultants and contractors. This suggests that both groups consider that the government, as the policy maker, should involve taxpayers. Democracy is closely related to the principles of learning and the bottom-up approach (Papaioannou, 2007), so taxpayers expect the taxation rules and policies in force to be a contract or an agreement with them.
An interesting finding is that trust in institutional government does not affect the tax morale of either group of respondents. Although the two groups' answers are not very different, both agree that trust in the government does not affect tax morale. The practice of "just paying taxes" in the past is one of the reasons: it takes time to get Indonesian taxpayers to pay the real taxes, and this leaves the Indonesian government unable to use trust to increase the tax morale of Indonesian society.
The perception of other taxpayers proved to affect tax morale for both groups, but with a very small (weak) level of influence. This result shows that awareness of taxation no longer depends on the opinions or arguments of the surrounding environment (family, business associates, or friends). Taxpayers see tax as an important factor affecting business sustainability, so they depend more on the opinions of experts in the field of taxation.
A suggestion for further research is to place more emphasis on surveying broader forms of the democratic system in which taxpayers are involved in taxation-related decision-making. For further research on government institutions, it is also important to consider which government institutions are dominant in respondents' perceptions, i.e., the events closest to the time of the survey that may affect respondents' perceptions.
Table 3 .
Description of Respondents
Table 4
above shows that the religious perceptions of tax consultants and contractors fall into the high category (above 2.31).
Table 5 .
Model Fit and Quality Indices for Tax Consultant and Contractors Models
Table 6
shows that the tax consultant model can explain 48.8% of the variance in trust in the government, while the contractor model explains only 1.8%. The tax consultant model can explain 45.2% of the variance in taxpayer satisfaction, but the contractor model only 26.3%. Both models succeed in explaining tax morale, with R-squared values of 53.2% and 54.1%, respectively.
Table 7 .
Path Coefficients and P-value for Direct Effect
Table 9 .
P-value for Indirect Effect
Table 10 .
Composite Reliability Coefficients
Table 12 .
Collinearity test among the predictor variables | 2019-05-30T23:44:35.619Z | 2018-05-21T00:00:00.000 | {
"year": 2018,
"sha1": "824bc1462200b770219e430fe815ab6645a68045",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5296/jpag.v8i2.13168",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3b36e513625fb5319d31986a499be2817f8aa32d",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
207398338 | pes2o/s2orc | v3-fos-license | Public health benefits from legalizing cannabis: both sides of the coin
Although Spithoff and colleagues mention some benefits of legalizing cannabis, they provide more details about the potential harms. The positives are limited to reducing stigma and “realization of therapeutic benefits.”[1]
It may be difficult for physicians viewing cannabis through the lens
We appreciate Gravel and colleagues' efforts to develop a clinical decision rule that successfully identifies skull fractures among young children with mild head trauma and no indication for head computerized tomography (CT). 1 We wonder, however, if these skull fractures warrant diagnosis and, specifically, if affected children benefit from their detection. Isolated skull fractures have been suggested as an example of overdiagnosis, the accurate detection of an abnormality from which a patient does not experience net benefit. 2,3 Follow-up outcome data, such as receipt of surgical repair, are needed to assess the possibility of patient benefit, but they are not included in the present study. Other studies have found that clinical deterioration and surgical intervention are rare among well-appearing children with isolated skull fractures. 4,5 Even when repair is performed among this cohort, the impetus is generally cosmetic. If growing skull fractures are the concern (which, as the authors concede, are exceedingly uncommon), then the important research question becomes how to best predict these specific fractures rather than how to predict skull fractures in general.
Faced with an unclear benefit of testing, we must consider the potential harms. How often did skull fracture findings trigger CT scans, for which there is an added risk of malignancy? Though isolated skull fractures do not necessarily warrant routine hospitalization, studies have demonstrated that most children with this finding are indeed admitted to hospital. 5 Parental anxiety and guilt resulting from the news that their young child has a skull fracture is an additional concern.
Improving the means to detect abnormalities is a timeless objective in medicine, but we must pair this work with efforts to determine whether children receive more benefit than harm as a result of increased or improved diagnosis.
Public health benefits from legalizing cannabis: both sides of the coin
Although Spithoff and colleagues mention some benefits of legalizing cannabis, they provide more details about the potential harms. The positives are limited to reducing stigma and "realization of therapeutic benefits." 1 It may be difficult for physicians viewing cannabis through the lens of addiction to see any silver lining from legalization. However, there are both individual and public health benefits that should be balanced against possible harms. The first and most immediate benefit is that patients who use cannabis for therapeutic purposes will no longer fear legal sanctions.
Both the US and Canada are currently dealing with an increase in addiction and death from fentanyl, oxycodone and other opiates. Two large studies have shown about a 25% decrease in deaths from opiate overdose associated with the legalization of medical cannabis and the availability of dispensaries. 2,3 The recent COMPASS study found that the use of cannabis for chronic pain has a reasonable safety profile and that patients often used it as a substitute for other more harmful drugs, such as opiates, NSAIDS (nonsteroidal anti-inflammatory drugs) and alcohol. 4 Harm reduction experts have also expressed concerns that professional societies are jeopardizing patient health by requiring a much higher standard for the prescribing of cannabis over the prescribing of opioids. 5 Legalization of cannabis would remove research blockades to begin proper study of cannabidiol. This compound is not associated with a "high," is not known to be addictive and has antiseizure, antianxiety and antipsychotic properties. 6 Up to this point, proper study of cannabidiol and other cannabinoids has been restricted by their criminalized status. | 2018-04-03T00:41:37.747Z | 2016-01-05T00:00:00.000 | {
"year": 2016,
"sha1": "8d1ef901a77f6cc7972e2bc5a318900c3b3834e6",
"oa_license": null,
"oa_url": "https://www.cmaj.ca/content/cmaj/188/1/63.2.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "c63ca86a5c0c9259b243ce9f9da43dc0636f1622",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
232761995 | pes2o/s2orc | v3-fos-license | Automated Analysis of Cerebrospinal Fluid Flow and Motile Cilia Properties in The Central Canal of Zebrafish Embryos.
Circulation of cerebrospinal fluid (CSF) plays an important role during development. In the zebrafish embryo, the flow of CSF has been found to be bidirectional in the central canal of the spinal cord. In order to compare conditions and genetic mutants with each other, we recently automated the quantification of the velocity profile of exogenous fluorescent particles in the CSF. We demonstrated that the beating of motile and tilted cilia localized on the ventral side of the central canal is sufficient to generate such bidirectionality locally. Our approach can easily be extended to characterize CSF flow in various genetic mutants. We provide here a detailed protocol and a user interface program to quantify CSF dynamics. In order to interpret potential changes in CSF flow profiles, we provide additional tools to measure the central canal diameter, characterize cilia dynamics, and compare experimental data with our theoretical model in order to estimate the impact of cilia in generating a volume force in the central canal. Our approach can also be used to measure particle velocity in vivo and to model flow in diverse biological solutions.
One challenge now is to generalize this approach in order to compare a variety of genetic animal models and experimental conditions. This is of special interest for investigations on
The goal of this protocol is to guide the computation of CSF flow profiles from fluorescence measurements. We developed a user-friendly interface program to generate flow profiles from collected data. We additionally provide a protocol to compare experimentally measured profiles of embryonic CSF flow to a theoretical profile. Our theoretical model relies on the assumption that the average flow rate is null. In this case, the volume force that gives rise to CSF flow can be computed and compared between different conditions. The volume force depends on different cilia parameters: = ℎ , where is the average cilia beating frequency, ℎ the width of the region occupied by the cilia, the viscosity, and a dimensionless parameter. We finally show how to quantify the main cilia frequency using transgenic embryos with cilia labeled by fluorescent proteins. c. Pull microinjection needles from borosilicate glass capillaries with a 2-step needle puller.
Materials and Reagents
Adjust temperature and pulling force to produce a long and sharp funnel-shaped needle with an approximate tip diameter of 1-3 µm (equivalent to egg injection pipettes). iii. Imaging can be performed with any imaging system as long as the signal-to-noise ratio (SNR) is high enough and the imaging speed is above a few frames/s. An upright or inverted spinning disk can be chosen. Spinning disk imaging seems the most adequate, but a widefield microscope could be suitable as well with bright fluorescent beads. Confocal or two-photon microscopes could also be used, although the imaging speed might be limited with classical systems. v. Because imaging was mostly performed in the sagittal plane at the embryonic and larval stage using visible lasers for excitation, pigmentation was not an issue and did not therefore require the use of PTU.
a. As the central canal shape and cilia properties may differ along the rostrocaudal axis, we recommend always imaging at the same rostrocaudal position; in our case, we focused on 3 segments above the yolk extension (Figure 1). Because spinning disk microscopes perform sharp optical sectioning, we advise using either the Differential Interference Contrast c. Use the cilia analysis program to extract cilia beating frequency, length, and angle. See the Data analysis section for more details.
Data analysis
On top of the experimental procedures, we detail below two independent analysis workflows.
The first analysis (Section A) makes it possible to obtain the CSF flow profile from the time series of bead trajectories acquired in Procedure A. It also allows measuring the total CSF flow rate (Section B), which is expected to be null in WT embryos (Thouvenin et al., 2020). If adequate (see conditions below), the experimentally measured CSF flow can be fitted to a bidirectional flow model (Thouvenin et al., 2020) in order to extract the volume force generated by the motile cilia.
The second analysis workflow (Section C) uses the cilia beating movies (Procedure B) to extract cilia parameters, including each cilium's main beating frequency, length, and angle.
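One common way to extract a dominant beating frequency from a cilium's intensity time trace is the peak of its power spectrum. The sketch below illustrates that idea on a synthetic signal; the assumed frame rate and function name are placeholders and do not reproduce the exact method of the cilia analysis program:

```python
import numpy as np

def main_frequency(trace, frame_rate_hz):
    """Return the dominant frequency (Hz) of a 1D intensity time trace,
    taken as the peak of its power spectrum (DC component excluded)."""
    trace = np.asarray(trace, dtype=float) - np.mean(trace)
    power = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / frame_rate_hz)
    return freqs[1:][np.argmax(power[1:])]  # skip the DC bin

# Synthetic example: a 20 Hz oscillation sampled at 100 frames/s for 2 s.
t = np.arange(0, 2.0, 1.0 / 100.0)
trace = 1.0 + 0.5 * np.sin(2 * np.pi * 20.0 * t)
print(main_frequency(trace, 100.0))  # 20.0
```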
If appropriate, the last section (Section D) aims to combine the outputs from the two analysis workflows and extract a parameter we called α, an ad hoc coupling parameter that measures how efficiently multiple cilia work together to generate a flow.
A. CSF flow profile generation
Specifically for this protocol, we developed a user-interface platform to allow users to generate CSF flow profiles as easily as possible. Here, we present the analysis workflow (Figure 2) and show how to generate a first CSF flow profile from the fluorescent bead measurements. More subtle fine-tuning of parameters is available within the user interface to adapt to variable imaging conditions, and is fully described in the ManualGeneProfile.pdf document distributed with the software.
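The core of the analysis below is converting the slope of each kymograph line into a particle velocity. As an illustration of that conversion, independent of the GeneProfile implementation (pixel size, frame interval, and sign convention are assumptions):

```python
def slope_to_velocity(slope_px_per_frame, pixel_size_um, frame_interval_s):
    """Convert a kymograph line slope (pixels along the rostrocaudal axis
    per frame) into a particle velocity in um/s.

    The convention that positive slopes point caudally and negative slopes
    rostrally is an assumption made for this sketch.
    """
    return slope_px_per_frame * pixel_size_um / frame_interval_s

# A bead advancing 2 px/frame, with 0.2 um pixels and 10 ms between frames:
v = slope_to_velocity(2.0, pixel_size_um=0.2, frame_interval_s=0.01)
print(v)  # 40.0 (um/s)
```

Averaging such velocities over all lines detected at one dorsoventral position gives one point of the flow profile.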
As input, the analysis takes 2D time lapses of beads flowing in the central canal (Figure 2A1). To generate kymographs, for a given dorsoventral position we swap the axes so that the X axis corresponds to the rostrocaudal position and the Y axis to time. The bead trajectories then appear as lines whose slopes reflect the direction and speed of the particles along the rostrocaudal axis. To build the flow profile, the program filters each kymograph and performs automatic segmentation of all lines in each kymograph (Figure 2B2). It then extracts the slope of each line and converts it into the particle velocity, in order to build a histogram of velocities for each dorsoventral position (Figure 2B3). By calculating the average velocity at each position, we generate the CSF flow profile (Figure 2C1). b. Alternatively, download and install the standalone application. Once it is installed, go to the command window and navigate to the installed folder. Run: application\GeneProfile. The user interface window in Figure 2A2 opens.
3. Select .tif files to analyze. Multiple files can be selected at once, and they will be processed one by one.

B. Fit of the flow profile with the bidirectional flow model

The fit extracts four parameters: 1) the volume force generated by motile cilia, 2) the pressure gradient that is established in the canal to oppose the cilia beating, 3) the width of the region bearing motile cilia, and 4) the diameter of the central canal. In order for the fit to be meaningful, the two assumptions of our bidirectional flow model should be respected: a cylindrical geometry for the central canal and the "no net flux" condition (Thouvenin et al., 2020). As a reminder, under these two assumptions, we showed that the averaged velocity profile can be fairly described by a piecewise second-order polynomial whose coefficients involve the diameter of the channel, the viscosity of the CSF, the pressure gradient, and the average force per unit volume generated by the cilia. The latter two parameters can be measured experimentally from the CSF velocity profiles processed with the GeneProfile interface.
Our theoretical model describes a symmetric bidirectional flow for which the net total flow rate is null. In the user interface, before launching the fitting tool, we provide the user with an estimate of the bidirectionality of the flow, called β. β varies between 0% for a purely monodirectional flow and 100% for a purely bidirectional flow (the flow rate advected caudally equals the flow rate advected rostrally).
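The bidirectionality index can be illustrated with a short Python sketch (the protocol's own code is Matlab, and the exact published formula for β was lost in this extraction, so the definition below is only a plausible form consistent with the stated limits):

```python
import numpy as np

def beta_index(v):
    """Bidirectionality index in percent: 0 for a purely monodirectional flow,
    100 when the caudally and rostrally advected flow rates are equal.
    (Plausible form consistent with the text; the published formula may differ.)"""
    q_pos = np.sum(v[v > 0])           # flow rate advected in one direction
    q_neg = -np.sum(v[v < 0])          # and in the other
    if q_pos + q_neg == 0:
        return 0.0
    return 100.0 * 2.0 * min(q_pos, q_neg) / (q_pos + q_neg)

beta_index(np.array([1.0, 2.0, 3.0]))           # → 0.0   (monodirectional)
beta_index(np.array([-1.0, -1.0, 1.0, 1.0]))    # → 100.0 (symmetric bidirectional)
```

The two calls above bracket the two limiting cases quoted in the text.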
We advise users not to perform the fit of velocity profiles for values of β < 70%: below this arbitrary threshold, the "no net flux" condition is no longer valid, and the parameters of the fit are therefore meaningless.
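A minimal numerical sketch of such a constrained fit, in Python rather than the protocol's Matlab, might enforce the "no net flux" condition by construction. The mean-subtracted basis functions and the synthetic quadratic profile are illustrative assumptions, not the published piecewise model:

```python
import numpy as np

def fit_bidirectional_profile(z, v):
    """Least-squares fit of a second-order velocity profile under a discrete
    'no net flux' constraint: the basis functions are mean-subtracted, so any
    fitted profile has (numerically) zero mean flow across the canal."""
    A = np.column_stack([z**2 - np.mean(z**2), z - np.mean(z)])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coef, A @ coef

# synthetic zero-net-flux profile: positive near the center, negative at the walls
z = np.linspace(-1.0, 1.0, 101)            # dorsoventral position (canal radius = 1)
v_true = -3.0 * (z**2 - np.mean(z**2))
coef, v_fit = fit_bidirectional_profile(z, v_true)
# coef[0] ≈ -3, and the fitted profile carries no net flux: mean(v_fit) ≈ 0
```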
1. Once the flow profile is calculated (Data Analysis Section A), the "Fit Model" button on the right ( Figure 2A2) turns green. The fit can be performed by pushing this button.
2. Two possibilities can arise: a. If β < 70%, we estimate that the flow is not bidirectional enough to fit the velocity with our model, and the warning message "We advise the user not to go further" is displayed. If the two assumptions of our model are not respected, we advise clicking the "Stop here" button in order to stop the fitting process. If the flow profile was robustly measured, a low β means that the flow cannot be explained simply by the action of motile cilia in a closed cylindrical geometry, and that another model should be developed by taking other physical effects into account.
b. If β > 70%, the flow can be reasonably fitted with the simple model.
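Looking back at Data Analysis Section A, the central slope-to-velocity step of the kymograph analysis can be sketched in Python (illustrative only: the shared protocol code is Matlab, and real data require the filtering and line segmentation described above, not a simple per-frame maximum):

```python
import numpy as np

def kymograph(stack, dv_row):
    """Kymograph for one dorsoventral position: rows = time, columns = rostrocaudal."""
    return stack[:, dv_row, :]

def velocity_from_kymograph(kymo, dt, dx):
    """Estimate a particle's velocity from the slope of its trace in the kymograph."""
    t = np.arange(kymo.shape[0]) * dt
    x_peak = np.argmax(kymo, axis=1) * dx   # brightest rostrocaudal position per frame
    return np.polyfit(t, x_peak, 1)[0]      # slope = velocity (sign gives direction)

# synthetic time lapse: one bead moving at 5 µm/s (assumed units: s/frame, µm/pixel)
dt, dx = 0.1, 1.0
stack = np.zeros((50, 3, 200))
for i in range(50):
    stack[i, 1, int(round(5.0 * i * dt / dx))] = 1.0
v = velocity_from_kymograph(kymograph(stack, 1), dt, dx)   # ≈ 5 µm/s
```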
C. Cilia frequency measurement
This section describes the analysis protocol used to estimate the main beating frequency of cilia (see Thouvenin et al., 2020) from the fluorescent cilia time-lapse acquisitions described in Procedure B.
As in Data Analysis Section A, we describe here the principle of the analysis workflow ( Figure 3), as well as key instructions to perform a first analysis. Detailed instructions, as well as descriptions of fine-tuning parameters, are available in the external document ManualCilia.pdf that can be found with the shared code.
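To make the Fourier-based frequency extraction concrete, here is a minimal Python sketch of a per-pixel dominant-frequency map (the protocol's own implementation is in Matlab; the array sizes and the synthetic 20 Hz signal are illustrative):

```python
import numpy as np

def main_frequency_map(stack, fs):
    """Dominant temporal frequency per pixel of a (n_frames, ny, nx) time lapse."""
    n = stack.shape[0]
    spec = np.abs(np.fft.rfft(stack - stack.mean(axis=0), axis=0))
    spec[0] = 0.0                                  # drop any residual DC component
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(spec, axis=0)]

# synthetic pixel beating at 20 Hz, sampled at 100 Hz for 2 s
fs, n = 100.0, 200
t = np.arange(n) / fs
stack = np.zeros((n, 4, 4))
stack[:, 1, 1] = np.sin(2 * np.pi * 20.0 * t)
fmap = main_frequency_map(stack, fs)
fmap[1, 1]   # → 20.0 Hz (frequency resolution is fs / n = 0.5 Hz)
```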
The program first loads the imaging data with cilia dynamics versus time ( Figure 3A) and applies a local average filter (of size 4 by default) to increase the cilia SNR. For each pixel in the filtered data, the time series is extracted and Fourier transformed ( Figure 3B). The 5 maximal peaks of the Fourier spectrum are extracted, but, by default, only the first one is used. The frequencies of the other peaks can be used for validation (e.g., if sampling errors are made, the sum of the frequencies of the first and second peaks equals the acquisition frequency). A 2D image with the main frequency found at each pixel is thus created ( Figure 3C). In noisy regions, it outputs a random frequency, but in cilia regions it draws regions of interest of a given frequency that we consider to be single cilia. Each region of interest containing more than 40 pixels (7.5 μm²) is finally segmented and analyzed. The parameters frequency, diameter, eccentricity, area, angle, and major axis length are extracted and associated with the corresponding cilia. If a comparison between dorsal and ventral cilia is of interest (Thouvenin et al., 2020), the program allows the user to manually draw a line at the center of the central canal and classify cilia as dorsal or ventral with respect to their position relative to the central line. This procedure is not described further here, but can be found in the document "ManualCilia.pdf" located in the same folder as the shared Matlab code. | 2021-04-03T06:17:05.442Z | 2021-03-05T00:00:00.000 | {
"year": 2021,
"sha1": "1dd9f8f68335bb7479de547009c2c0ef010238ba",
"oa_license": "CCBY",
"oa_url": "https://bio-protocol.org/pdf/Bio-protocol3932.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "59213305a2e7c1ed746e72f3b5f097a1202141b5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119005421 | pes2o/s2orc | v3-fos-license | First events from the CNGS neutrino beam detected in the OPERA experiment
The OPERA neutrino detector at the underground Gran Sasso Laboratory (LNGS) was designed to perform the first detection of neutrino oscillations in appearance mode, through the study of nu_mu to nu_tau oscillations. The apparatus consists of a lead/emulsion-film target complemented by electronic detectors. It is placed in the high-energy, long-baseline CERN to LNGS beam (CNGS) 730 km away from the neutrino source. In August 2006 a first run with CNGS neutrinos was successfully conducted. A first sample of neutrino events was collected, statistically consistent with the integrated beam intensity. After a brief description of the beam and of the various sub-detectors, we report on the achievement of this milestone, presenting the first data and some analysis results.
Introduction
The solution of the long-standing solar and atmospheric neutrino puzzles has come from the hypothesis of neutrino oscillations. This implies that neutrinos have non-vanishing and non-degenerate masses, and that the flavor eigenstates involved in weak interaction processes are superpositions of the mass eigenstates [1].
Several key experiments conducted in the last decades with solar neutrinos (see [2] for a review), and with atmospheric, reactor and accelerator neutrinos, have contributed to building up our present understanding of neutrino mixing. Atmospheric neutrino oscillations, in particular, have been studied by the Super-Kamiokande [3], Kamiokande [4], MACRO [5] and SOUDAN2 [6] experiments. Long-baseline experiments confirmed the oscillation hypothesis with accelerator neutrinos: K2K [7] in Japan and MINOS [8] in the USA. The CHOOZ [9] and Palo Verde [10] reactor experiments excluded the ν µ → ν e channel as the dominant one in the atmospheric sector.
However, the direct appearance of a different neutrino flavor is still an important open issue. Long-baseline accelerator neutrino beams can be used to probe the atmospheric neutrino signal and confirm the preferred solution of ν µ → ν τ oscillations. In this case, the beam energy should be large enough to produce the heavy τ lepton. This is one of the main goals of the OPERA experiment [11] that uses the long baseline (L=730 km) CNGS neutrino beam [12] from CERN to LNGS, the largest underground physics laboratory in the world. The challenge of the experiment is to measure the appearance of ν τ from ν µ oscillations. This requires the detection of the short-lived τ lepton (cτ = 87.11 µm) with high efficiency and low background. The τ is identified by the detection of its characteristic decay topologies, in one prong (electron, muon or hadron) or in three prongs. The τ track is measured with a large-mass sampling calorimeter made of 1 mm thick lead plates (absorber material) interspaced with thin emulsion films (high-accuracy tracking devices). This detector is historically called Emulsion Cloud Chamber (ECC) [11]. Among past applications, it was successfully used in the DONUT experiment for the first direct observation of the ν τ [13].
The OPERA detector is made of two identical Super Modules, each consisting of a target section of about 900 ton made of lead/emulsion-film ECC modules (bricks), of a scintillator tracker detector needed to pre-localize neutrino interactions within the target, and of a muon spectrometer.
The construction of the CNGS beam has recently been completed, and a first run took place in August 2006 with good performance of the facility. First data were collected by the OPERA detector, still without ECC bricks installed, yielding a preliminary measurement of the beam features along with the collection of a number of neutrino interactions (319) consistent with the integrated beam intensity of 7.6 × 10¹⁷ protons on target (p.o.t.). The OPERA experiment operated very satisfactorily during the run.
The CNGS beam and the OPERA experiment
The CNGS neutrino beam was designed and optimized for the study of ν µ → ν τ oscillations in appearance mode, by maximizing the number of charged current (CC) ν τ interactions at the LNGS site. A 400 GeV proton beam is extracted from the CERN SPS in 10.5 µs short pulses with a design intensity of 2.4 × 10¹³ p.o.t. per pulse. The proton beam is transported through the transfer line TT41 to the CNGS target T40 [12]. The target consists of a series of helium-cooled thin graphite rods. Secondary pions and kaons of positive charge produced in the target are focused into a parallel beam by a system of two magnetic lenses, called horn and reflector. A 1,000 m long decay pipe allows the pions and kaons to decay into muon neutrinos and muons. The remaining hadrons (protons, pions, kaons) are absorbed by an iron beam dump. The muons are monitored by two sets of detectors downstream of the dump; they measure the muon intensity, the beam profile and its center. Further downstream the muons are absorbed in the rock, while neutrinos continue their travel towards Gran Sasso.
The average neutrino energy at the LNGS location is ∼ 17 GeV. The ν̄ µ contamination is ∼ 4%, the ν e and ν̄ e contaminations are lower than 1%, while the number of prompt ν τ from D s decay is negligible. The average L/E ν ratio is 43 km/GeV. Due to the Earth's curvature, neutrinos from CERN enter the LNGS halls at an angle of about 3° with respect to the horizontal plane.
Assuming a CNGS beam intensity of 4.5 × 10¹⁹ p.o.t. per year and a five-year run, about 31,000 CC plus neutral current (NC) neutrino events will be collected by OPERA from interactions in the lead-emulsion target. Of these, 95 (214) CC ν τ interactions are expected for oscillation parameter values ∆m²₂₃ = 2 × 10⁻³ eV² (3 × 10⁻³ eV²) and sin²2θ₂₃ = 1. Taking into account the overall τ detection efficiency, the experiment should gather 10-15 signal events with a background of less than one event.
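The quoted 95 vs. 214 expected ν τ interactions scale roughly as the square of ∆m²₂₃, as the standard two-flavor appearance probability makes explicit (a cross-check computed here, not taken from the paper):

```python
import math

def p_mutau(dm2_ev2, sin2_2theta, L_km, E_gev):
    """Two-flavor nu_mu -> nu_tau appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

L, E = 730.0, 17.0            # CNGS baseline and average energy quoted in the text
p2 = p_mutau(2e-3, 1.0, L, E)
p3 = p_mutau(3e-3, 1.0, L, E)
p3 / p2   # ≈ 2.24, close to the 214/95 ≈ 2.25 ratio of expected nu_tau events
```

In the small-oscillation limit P grows as (∆m²)², which is why the ratio is near (3/2)² = 2.25.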
In the following, we give a brief description of the main components of the OPERA detector. Each of the two Super Modules (SM1 and SM2) consists of 103,168 lead/emulsion bricks arranged in 31 target planes ( Fig. 1), each one followed by two scintillator planes with an effective granularity of 2.6 × 2.6 cm². These planes serve as trigger devices and allow selecting the brick containing a neutrino interaction. A muon spectrometer at the downstream end of each SM allows measuring the muon charge and momentum. A large-size anti-coincidence detector placed in front of SM1 makes it possible to veto (or tag) interactions occurring in the material and in the rock upstream of the target. The run of August 2006 was conducted with electronic detectors only, recording neutrino interactions in the rock upstream of the detector, in the passive material of the mechanical structure and in the iron of the spectrometers. In addition, the information from a tracking plane made of pairs of emulsion films (Changeable Sheets, CS) was used to study the association of emulsion-film segments with tracks reconstructed in the Target Tracker (TT). Fig. 2 shows a photograph of the detector in the underground Hall C of LNGS as it was during the neutrino run.
The electronic detectors
The need for adequate spatial resolution for high brick finding efficiency and for good calorimetric measurement of the events, as well as the requirement of covering large surfaces (∼ 6,000 m²), imposes strong requirements on the TT. Therefore, the cost-effective technology of scintillating strips with wavelength-shifting fiber readout was adopted.
The polystyrene scintillator strips are 6.86 m long, 10.6 mm thick and 26.3 mm wide. A groove in the center of each strip houses the 1 mm diameter fiber. Multi-anode, 64-pixel photomultipliers are placed at both ends of the fibers. A basic unit of the TT, called a module, consists of 64 strips glued together. One plane of 4 modules of horizontal strips and one of 4 modules of vertical strips form a scintillator wall providing X-Y track information. The readout electronics is based on a 32-channel ASIC [14] that outputs a charge proportional to the signal delivered by each pixel of the photomultipliers, with a dynamic range from 1 to 100 photoelectrons.
Muon identification and charge measurement are needed for the study of the muonic τ -decay channel and for the suppression of the background from the decay of charmed particles, featuring the same topology. Each muon spectrometer [15] consists of a dipolar magnet made of two iron arms for a total weight of 990 ton. The measured magnetic field intensity is 1.52 T. The two arms are interleaved with vertical, 8 m long drift-tube planes for the precise measurement of the muon-track bending. Planes of Resistive Plates Chambers (RPCs) are inserted between the iron plates of the arms, providing a coarse tracking inside the magnet, range measurement of the stopping particles and a calorimetric analysis of hadrons.
In order to measure the muon momenta and determine their sign with high accuracy, the Precision Tracker (PT) is built of thin walled aluminum tubes with 38 mm outer diameter and 8 m length [16]. Each of the ∼ 10,000 tubes has a central sense wire of 45 µm diameter. They can provide a spatial resolution better than 300 µm. Each spectrometer is equipped with six fourfold layers of tubes.
RPCs identify penetrating muons and measure their charge and momentum independently of the PT. They consist of electrode plates made of 2 mm thick plastic laminate of high resistivity painted with graphite. Induced pulses are collected on two pickup strip planes made of copper strips glued on plastic foils placed on each side of the detector. The number of individual RPCs is 924, for a total detector area of 3,080 m². The total number of digital channels is about 25,000, one for each of the 2.6 cm (vertical) and 3.5 cm (horizontal) wide strips.
In order to solve ambiguities in the track spatial reconstruction, each of the two drift-tube planes of the PT upstream of the dipole magnet is complemented by an RPC plane with two crossed strip layers at 42.6°, called XPCs. RPCs and XPCs give a precise timing signal to the PTs.
Finally, a detector made of glass RPCs is placed in front of the first Super Module, acting as a veto system for interactions occurring in the upstream rock. The veto detector was not yet operational for the August 2006 run. The PT was in the commissioning phase, with two working planes. The TT and the RPCs had already passed a full commissioning with cosmic-ray muons before the run (a).
OPERA has a low data rate from events due to neutrino interactions, well localized in time in correlation with the CNGS beam spill. The synchronization with the spill is done offline via GPS. The detector remains sensitive during the inter-spill time and runs in a trigger-less mode. Events detected out of the beam spill (cosmic-ray muons, background from environmental radioactivity, dark counts) are used for monitoring. The global DAQ is built as a standard Ethernet network whose 1,147 nodes are the Ethernet Controller Mezzanines plugged on controller boards interfaced to each sub-detector's specific front-end electronics. A general 10 ns clock synchronized with the local GPS is distributed to all mezzanines in order to attach a time stamp to each data block. The event building is performed by sorting individual sub-detector data by their time stamps.
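The timestamp-driven event building can be sketched as follows (a hypothetical, highly simplified illustration; the real DAQ merges data blocks from 1,147 nodes using the common 10 ns clock, and the gap value below is invented):

```python
from dataclasses import dataclass

@dataclass
class Hit:
    subdetector: str
    timestamp_ns: int   # stamp from the common 10 ns clock

def build_events(hit_streams, gap_ns=1000):
    """Merge per-subdetector hit streams, sort by timestamp, and group hits
    separated by less than gap_ns into one event."""
    hits = sorted((h for s in hit_streams for h in s), key=lambda h: h.timestamp_ns)
    events, current = [], []
    for h in hits:
        if current and h.timestamp_ns - current[-1].timestamp_ns > gap_ns:
            events.append(current)
            current = []
        current.append(h)
    if current:
        events.append(current)
    return events

tt  = [Hit("TT", 100), Hit("TT", 50_000)]
rpc = [Hit("RPC", 150), Hit("RPC", 50_200)]
evts = build_events([tt, rpc])
len(evts)   # → 2, each event mixing TT and RPC hits
```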
Emulsion films, bricks and related facilities
An R&D collaboration between the Fuji Company and the Nagoya group allowed the large scale production of the emulsion films needed for the experiment (more than 12 million individual films) fulfilling the requirements of uniformity of response and of production, time stability, sensitivity, schedule and cost [17]. The main peculiarity of the emulsion films used in high energy physics compared to normal photographic films is the relatively large thickness of the sensitive layers (∼ 44 µm) placed on both sides of a 205 µm thick plastic base.
A target brick consists of 56 lead plates of 1 mm thickness and 57 emulsion films. The plate material is a lead alloy with a small calcium content to improve its mechanical properties. The transverse dimensions of a brick are 12.7 × 10.2 cm 2 and the thickness along the beam direction is 7.5 cm (about 10 radiation lengths). The bricks are housed in support structures placed between consecutive TT walls.
In order to reduce the emulsion scanning load, the use of Changeable Sheets, successfully applied in the CERN CHORUS experiment [18], was extended to OPERA. CS doublets are attached to the downstream face of each brick and can be removed without opening the brick. Charged particles from a neutrino interaction in the brick cross the CS and produce a trigger in the TT scintillators. Following this trigger, the brick is extracted and the CS developed and analyzed in the scanning facility at LNGS. The information from the CS is used for a precise prediction of the position of the tracks in the most downstream films of the brick, hence guiding the so-called scan-back vertex-finding procedure.
(a) The cosmic muon flux in the LNGS Hall C, integrated over the full solid angle, is about 1 muon/m²/hour.
The hit brick finding is one of the most critical operations for the success of the experiment, since one aims at high efficiency and purity in detecting the brick containing the neutrino interaction vertex. This requires the combination of adequate precision of the TT, precise extrapolation and high track finding efficiency in the CS scanning procedure. During the neutrino run of August 2006, a successful test of the whole procedure was performed using an emulsion detector plane consisting of a matrix of 15 × 20 individual CS doublets with overall transverse dimensions of 158 × 256 cm², inserted in one of the SM2 target planes.
The construction of more than 200,000 bricks for the neutrino target is accomplished by an automatic machine, the Brick Assembly Machine, operating underground in order to minimize the number of background tracks from cosmic-rays and environmental radiation. Two Brick Manipulating Systems on the lateral sides of the detector position the bricks in the target walls and also extract those bricks containing neutrino interactions.
While running the experiment, after the analysis of their CS doublets, bricks with neutrino events are brought to the LNGS external laboratory, exposed for several hours to cosmic-ray muons for film alignment [19] and then disassembled. The films are developed with an automatic system in parallel processing chains and dispatched to the scanning labs.
The expected number of bricks extracted per running day with the full target installed and nominal CNGS intensity is about 30. The large emulsion surface to be scanned requires fast automatic microscopes continuously running at a speed of ∼ 20 cm² of film surface per hour. This requirement has been met after R&D studies conducted with two different approaches by some of the European groups of the Collaboration (ESS) [20] and by the Japanese groups (S-UTS) [21].
The first run with CNGS neutrinos
For a detailed description of the CNGS beam operation during the first run with neutrinos in August 2006 we refer to the official CNGS WEB page [12]. The commissioning of the beam started on 10 July 2006, following a series of technical tests of individual components performed from February to May. During this phase the SPS delivered 7 × 10¹⁵ p.o.t., equivalent to 1 hour of CNGS running at nominal intensity.
The first shot of the extracted proton beam onto the CNGS target was made on 11 July. A low-intensity run with neutrinos then took place from 18 to 30 August 2006, with a total integrated intensity of 7.6 × 10¹⁷ p.o.t. (Fig. 3). The beam was active for a time equivalent to about 5 days. The low intensity was partly due to the chosen SPS cycle and to the intensity of the spill, which was 55% of the nominal value during the first part of the run and 70% during the second part.
The GPS clock used to synchronize the CERN accelerators and OPERA had been fine-tuned before the start of data-taking. At CERN the current pulse of the kicker magnet used for the beam extraction from the SPS to the TT41 line was time-tagged by a GPS unit with absolute time (UTC) calibration. An analogous GPS at the LNGS site provided the UTC timing signal to OPERA. The resulting accuracy in the time synchronization between CERN and OPERA timing systems was better than 100 ns. However, during the first days of the run a time offset of 100 µs was observed due to problems in adjusting the time tagging of the kicker pulse. This offset was eventually reduced to 600 ns.
The OPERA detector started collecting neutrino interactions from the very first beam spills, with nearly all electronic detectors successfully operating. Altogether, 319 neutrino events, with an estimated 5% systematic uncertainty, were recorded by OPERA during the August run. This is consistent with the 300 events expected for the given integrated intensity of 7.6 × 10¹⁷ p.o.t.. The analysis of the CNGS data conducted at CERN and the comparison with simulations are in progress. Once completed, we expect to reach a 20% systematic error on the prediction of the number of muon events from neutrino interactions in the rock. This error is due to uncertainties in the neutrino flux prediction, in the cross-section and in the muon transport in the rock. The event analysis was performed in two ways. In the first one, the event timing information was treated as a basic selection tool, since beam events fall within a well-defined 10.5 µs window, while the uniform cosmic-ray background corresponds to 10⁻⁴ of the collected statistics (Fig. 4). The second analysis dealt with the reconstruction of track-like events disregarding timing information. Neutrino events are classified as: 1) CC neutrino interactions in the rock upstream of the detector or in the material present in the hall, leading to a penetrating muon track (Fig. 5, top-left); 2) CC and NC neutrino interactions in the target material (Fig. 5, top-right and bottom-right) and CC interactions in the iron of the spectrometers (Fig. 5, bottom-left).
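The timing-based selection of the first analysis amounts to a simple window cut (illustrative sketch; in reality the spill start times come from the GPS-tagged kicker signal at CERN, and the event times below are invented):

```python
def in_spill(event_times_us, spill_start_us, spill_length_us=10.5):
    """Select events whose GPS-synchronized time falls inside the beam-spill window."""
    lo, hi = spill_start_us, spill_start_us + spill_length_us
    return [t for t in event_times_us if lo <= t <= hi]

# spill starting at t = 0 µs; cosmic-ray events fall uniformly outside the window
events = [2.1, 7.9, 10.4, 153.0, -40.0]
in_spill(events, 0.0)   # → [2.1, 7.9, 10.4]
```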
The θ angular distribution with respect to the horizontal axis, obtained by selecting single-track events, is shown in Fig. 6. Events were selected with a minimum number of 6 layers of fired RPCs in each spectrometer. In the same Figure, the distribution of simulated cosmic-ray muons from [5] is also shown. The comparison between experimental data and Monte Carlo events proved the beam-induced nature of the muons in the peak around the horizontal direction. By counting events selected with topological criteria, we found that ∼ 10% of the events corresponding to beam spill data were missing in the CERN database. Muons produced by neutrino interactions in the rock surrounding the detector crossed the CS plane surface; 5 muon tracks predicted by the electronic detectors were found by scanning the emulsion films. The reasons for the inefficiency can be traced back to the tight cuts applied in this preliminary analysis and to the significant decrease of the fiducial volume. In fact, the dead space between adjacent emulsion films was ∼ 10% and the scanning was only performed up to 3 mm from the film edge, bringing the overall dead space to ∼ 20%. However, the test proved the capability of passing from the centimeter scale of the electronic tracker resolution to the micrometric resolution of nuclear emulsions. The angular difference between predicted and found tracks is better than 10 mrad, largely dominated by the electronic detector resolution. Fig. 7 shows the display of one of the 6 reconstructed events.
Conclusions
We reported the first detection of neutrino events from the long-baseline CERN CNGS beam with the OPERA experiment in the underground Gran Sasso laboratory. The electronic detectors of the experiment performed successfully, with an overall data-taking efficiency larger than 95% during the August 2006 run. The scintillator Target Trackers and the spectrometers equipped with RPCs made it possible to identify muon tracks from CC neutrino interactions occurring in the rock and in the material upstream of the detector, as well as in the detectors themselves.
319 neutrino-induced events were collected for an integrated intensity of 7.6 × 10¹⁷ p.o.t., in agreement with the expectation of 300 events. The reconstructed zenith-angle distribution from penetrating muon tracks is centered at 3.4° with a 10% statistical error, as expected for neutrinos originating from CERN and traveling under the earth surface to LNGS.
A test of the association between muon tracks reconstructed with the electronic detectors and with an emulsion detector plane was also successfully performed, proving the capability of passing from the centimeter scale of the electronic tracker resolution to the micrometric resolution of nuclear emulsions. The angular difference in the track association is better than 10 mrad, largely dominated by the electronic detector resolution. The success of this first OPERA run with CNGS neutrinos is the first step towards the operation of the complete detector. to non-Italian researchers. We are finally indebted to our technical collaborators for the excellent quality of their work over many years of design, prototyping and construction of the detector and of its facilities. | 2019-04-14T02:27:59.587Z | 2006-11-13T00:00:00.000 | {
"year": 2006,
"sha1": "35d3cb0c67cff78dbe1c68c68709b4de9c340c6f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/8/12/303",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "f8a63afea1b1b9e419223f0753a4376bef6efaed",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
229370851 | pes2o/s2orc | v3-fos-license | SOLUTION OF INVERSE NON-STATIONARY BOUNDARY VALUE PROBLEMS OF DIFFRACTION OF PLANE PRESSURE WAVE ON CONVEX SURFACES BASED ON ANALYTICAL SOLUTION
Research in the field of unsteady interaction of shock waves propagating in continuous media with various deformable barriers is of considerable scientific interest, since so far only a few scientific works deal with solving problems of this class, and only for the simplest special cases. In this work, on the basis of an analytical solution, we study the inverse non-stationary boundary-value problem of diffraction of a plane pressure wave on a convex surface in the form of a parabolic cylinder immersed in liquid and exposed to a plane acoustic pressure wave. The purpose of the work is to construct approximate models for the interaction of an acoustic wave in an ideal fluid with an undeformable obstacle, which may allow obtaining fundamental solutions in closed form, formulating initial-boundary value problems of the motion of elastic shells taking into account the influence of the external environment in the form of integral relationships based on the constructed fundamental solutions, and developing methods for their solution. The inverse boundary problem of determining the pressure jump (amplitude pressure) was also solved. In the inverse problem, the amplitude pressure is determined from the pressure measured in the reflected and incident waves on the surface of the body using the least squares method. The experimental technique described in this work can be used to study diffraction by complex obstacles. Such measurements can be beneficial, for example, for monitoring the results of numerical simulations.
INTRODUCTION
One of the most pressing problems of modern mechanics is the study of the unsteady interaction of shock waves propagating in continuous media with various deformable barriers. Research in this area is of considerable interest both from the point of view of developing mathematical methods for solving initial boundary-value problems of mechanics, and for a number of technical applications, in particular the calculation of thin-walled structural elements loaded by shock waves in a liquid. Here we study the inverse non-stationary boundary-value problems of diffraction of a plane pressure wave on convex surfaces immersed in liquid and exposed to acoustic shock waves. As an example, we study the diffraction of the direct pressure of a plane wave on a convex surface in the form of a parabolic cylinder. To determine the hydrodynamic pressure acting on an obstacle, we used a transition function built on the basis of the thin-layer hypothesis [1][2][3]. In this way, approximate models of the interaction of a wave in fluid with a rigid obstacle, which allow obtaining fundamental solutions in closed form, were built. The diffraction of weak shock waves in liquid was studied on the basis of approximate models [4]. In the study of various problems of continuum mechanics, two main approaches to the statement of problems naturally arise: direct and inverse [5][6][7]. Many works have been devoted to various problems of continuum mechanics, both direct and inverse [8][9][10]. In this work, we consider a method for solving the boundary inverse problem of determining the amplitude pressure. Numerous computational experiments have been carried out in which the experimental pressure values were determined from the solution of the direct problem with the addition of error.
* ov-egorova@nuos.pro
MATERIALS AND METHODS
When stating and solving diffraction problems for an unsteady plane pressure wave on a hard obstacle, the parameters of the incident wave are often not known, and it is difficult to measure them in field and bench experiments [11,12]. At the same time, the technique of measuring pressure on the surface of an obstacle is well developed [13]. A problem therefore arises: by measuring the pressure on the surface of the body, determine the parameters of the incident wave. The leading method in this research is the solution of the boundary inverse problem of determining the amplitude pressure. By measuring the pressure on the surface of the body at spatio-temporal points and using the analytical solution (least squares method), the amplitude pressure value is determined. Numerous computational experiments have been carried out in which the experimental pressure values were determined from the solution of the direct problem with the addition of error. In this case, the accuracy of the obtained values does not exceed the accuracy of the experimental data. The mathematical apparatus developed in this work consists of transition functions: fundamental solutions of the unsteady initial-boundary-value problem of diffraction of an acoustic medium on a smooth convex surface. In particular, a transition function built on the basis of the thin-layer hypothesis [14][15][16] is used. The use of transition functions provides a transition from solving the coupled non-stationary problem of the joint motion of the acoustic medium and the deformable obstacle to solving the problem only for the obstacle, whose mathematical model takes interaction with the environment into account in the form of integral relations [17]. The kernels of the integral terms of the equations of motion of the obstacle are formed on the basis of the transition functions of the diffraction problem.
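Because the acoustic problem is linear, the surface pressure scales with the incident amplitude, so the inverse problem reduces to a one-parameter least-squares fit. A Python sketch with synthetic noisy "measurements" (all numbers and the unit-amplitude response g are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearity assumption: surface pressure samples p(x_i, t_j) = p0 * g(x_i, t_j),
# where g is the unit-amplitude model response (e.g., from the transition function).
g = rng.uniform(0.5, 2.0, size=200)       # model response at 200 spatio-temporal points
p0_true = 3.7                              # "unknown" amplitude pressure
p_meas = p0_true * g + rng.normal(0.0, 0.05, size=g.size)   # noisy measurements

# one-parameter least squares: p0 = <g, p_meas> / <g, g>
p0_hat = g @ p_meas / (g @ g)
p0_hat   # ≈ 3.7
```

With hundreds of sample points the estimate is far more accurate than any single noisy measurement, matching the observation that the recovered accuracy is limited by the experimental data.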
Therefore, the dimension of the problem is reduced, which makes it possible to significantly simplify the numerical solution on the basis of the finite element or finite difference approach and, in some important particular cases, to construct analytical solutions and estimate the error introduced by the accepted hypotheses. The mathematical formulation of the direct problem has the following form [18] (Eqs. 1-3): the wave equation Δφ = (1/c₀²) ∂²φ/∂t² for the velocity potential φ in the acoustic medium, together with p = −ρ₀ ∂φ/∂t and v = ∇φ, where p is the pressure in the reflected and incident waves, v is the velocity vector of the acoustic medium, and Δ is the Laplace operator. The problem is then solved by determining the pressure at the boundary of the body in dimensionless form [19-21]. All linear dimensions are referred to the focal distance a, velocities to the speed of sound c₀ in the acoustic medium, quantities having the dimension of pressure to the complex ρ₀c₀², and time to a/c₀ (dimensionless time τ = tc₀/a) [22-24]. The pressure p₁ in the reflected wave can be found using the transition function G(x_i, τ) constructed in the framework of the thin-layer hypothesis, where an asterisk denotes the convolution operation in time τ (Eqs. 4-5). The influence function G(x_i, τ) satisfies the following initial-boundary-value problem (Eqs. 6-8), where δ(τ) is the Dirac delta function and * is the time convolution operation. The transition function G₀(ξ₁, τ) of the effect on the surface of the obstacle F is found by the operational method and has the form [2] (Eqs. 9-11) as r → ∞, where F([a], [b, c], z) is a generalized hypergeometric function.
In this case, the expressions for the pressure in the reflected and radiated waves, taking into account (Eqs. 8-10), can be presented in the form (Eq. 12):
Diffraction of a plane pressure wave on convex surfaces
Let us consider the problem of diffraction of a plane step pressure wave at a rigid motionless curvilinear obstacle [25]. A direct plane acoustic wave with its front, at the initial moment of time τ = 0, touches at a point A (Fig. 1) the surface of a parabolic cylinder with directrix G and focal distance a > 0 in the Cartesian rectangular coordinate system Ox₁x₂, which is defined as follows (Eqs. 13-14), where the characteristic linear size is taken to be a: L = a. The pressure behind the wave front in the coordinate system Ox_i (i = 1, 2) is set by the relation [6] (Eq. 15) or (Eq. 16), where p₀ is the amplitude pressure.
The principal curvature is determined by formula (16), the mean curvature takes the form k(ξ)/2, and the components of the normal vector are given by expressions (17) for the case of the plane problem (Eqs. 17-19) [26-28]. The pressure of the reflected wave is determined by the equality [6] (Eq. 20). Figure 2 shows sections of the spatio-temporal total pressure (Eq. 21), under the action of a unit pressure jump p₀ = 1, by planes τ = const.
Studying the inverse boundary-value problem to determine the pressure jump
From experimental measurements of the total pressure p(ξ, τ) at space-time points (ξ_i, τ_k), i = 1, ..., I; k = 1, ..., K, on the surface of the parabolic cylinder, it is required to determine the value of the amplitude pressure (jump) p₀. From (14) and (20) we get (Eq. 22). To determine p₀ using the least squares method, we compose the functional (Eq. 23), where p̃_ik are the experimental values of the total pressure on the surface of the parabolic cylinder. Calculating the gradient of the functional (23) with respect to the parameter p₀ and equating it to zero, taking into account (22), we obtain (Eq. 24). We then express the parameter p₀ from (24) (Eq. 25). Formula (25) lets us calculate the value of the amplitude pressure p₀ with controlled accuracy: the more experimental values we have, the higher the accuracy of determining the parameter p₀.
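Because the model total pressure (22) is linear in p₀, the minimization of the least squares functional (23) has a closed form of the type of Eq. (25). A minimal sketch of this estimator, with hypothetical unit-jump model pressures q_ik standing in for the values computed from Eq. (22):

```python
# Least-squares estimate of the amplitude pressure p0 (sketch of Eqs. 23-25).
# Assumption: q holds the model total pressures at the measurement points
# (xi_i, tau_k) for a unit jump p0 = 1, so the model is linear in p0;
# p_exp holds the measured values p~_ik. Both lists here are illustrative.

def estimate_p0(q, p_exp):
    """Minimize J(p0) = sum_ik (p0*q_ik - p~_ik)^2; the stationarity
    condition dJ/dp0 = 0 gives p0 = (sum q_ik*p~_ik) / (sum q_ik^2)."""
    num = sum(qi * pi for qi, pi in zip(q, p_exp))
    den = sum(qi * qi for qi in q)
    return num / den

# Example: noise-free data generated with p0 = 12.3 is recovered.
q = [0.4, 0.7, 1.0, 0.9, 0.6]        # unit-jump model pressures (hypothetical)
p_exp = [12.3 * qi for qi in q]      # noise-free "measurements"
print(estimate_p0(q, p_exp))         # recovers the amplitude, approx. 12.3
```

With noise-free data the exact amplitude is recovered; with noisy data the weighted averaging over all measurement points is what gives the controlled accuracy claimed for formula (25).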
Simulation using the computational experiment
To simulate the experimental values, we calculate the values of the total pressure p(ξ, τ) according to formula (22) at p₀ = 12.3 and add a random relative error in the range of 10% to 20% (Eqs. 26-28). The values p̃_ik are shown in Table 1.
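The computational experiment can be sketched as follows; the unit-jump model values q and the uniform noise model are illustrative assumptions, not the paper's exact data:

```python
import random

# Computational experiment (sketch): perturb exact model pressures with a
# uniform relative error of 10% and 20%, then recover p0 by the least
# squares formula (Eq. 25). The unit-jump model values q are hypothetical
# stand-ins for the pressures computed from Eq. (22).

def estimate_p0(q, p_exp):
    return sum(a * b for a, b in zip(q, p_exp)) / sum(a * a for a in q)

random.seed(1)                                     # reproducible noise
p0_true = 12.3
q = [0.2 + 0.05 * n for n in range(20)]            # unit-jump model pressures
for eps in (0.10, 0.20):                           # 10% and 20% relative error
    noisy = [p0_true * qi * (1 + random.uniform(-eps, eps)) for qi in q]
    p0_hat = estimate_p0(q, noisy)
    print(f"relative error {eps:.0%}: p0 estimate = {p0_hat:.3f}")
```

Because the estimate is a weighted average of the perturbed values, its relative deviation from the true p₀ is guaranteed not to exceed the relative error bound of the data, mirroring the paper's observation of controlled accuracy even at 20% noise.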
CONCLUSIONS
Consequently, the problem of diffraction of a direct plane pressure wave on a convex surface in the form of a parabolic cylinder was studied. A fundamental solution of the problem of acoustic pressure wave diffraction on a smooth convex obstacle in the form of a parabolic cylinder was constructed. An algorithm for solving the boundary inverse problem of determining the amplitude pressure was proposed. Based on the analytical solution, a calculation was made to determine the amplitude pressure. Computational experiments were performed in which the experimental values of pressure were determined from the solution of the direct problem with the addition of error.
For the inverse problem, the amplitude pressure was determined from experimental data (the measured pressure in the reflected and incident waves on the surface of the body) using the least squares method. Computational experiments demonstrated that the amplitude pressure can be determined with controlled accuracy, despite the high (up to 20%) relative error in the experimental data.
ACKNOWLEDGMENTS
The work has been conducted with the financial support of the grant of the Russian Foundation for Basic Research, project code No. 19-01-00675.
Antioxidant Activity of Selected Medicinal Plants Used by Traditional Herbal Practitioners to Treat Cancer in Malawi
This study evaluated the phytochemical composition and antioxidant activity of Piliostigma thonningii (Schumach.) Milne-Redh, Psorospermum febrifugum Spach, Inula glomerata Oliv. and Hiern, Zanthoxylum chalybeum Engl. and Monotes africanus A.DC., claimed to treat cancer by Malawian traditional herbal practitioners. Ground and dried plant extracts were analyzed for total phenolic content (TPC), total flavonoid content (TFC), total alkaloid content (TAC), ferric reducing antioxidant power (FRAP) and 2,2-diphenyl-1-picrylhydrazyl (DPPH) activity using standard assays. The TPC, TFC, and TAC ranged from 539 ± 2.70 to 4602 ± 32 mg GAE/g DW, 6.18 ± 0.03 to 64.04 ± 0.16 mg QE/g DW and 19.25 ± 0.07 to 76.05 ± 0.36 mg CE/g DW, respectively, and the variations were significant, p < 0.05. FRAP values ranged from 82.15 ± 0.7 to 687.28 ± 0.71 mg TEAC/g DW and decreased in the following order: P. thonningii (Schumach.) Milne-Redh > P. febrifugum Spach > M. africanus A.DC > Z. chalybeum Engl. > I. glomerata Oliv. and Hiern. The scavenging activity (SA50) of the extracts ranged from 0.09 ± 0.01 to 1.57 ± 0.01 µg/mL of extract, with P. thonningii (Schumach.) Milne-Redh showing the lowest value. Based on the levels of phenolic compounds and their antioxidant activity, the plants in this study could be considered for use as medicinal agents and sources of natural bioactive compounds and antioxidants.
Introduction
Natural compounds from some plants have anticancer properties with lower toxicity. These phytochemicals act as antioxidants by scavenging free radicals produced in the body. They also act as anti-inflammatory and anticancer agents by suppressing or blocking cancerous cell pathways [1]. The known anticancer phytochemicals in plants include phenolics (including flavonoids) and alkaloids [2,3]. The overproduction of free radicals (oxidants) can cause an imbalance, leading to oxidative stress, with subsequent oxidative damage to large biomolecules such as lipids, proteins, and deoxyribonucleic acids (DNA), resulting in an increased risk of cancer [2,3]. Natural antioxidants in plants are thought to inhibit free radical chain reactions in the body by preventing initiation or propagation steps, causing chain termination reactions, and thereby delaying the oxidation process [4]. Free radical species such as superoxide (O₂•−), hydroxyl (•OH), and nitric oxide (•NO) are generated in the body during normal cellular metabolism, and their normal concentration in the body is maintained [5]. At normal levels, free radicals serve useful physiological protective mechanisms. Nevertheless, when reactive oxygen species are overproduced or the antioxidant system has been compromised, oxidative stress occurs [6-9]. When in excess, these free radicals damage macromolecules such as DNA, cellular proteins, and unsaturated fatty acids, impairing the macromolecules' proper functioning and resulting in degenerative human diseases, such as cancer [5,10].
Plants contain antioxidant secondary metabolites such as phenolics, flavonoids, alkaloids and ascorbic acid [11]. These antioxidants are strong scavengers of free radicals in the body, thereby averting oxidative stress damage to cellular components [11-13]. In addition, antioxidants have both preventive and curative pharmacological activities against a wide range of diseases, including diabetes, cancer, inflammation, and dementia [14-17]. Despite the availability of many synthetic drugs used to manage oxidative stress, the high costs and adverse side effects associated with them limit their usefulness [18]. As a result, alternative nontoxic antioxidants, which are affordable, are needed to counter oxidative stress and thereby thwart the associated diseases [19]. Plants contain phenolic and alkaloid compounds that have been shown to have an array of in vitro and in vivo antioxidant effects [20,21].
Many Malawian traditional herbal practitioners (THPs) claim to know of medicinal plants with antioxidant activities, and use such medicinal plants for cancer treatment and management [22,23]. In the northern region of Malawi, especially in the Nkhata Bay and Mzimba districts, THPs use the root barks of P. thonningii (Schumach.) Milne-Redh (monkey bread), P. febrifugum Spach (Christmas berry) and I. glomerata Oliv. and Hiern (hare's ears), and the stem barks of Z. chalybeum Engl. (knobwood) and M. africanus A.DC (pink-fruited monotes), to treat and manage unhealing wounds, prostate cancer, cervical cancer, and stomach ulcers.
P. thonningii (Schumach.) Milne-Redh belongs to the Fabaceae family, and is usually a small- to medium-sized rounded tree, 3-5 m high, though it may reach 10 m in ideal conditions. P. thonningii is traditionally used for the management of inflammation, malaria, fever, rheumatism, and mental illness, among other diseases caused by a disturbed redox state in the body [5]. In addition, Alagbe [20] reported that the leaves, roots, and stem bark have been traditionally used for the treatment of chronic ulcers, diarrhea, toothache, gingivitis, cough, bronchitis, snake bites, hookworms and skin diseases. P. febrifugum Spach belongs to the Hypericaceae family. It is a shrub or small tree, 3-4 m high, occasionally reaching 7 m, occurring over a wide range of altitudes and scattered through open woodland. The stem bark of P. febrifugum Spach from Cameroon has also shown antitumor, anticancer, and antioxidant activities, while traditional medicine practitioners in Uganda use it for the treatment of skin sores in HIV/AIDS patients [24]. I. glomerata Oliv. and Hiern of the Asteraceae family is a robust perennial herb, which grows up to 1.5 m high, with basal rosette leaves showing an irregularly toothed margin. Its roots are used to treat hypertension, while its leaves are used for treating erectile dysfunction [8]. Z. chalybeum Engl. belongs to the Rutaceae family. In Uganda, Z. chalybeum Engl. is used for treating tuberculosis, malaria and sickle cell disease, and the root and stem barks are the most important sources of medicine [25]. M. africanus A.DC belongs to the Dipterocarpaceae family, and is usually a small tree of 8 m high with simple concolorous leaves. M. africanus A.DC is reported to have anti-HIV effects [26].
In Malawi, most herbal plant species are promoted as medicinal plants without scientific evidence, and little work has been done to evaluate and validate their effectiveness [27]. To the best of our knowledge, there is no scientific study on the antioxidant activities and total phenolic, flavonoid, and alkaloid contents of the five plants from the Mzimba and Nkhata Bay districts. This study was, therefore, designed to evaluate and validate the in vitro ferric reducing antioxidant power (FRAP) and 2,2-diphenyl-1-picrylhydrazyl (DPPH) antioxidant capacities, together with the total phenolic, flavonoid and alkaloid contents, of the root barks of P. thonningii (Schumach) Milne-Redh, root barks of P. febrifugum Spach, leaves of I. glomerata Oliv. and Hiern, stem barks of Z. chalybeum Engl. and leaves of M. africanus A.DC, which are used by traditional herbal practitioners to treat and manage cancer in the Mzimba and Nkhata Bay districts of northern Malawi.
Sample Preparation
Samples were washed with tap water to remove any dirt and soil, as previously described by Imad et al. [28]. Root and stem barks were cut into smaller pieces to enhance drying; leaf samples were not cut. The samples were sorted, labelled accordingly, and shed-dried for one month in the chemistry laboratory, as described by Nantongo et al. [25]. After one month of shed-drying, the samples were pulverized using a Huang Cheng Yan high-speed multifunctional mill (CGOLDENWALL), sieved through a 0.25 mm mesh (sieve number 60) and transferred into sealed bottles. The sealed bottles containing the powdered samples were placed in black plastic bags and kept in the dark until analysis.
Moisture Content
The percent moisture of the pulverized and sieved plant samples was determined using the method described by Tembo et al. [29]. Samples (2 g) were accurately weighed in triplicate, on a PW-214 AE Adams analytical balance (Isando, RSA), into labelled, preheated, desiccator-cooled, and pre-weighed porcelain crucibles with covers. The samples in the covered porcelain crucibles were then placed in a Gallenkamp Pius II hot air oven (Cambridge, UK), thermostatically controlled at 110 °C, overnight (12 h), during which the samples dried to a constant mass. The results are presented as percent moisture content.
Extraction of Phytochemicals
Extraction of phytochemicals was undertaken as described in the literature [23,30-33]. Twenty percent mass per volume (20% m/v) mixtures were prepared by weighing the pulverized plant samples (20 g) into 250 mL Quickfit Erlenmeyer flasks, followed by the addition of 80% v/v methanol (100 mL), and stoppering. The plant and methanol mixtures were then magnetically stirred (Labcon MH10, Chamdor, RSA) at moderate speed for 2 h at ambient temperature. Thereafter, the mixtures were transferred into 50 mL falcon tubes, vortexed (VarMix Vortex by SciQuip, Stuttgart, Germany) for 1 min, centrifuged (Thermo Scientific Medifuge, Karlsruhe, Germany) at 4000 revolutions per minute (rpm) for 10 min and gravity-filtered using Whatman filter paper No. 1. The residue was re-extracted with a second portion of 80% v/v methanol (100 mL), and the two filtrates were pooled in a pre-weighed Quickfit round-bottomed flask. The solvent of the crude plant extracts was evaporated in vacuo using a rotary evaporator (BUCHI R100 Labortechnik AG, Flawil, Switzerland). The semidried residue was quantitatively transferred into 100 mL plastic beakers and further dried to constant mass using a water bath (Clifton NE 2-28D by Nickel-Electro Limited, Weston-super-Mare, UK) set at 40 °C. The dried sample was weighed, transferred into sealed sample tubes, kept in a black plastic bag, and stored under refrigeration at 4 °C.
Preparation of 10 mg/mL and 1 mg/mL Stock and Working Plant Extracts, Respectively
Stock extract solutions (10 mg/mL) were prepared by weighing the dried extracts (0.1 g) into 50 mL falcon tubes, followed by the addition of 80% methanol (MeOH) (10 mL) using a lab bottle-top dispenser (Shanghai Rongtai Biochemical Company, Shanghai, China). 80% methanol efficiently extracts the phenolic compounds, some of which are more water-soluble (hydrophilic) [34]. In addition, polar phytochemicals are present as dipoles, and they interact with one another electrostatically in solid form. Polar solvents also interact with the dipolar phytochemicals, and such interactions weaken the bonds between solid phytochemicals, resulting in enhanced dissolution [35]. Moreover, 80% v/v methanol has more polar organic character, and represents a better solvent for the polar organic phytochemicals [32]. When in solution, the dipolar phytochemicals are solvated (surrounded) by the polar solvent molecules, which keeps them in solution and stops the dipolar phytochemicals from recombining [35]. The mixtures were vortexed for 1 min, centrifuged at 4000 rpm for 10 min, and gravity-filtered using Whatman No. 1 filter paper into 15 mL falcon tubes, which were then sealed. Working plant extract solutions (1 mg/mL, 10 mL) were prepared by pipetting 1 mL of the 10 mg/mL stock plant extracts into 10 mL volumetric flasks, filling to the mark with 80% v/v methanol, stoppering and homogenizing. Both the stock and working plant extract solutions were stored under refrigeration at 4 °C until subsequent analyses.
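The two-step dilution above follows the relation C₁V₁ = C₂V₂; a quick arithmetic check using the figures from the procedure:

```python
# Dilution check (sketch) for the stock and working solutions described above:
# 0.1 g of extract in 10 mL gives the 10 mg/mL stock; 1 mL of stock diluted
# to a final volume of 10 mL gives the 1 mg/mL working solution.

def diluted_conc(c1_mg_per_ml, v1_ml, v_final_ml):
    # C2 = C1 * V1 / V2
    return c1_mg_per_ml * v1_ml / v_final_ml

stock = 100 / 10                             # 0.1 g = 100 mg in 10 mL -> 10 mg/mL
working = diluted_conc(stock, 1.0, 10.0)     # 1 mL stock to 10 mL -> 1 mg/mL
print(stock, working)
```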
Determination of FRAP and DPPH Antioxidant Activities
The ferric reducing antioxidant power (FRAP) was determined as described by several researchers [25,31,36,37]. Trolox standards ranging from 0 to 100 mg/100 mL were prepared, and solutions of both the standards and the 1 mg/mL sample extracts (1 mL) were pipetted into 50 mL falcon tubes using a 1000 µL Eppendorf micropipette, followed by the addition of FRAP reagent (6 mL) using a lab bottle-top dispenser. The mixtures were vortexed for 1 min and incubated at ambient temperature for 10 min. After the 10 min incubation period, the samples were transferred into 10 mm cuvettes and their absorbance read at 593 nm using a UV-Vis spectrophotometer (Spectro 2092 PLUS, Analytical Technologies Limited, Gujarat, India). FRAP antioxidant activity was determined in triplicate and expressed as mg trolox equivalent antioxidant capacity (TEAC)/g dry weight (DW). A 20 µg/mL solution of the dried plant extract (1 mL) was prepared by diluting the 1 mg/mL crude extract (0.02 mL) with 80% v/v MeOH (0.98 mL) in 50 mL falcon tubes using a 10-1000 µL Eppendorf micropipette. The DPPH antioxidant activities of the 20 µg/mL MeOH extracts were determined using a 0.1 mM DPPH assay as described by Molyneux [38] and Masalu et al. [39], with some modifications. 0.1 mM DPPH solution (4.0 mL) was added to the extract solutions in the falcon tubes using a lab bottle-top dispenser. 80% v/v methanol (1.0 mL) and trolox (20 µg/mL, 1 mL) served as negative and positive controls, respectively, and were similarly treated with 0.1 mM DPPH solution (4.0 mL). The mixtures of both extracts and controls were then vortexed for 30 s and allowed to stand in the dark at ambient temperature for 30 min. Absorbance values of the resulting solutions were measured at 517 nm using a Spectro 2092 PLUS UV/Vis spectrophotometer.
The percent scavenging activity (%SA) was calculated as %SA = ((Ac − As)/Ac) × 100, where As is the absorbance of the sample and Ac is the absorbance of the blank (control). A standard calibration plot was used to calculate the concentration of extract that scavenges 50% of the 0.1 mM DPPH solution. SA50 is the concentration, in µg/mL, of the plant extract required to scavenge 50% of the 0.1 mM DPPH, according to Masalu et al. [39].
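The SA50 determination above can be sketched as follows, assuming a linear concentration-response calibration; the absorbances and concentrations below are hypothetical, not the study's measurements:

```python
# DPPH scavenging sketch: %SA = (Ac - As)/Ac * 100 from absorbances at 517 nm,
# then SA50 interpolated from a linear concentration-response fit.
# All readings and concentrations below are illustrative.

def scavenging_pct(a_sample, a_control):
    return (a_control - a_sample) / a_control * 100

a_control = 0.80                          # blank (negative control) absorbance
concs = [0.2, 0.4, 0.8, 1.6]              # extract concentrations, ug/mL
a_samples = [0.72, 0.64, 0.48, 0.16]      # hypothetical sample absorbances
pct = [scavenging_pct(a, a_control) for a in a_samples]

# Linear fit %SA = m*c + b, then SA50 = (50 - b) / m
n = len(concs)
mx, my = sum(concs) / n, sum(pct) / n
m = sum((c - mx) * (p - my) for c, p in zip(concs, pct)) / \
    sum((c - mx) ** 2 for c in concs)
b = my - m * mx
sa50 = (50 - b) / m                       # ug/mL needed to scavenge 50% of DPPH
print(round(sa50, 3))
```

A lower SA50 means a stronger scavenger, which is why P. thonningii, with the lowest SA50, is ranked as the most active extract in the Results.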
Total Phenolic, Flavonoid, and Alkaloid Contents
Total phenolic content (TPC) was determined using the Folin-Ciocalteu (FC) assay as previously described [23,29,31]. Gallic acid standards ranging from 0 to 100 mg/L and a blank (80% v/v methanol) were prepared. Aliquots of the standards, 1 mg/mL extracts, and blank (1 mL) were transferred into 15 mL falcon tubes using an Eppendorf micropipette, followed by the addition of 10-fold diluted FC reagent (5 mL) and 1 M sodium carbonate (4 mL) using a sample dispenser. The preparation of the samples and reagents was completed within 3-8 min, followed by vortexing for 1 min, and the mixtures were left to stand for 2 h to allow color development. The samples were then transferred into 10 mm cuvettes and their absorbance read at 765 nm using a UV-Vis spectrophotometer. The TPC analysis was done in triplicate, and the results are expressed as milligrams of gallic acid equivalents per gram of dry weight (mg GAE g−1 DW).
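The quantification step shared by the TPC, TFC, TAC and FRAP assays (a linear calibration curve of absorbance against standard concentration) can be sketched as follows; the gallic acid readings here are illustrative, not the study's data:

```python
# Calibration-curve sketch: fit absorbance vs gallic acid concentration by
# least squares, then convert a sample absorbance into a gallic-acid-
# equivalent concentration. The same pattern applies to the trolox (FRAP),
# quercetin (TFC) and caffeine (TAC) curves. All numbers are hypothetical.

def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

gallic_conc = [0, 20, 40, 60, 80, 100]               # standards, mg/L
absorbance = [0.02, 0.21, 0.40, 0.59, 0.78, 0.97]    # hypothetical A(765 nm)
slope, intercept = linear_fit(gallic_conc, absorbance)

a_sample = 0.50                                      # hypothetical sample
gae_conc = (a_sample - intercept) / slope            # mg GAE/L in the assay
print(round(gae_conc, 2))
```

Converting gae_conc to mg GAE/g DW then uses the mass of extract actually present in the assayed aliquot (here, 1 mL of a 1 mg/mL working solution contains 1 mg of dried extract) and any dilution factors.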
The total flavonoid content (TFC) was determined using the aluminum chloride colorimetric method as described by Mwamatope et al. [23] and Santos et al. [31]. Quercetin standards, 80% v/v MeOH (blank), and 1 mg/mL extracts (2 mL) were pipetted into 15 mL falcon tubes, followed by the addition of 2% aluminum(III) chloride (AlCl₃) (2 mL) using an Eppendorf micropipette. The mixtures in the falcon tubes were vortexed for 1 min and incubated at ambient conditions for 30 min. After the incubation period, their absorbance values were read at 415 nm using 10 mm cuvettes and a UV-Vis spectrophotometer. The TFC analysis was done in triplicate, and the results are expressed as milligrams of quercetin equivalents per gram of dry weight (mg QE g−1 DW).
Total alkaloid content (TAC) was estimated photometrically using the bromocresol green (BCG) method, as described by several authors [25,31,36,38]. The BCG assay is based on the formation of a yellow-colored complex from the reaction between BCG and alkaloids. Caffeine working solutions of 0-2 µg/mL were prepared from a 100 µg/mL stock solution. Dried plant extract samples (0.1000 g) were weighed using a PW-214 AE Adams analytical balance (RSA) into 15 mL falcon tubes, followed by the addition of 2 N hydrochloric acid solution (5 mL) to dissolve the sample. The mixtures were then vortexed for 2 min and centrifuged at 4000 rpm for 10 min. Volumes of each extract and of the working standards (1.0 mL) were transferred into 50 mL falcon tubes using an Eppendorf micropipette, followed by the addition of phosphate buffer (5 mL) and BCG (5 mL). The mixtures were vortexed for 1 min using a VarMix vortexer. Chloroform (CHCl₃) (5 mL) was then added to the mixtures, which were swirled to allow the yellow complex to separate into the CHCl₃ layer. After phase separation, the yellow CHCl₃ layer was pipetted into a 10 mL volumetric flask using a Pasteur pipette and filled to the mark with CHCl₃. The yellow complex solution was transferred into a 10 mm silica cuvette and the absorbance read at 450 nm against a blank (CHCl₃) using a UV-Vis spectrophotometer. The TAC analysis was done in triplicate, and the results are expressed as milligrams of caffeine equivalents per gram of dried weight (mg CE g−1 DW) of sample.
Statistical Analysis
The analyses were done in triplicate and the data are expressed as mean ± standard error of the mean (mean ± S.E.M). The data were subjected to one-way analysis of variance (ANOVA), and significance was declared at p ≤ 0.05.
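The one-way ANOVA on triplicate readings can be sketched as follows (computing the F statistic only; the group data are hypothetical, not the study's measurements):

```python
# One-way ANOVA F statistic (sketch) for comparing triplicate assay means
# across extracts: F = (between-group mean square) / (within-group mean
# square). Significance would then be read from the F distribution with
# (k-1, n-k) degrees of freedom at p <= 0.05.

def one_way_anova_F(groups):
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand = sum(sum(g) for g in groups) / n       # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w)

# Triplicate FRAP-style readings (hypothetical) for three extracts
groups = [[687.0, 687.5, 687.4], [82.1, 82.9, 82.2], [421.0, 420.3, 421.4]]
F = one_way_anova_F(groups)
print(round(F, 1))
```

With tight triplicates and widely separated group means, as here, F is very large and the between-extract differences would be declared significant, mirroring the p < 0.05 result reported in the abstract.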
Moisture Content (%)
The percent moisture contents of the pulverized and sieved plant samples ranged from 14.57% to 17.62% (Table 1).
FRAP and DPPH Antioxidant Activities of the Plants
In this study, the FRAP antioxidant activity ranged from 82.15 ± 0.7 to 687.28 ± 0.71 mg TEAC/g DW (Table 2). The FRAP antioxidant activity decreased in the following order: P. thonningii > P. febrifugum > M. africanus > Z. chalybeum > I. glomerata. The SA50 results in the current study ranged from 0.09 ± 0.01 to 1.57 ± 0.01 µg/mL of extract, while that of the positive control (trolox) was 0.05 µg/mL. The results in Figure 1 indicate the variation in TPC for the different plant samples. The TPC contents ranged from 539 ± 0 to 4602 ± 32 mg GAE/g DW.
Discussion
The moisture content of the pulverized plant samples ranged from 14.57% to 17.62%. The moisture content for P. thonningii (Schumach) Milne-Redh obtained in our study was higher than the 8.34% reported by Alagbe [20]. The variability of moisture content in the samples could be due to the uncontrolled drying associated with shed-drying [41,42]. The phytochemical yields obtained depend on the ages of plants, drying processes, extraction methods, geographical locations and soil types [20,21,23,25,43]. The yields of dried crude plant extracts were highest in P. thonningii (Schumach) Milne-Redh and P. febrifugum Spach. The 49% yield for P. thonningii (Schumach) obtained in our study was higher than the 39% reported by Moriasi et al. [5], who used pure methanol during extraction; pure methanol may not have extracted all the hydrophilic phytochemicals, as reported by Chigayo et al. [32] and Che Sulaiman et al. [34]. The 45% dried extract yield from P. febrifugum Spach was higher than the 30.8% reported by Konan et al. [24], even though Konan et al. [24] used extraction conditions similar to ours; the lower yield could therefore be due to differences in the ages of plants, drying processes, geographical locations or soil types [20,21,23,43]. Finally, the 6% yield of I. glomerata Oliv. and Hiern was lower than the 8.5% reported by Ojo et al. [8], who used 17% v/v methanol as a solvent. According to Chigayo et al. [32] and Che Sulaiman et al. [34], solvents with less than 80% v/v methanol are less polar and extract fewer of the polar organic phytochemicals, so the higher yield obtained by Ojo et al. [8] could not be due to the low (17% v/v) methanol content of the solvent. The higher yield is more likely due to differences in ages of plants, drying processes, geographical locations or soil types [20,21,23,43]. This means that factors such as the ages of plants, drying processes, extraction methods, geographical locations and soil type should be considered when using plants as herbal medicines.
The FRAP antioxidant activity results ranged from 82.15 ± 0.7 to 687.28 ± 0.71 mg TEAC/g DW, and were within the 40.00 to 31,050 mg TEAC g−1 DW range reported by Surveswaran et al. [44]. Similar observations of relatively high FRAP values in medicinal plants have been previously reported [45]. SA50 is defined as the concentration of total antioxidant necessary to reduce the initial radical concentration of DPPH by 50% [39]. The decrease in concentration is accompanied by a proportionate decrease in absorbance, as per the Beer-Lambert law. P. thonningii (Schumach) Milne-Redh had the highest value, followed by P. febrifugum Spach, in terms of both FRAP and DPPH antioxidant activities. The studied plants had relatively high FRAP values. Medicinal plants with considerably high antioxidant activity have been reported to possess various biological and pharmacological properties [6,16,18]. However, confirmatory investigations of such activities are needed for the medicinal plants under study. FRAP is a single electron transfer (SET)-based assay [46,47], while DPPH, by virtue of being a free radical, undergoes a hydrogen atom transfer (HAT) mechanism that enables the hydrogen atom to bring the electron required for the formation of a single covalent bond [48]. Therefore, of the studied plants, P. thonningii (Schumach) Milne-Redh had the highest levels of antioxidants, which can scavenge endogenous free radicals (pro-oxidants) through both SET and HAT mechanisms (Table 2). It should, however, be noted that high DPPH values could also be due to the presence of non-phenolic antioxidants, which may also quench endogenous free radicals [49]. However, Z. chalybeum Engl. and I. glomerata Oliv. and Hiern had the lowest HAT- and SET-based antioxidant levels, respectively (Table 2). The low SA50 results of the plant extracts imply that the studied plants are strong in vitro scavengers of the DPPH radical. The strong antioxidant activities could be attributed to the presence of bioactive antioxidant phytochemicals in these extracts, which work synergistically to scavenge the DPPH radicals [11]. The antioxidant activity results suggest that all five of the studied plants could potentially restore and modulate the activity of endogenous antioxidant systems. This supports the findings of earlier studies by Santos et al. [31], Zhang et al. [16] and Moriasi et al. [5]. Therefore, the root barks of P. thonningii (Schumach) Milne-Redh, root barks of P. febrifugum Spach, leaves of I. glomerata Oliv. and Hiern, stem barks of Z. chalybeum Engl. and leaves of M. africanus A.DC can attenuate the damaging effects caused by oxidative stress. However, further studies will be needed to analyze in vivo anticancer activity using cell lines and the fingerprinting of specific anticancer phytochemical properties.
Phenolic acids are derivatives of benzoic or cinnamic acids, which form hydroxybenzoic and hydroxycinnamic acids, respectively. These phytochemicals contribute significantly to the antioxidant properties of plant extracts [24], being capable of scavenging free radicals and consequently preventing diseases [50]. The results in Figure 1 indicate the variation in TPC for different types of plant samples. The observed variations could be attributed to differences in genetic composition, geographical location, environmental conditions, stage of maturity and soil type [20,21,23,43,51]. A total phenolic content of 50.2 mg GAE/g DW for P. thonningii (Schumach) Milne-Redh was reported by Alagbe [20], which is less than the 1982 ± 2 mg GAE/g DW obtained in this study. In addition, Ojo et al. [8] reported a TPC value of 0.08 mg GAE/g DW for I. glomerata Oliv. and Hiern, which is also lower than the 780 ± 4 mg GAE/g DW obtained in this study. Furthermore, Nantongo et al. [25] reported a TPC value of 1.70 mg GAE/g DW for Z. chalybeum Engl. stem bark, which is also lower than our 4602 ± 32 mg GAE/g DW. During extraction, Alagbe [20] used diethyl ether, while Ojo et al. [8] and Nantongo et al. [25] used commercial-grade methanol with no water added. The use of solvents so different from the 80% v/v methanol used in our study might have contributed to the low TPC values, as diethyl ether is less polar than 80% v/v methanol. In addition, the pure methanol used by Nantongo et al. [25] during extraction may not have extracted most of the polar organic phytochemicals, as reported by Che Sulaiman et al. [34]. However, a TPC value of 3761 mg GAE/g DW reported by Alsiede [52] was derived from a dried extract fraction obtained via a sequential extraction procedure: Alsiede initially defatted the powdered Cassia singueana samples using petroleum ether (60-80 °C), followed by sequential extraction using chloroform, ethyl acetate, and finally methanol. The TPC values obtained in our study were from crude extracts, and the fraction yields obtained from sequential extractions would be lower than those from crude extracts. Therefore, the high TPC value obtained by Alsiede [52] might have been due to differences in genetic composition, the age of plants, the drying processes, the extraction methods, the geographical location and the soil type [20,21,23,25,43].
Flavonoids have antifungal, antibacterial, and antioxidant properties [53,54]. The TFC result of 11.99 ± 0.01 mg QE/g DW for P. thonningii (Schumach) Milne-Redh root bark obtained in our study is lower than both the 35.0 mg QE/g DW reported in India by Alagbe [20] and the 52.3 mg QE/g DW reported in Burkina Faso by Sombie et al. [49]. As indicated earlier, Alagbe [20] used diethyl ether as a solvent. Unless the P. thonningii (Schumach) Milne-Redh used had moderately polar flavonoids, it is doubtful whether the less polar diethyl ether would have positively contributed to the TFC yield, because Chigayo et al. [32] reported that diethyl ether extracts usually give low yields compared to more polar extracts, such as methanol and water. The difference between our result of 11.99 ± 0.01 mg QE/g DW and the 35.0 mg QE/g DW reported by Alagbe [20] could therefore be attributed to differences in geographical location, stage of maturity, drying processes, extraction processes and soil type [20,21,23,43]. Sombie et al. [49] used an aqueous decoction with an unspecified extraction temperature. Elevated decoction temperatures of ≥60 °C and less than 80 °C are reported to maximize extraction yields [34]. Therefore, the high yield of 52.3 mg QE/g DW reported by Sombie et al. [49] could be related to the elevated decoction temperature used.
Alkaloids possess analgesic, antibacterial, and antiplasmodial properties [40]. The TAC of 19.28 ± 0.01 mg CE/g DW obtained from Z. chalybeum Engl. stem bark was higher than the 0.08 mg CE/g DW reported by Nantongo et al. [25], but lower than the 71.3 mg CE/g DW reported by Alagbe [20]. Nantongo et al. [25] used pure commercial methanol, while Alagbe [20] used diethyl ether as the solvent. The low TAC yield obtained in Nantongo's work may be attributable to the pure methanol used, since pure alkanols are not as efficient in extracting polar compounds such as alkaloids [32]. The high TAC yield obtained from the diethyl ether extract and reported by Alagbe [20] may be due to other factors, such as differences in geographical location, age of plants, drying processes and soil type [20,21,23,43,52]. This could be the case since moderately polar solvents such as diethyl ether are less efficient in extracting polar solutes such as alkaloids [32,34].
Total phenolic contents are usually higher than flavonoid contents [55]. This is expected because flavonoids are a subclass of phenolics. In most plants, the common order of secondary metabolites is phenolics > alkaloids > flavonoids [25]. Both trends were maintained in our results, as demonstrated in Figures 1 and 2. One of the factors influencing the distribution of phytochemicals within a plant is environmental conditions. The Z. chalybeum Engl. was harvested from an anthill within a thick forest. The abundance of total phenolics was the highest in Z. chalybeum Engl., while it showed the lowest levels of alkaloids. Differences in metabolite abundance have been detected among and within species, primarily due to genetic factors, environmental effects and their interaction [25,[56][57][58]. Changing growth conditions, especially nitrogen (N) availability, have been shown to affect phenolic concentrations in plant tissues. Specifically, N deficiency or limitation leads to phenolic accumulation in different plant parts, such as stems and roots [57,58]. The comparatively higher levels of constitutive secondary metabolites observed in Z. chalybeum Engl. may also reflect the levels of biotic and abiotic stress it experiences [59]. These stresses are typical of the natural forests where the Z. chalybeum Engl. samples were collected.
Conclusions
Root barks of P. thonningii (Schumach) Milne-Redhead, root barks of P. febrifugum Spach, leaves of I. glomerata Oliv. and Hiern, stem barks of Z. chalybeum Engl. and leaves of M. africanus A. DC. had strong FRAP and DPPH antioxidant activities. In addition, the same plants contained phenolics, including flavonoids, as well as alkaloids, suggesting that they could play an important role in preventing and managing many health problems, such as cancer, cardiovascular diseases, diabetes, and obesity. These plants should, however, be further analyzed for their in vitro anticancer activity using cell lines. In addition, a further study leading to the fingerprinting of specific anticancer phytochemicals is recommended.
n = 3; values with different superscripts are significantly different (p < 0.05).
Figure 1 .
Figure 1.Total phenolic contents (mg GAE g −1 DW) of medicinal plants.Mean values that do not share a letter indicate significant differences (p < 0.05).
Figure 2 .
Figure 2. Total flavonoids content (mg QE g −1 DW), and total alkaloids (mg CE g −1 DW), of medicinal plants.Mean values (capital letters for TFC and small letters for TAC) that do not share a letter indicate significant differences (p < 0.05).
Table 1 .
Percent yield of crude plant extracts of the five plants.
Table 2 .
Antioxidant activities of the medicinal plant species.
The Importance of Monitoring at Irrigation Areas and GIS Applications: Water Table in Particular
Monitoring studies are indispensable for ensuring the sustainability of irrigation: they create awareness of environmental impacts, enable intervention and the taking of measures where necessary, and support the efficient use of water in irrigation. Furthermore, monitoring and evaluation studies are very important in providing a basis for scientific research on scenarios of climate change, drought, and sea level rise in coastal areas, by supplying data for numerical models of irrigation operations. One of the most important activities of the irrigation operation phase is monitoring the water table. The objective of this study is to assess the change of water table depth and groundwater salinity for the years 2003, 2008 and 2013. The data obtained from water table monitoring in an operating irrigation project are evaluated by using Geographic Information Systems, which provide efficient and rapid assessment, and recommendations are presented.
Introduction
Water resources are indispensable for the sustainability of life, and the need to ensure food security increases with rapid population growth in the world. Reliable and efficient planning and management of water resources therefore have great importance. Monitoring activities, carried out after irrigation projects built at high investment cost are put into operation, are essential for achieving the expected performance and sustainability of these projects. In many countries, the main reason why the expected benefits of irrigation projects cannot be achieved is the failure to monitor and assess them regularly, which would provide a clear and explicit picture of project performance [1]. Monitoring, which is part of project management, provides a backward flow of information to project managers and operators at all levels and, through assessments against targets, supports a learning and problem-solving process for carrying out the project effectively and efficiently [2]. An information system for the monitoring and evaluation of an irrigation project includes four parts: analysis of water use efficiency, agricultural activities, environmental problems, and the socioeconomic situation [3][4]. For sustainable irrigation that increases the amount of product obtained per unit area, potential environmental impacts must be kept under control, measures must be taken, and practices altered if necessary. Monitoring and evaluation of the spatial distribution of water table depth and groundwater salinity in irrigation projects in the operating phase have great importance in terms of water management and environmental impact [5][6]. The water table situation can best be viewed and analysed by drawing water table maps, which are surface maps created by combining the observation values of water table wells marked as points on topographic maps [7]. In large areas, monitoring groundwater flow, water table depth, and parameters such as hydraulic gradient and salinity by traditional methods requires considerable labor and time; determining spatial and time-dependent changes by using Geographic Information Systems (GIS) ensures more efficient and faster evaluation [8]. GIS provides probabilistic techniques for determining and estimating the values of surface patterns over measurement areas [9]. In this study, it is aimed to evaluate the spatial and temporal changes in water table depth and quality in the Hatay-Yarseli Irrigation Project area, located on the Amik Plain, in the 2003, 2008 and 2013 water years, by preparing EC maps and water table depth maps using GIS.
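The point-to-surface estimation that GIS packages perform can be illustrated with a minimal inverse-distance-weighting (IDW) sketch in Python. The well coordinates and depths below are hypothetical, and a real study would use the GIS software's own interpolators (IDW, kriging, etc.):

```python
import numpy as np

def idw_interpolate(xy_wells, values, xy_grid, power=2.0):
    """Inverse-distance-weighted (IDW) estimate of water table depth at grid points.

    xy_wells: (n, 2) well coordinates; values: (n,) observed depths (cm);
    xy_grid:  (m, 2) points at which the depth surface is estimated.
    """
    # pairwise distances between every grid point and every well
    d = np.linalg.norm(xy_grid[:, None, :] - xy_wells[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero exactly at a well
    w = 1.0 / d ** power             # nearer wells receive larger weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical wells (coordinates in m) and measured depths (cm below surface)
wells = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
depths = np.array([40.0, 120.0, 80.0])
grid = np.array([[50.0, 50.0]])      # estimate depth at one unsampled point
est = idw_interpolate(wells, depths, grid)
```

Because IDW produces a convex combination of the observed values, every estimate stays within the range of the well measurements; kriging would additionally model the spatial correlation between wells.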
Material and Method
The Hatay Yarseli Irrigation Project is located in the Asi River Basin in Hatay, in the southern part of Turkey. The location of the project in Turkey is shown in Figure 1. The water resource of the Yarseli Irrigation Project is the Yarseli Reservoir; the scheme has a gross area of 7300 ha and a net irrigation area of 6800 ha. Yarseli Dam was built for irrigation purposes on the Beyazçay stream in Hatay in the Asi River Basin. The dam, irrigation channels, pump stations and irrigation plants are planned within the Hatay Yarseli Irrigation Project. The important water resources of the project area are the Asi River and the Beyazçay stream. The project area has Mediterranean climate features: the winters are mild and rainy, and the summers hot and dry. The slope varies between 1-5% in the irrigation area. No important drainage problem was determined in the assessment of field studies within the Hatay-Yarseli Project planning studies, but the water table was found to be at 100 cm over 213 hectares, represented by two water table wells located on the coastal side of the Asi River. The crop pattern was identified as cotton, wheat, vegetables, rice, potatoes and maize in the planning studies [10]. Groundwater salinity is evaluated using the electrical conductivity (EC ×10⁶ at 25 °C) of water samples taken from the monitoring wells according to the analysis reports. Water table maps and numerical analyses are prepared by using Geographic Information System (GIS) technology; ArcGIS software is used in the GIS studies. The locations and data records of 35 observation wells are associated with the Hatay-Yarseli Irrigation Project area by using GIS technology to evaluate the water table, and the spatial and temporal changes over five-year periods are discussed. In this context, the geographic analysis of the water table depth and salinity data set is evaluated and the results are investigated.
Groundwater salinity (EC)
The quality of the water table is an important indicator in determining the drainage problem. It is also necessary to determine the salinity of the water table with respect to the salinity tolerance of crops in cases where the water table rises into the root zone. The EC value is between 0-2000 micromhos/cm over the total project area in 2003 and in 2008. In 2013, the EC value is over 2000 micromhos/cm in 4.2% of the total project area (Figure 2a, b, c).
Critical highest depth map of water table
The area where the water table is between 0-2 m on this map, which indicates the highest level the water table reaches in a year, has the most extensive drainage problems. The critical highest depth maps of the water table shown in Figures 3a-c, prepared by using GIS, are evaluated as follows.

In 2003, the water table depth was 0-50 cm in 96% (6504.6 ha) of the analyzed irrigation area, 50-100 cm in 4% (242.2 ha), and 100-150 cm in 0.1% (8 ha). No area on the critical highest depth map had a water table deeper than 150 cm (Figure 3a).

In 2008, the water table depth was 0-50 cm in 64% (4348.6 ha), 50-100 cm in 28% (1906.2 ha), 100-150 cm in 5% (339.7 ha), 150-200 cm in 2% (121.7 ha), and 200-300 cm in 1% (38.6 ha) of the analyzed irrigation area (Figure 3b).

In 2013, the water table depth was 0-50 cm in 99% (6680.2 ha) and 50-100 cm in 1% (74.6 ha) of the analyzed irrigation area. No area on the critical highest depth map had a water table deeper than 100 cm (Figure 3c).
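The per-class area percentages reported above can be derived from an interpolated depth raster by simple binning. Below is a sketch in Python with a hypothetical toy raster and an assumed uniform cell area; the class edges follow the depth classes used in this study:

```python
import numpy as np

def depth_class_areas(depth_cm, cell_area_ha, bins=(0, 50, 100, 150, 200, 300)):
    """Hectares and percentage of the irrigated area falling in each depth class."""
    counts, _ = np.histogram(depth_cm, bins=bins)
    areas_ha = counts * cell_area_ha
    pct = 100.0 * areas_ha / areas_ha.sum()
    return areas_ha, pct

# Toy raster: four 1-ha cells with interpolated water table depths in cm
grid = np.array([20.0, 40.0, 70.0, 120.0])
areas, pct = depth_class_areas(grid, cell_area_ha=1.0)
# classes: 0-50, 50-100, 100-150, 150-200, 200-300 cm
```

In practice the raster would come from the GIS interpolation of the 35 observation wells, and the cell area from the raster resolution.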
Water table depth map in August

The map for the most intensive month of irrigation is drawn to determine how the water table is affected by irrigation. In the Yarseli Irrigation Project this is August (Figure 4a, b, c).

In 2003, the water table depth in August was 0-50 cm in 52% of the analyzed irrigation area, 150-200 cm in 11% (724.2 ha), and 200-300 cm in 17% (1144.7 ha) (Figure 4a).

In 2008, the water table depth was 0-50 cm in 52% (3549.4 ha), 50-100 cm in 10% (671.5 ha), and 100-150 cm in 10% (665.4 ha) of the analyzed irrigation area (Figure 4b).

In 2013, the water table depth was 0-50 cm in 58% (3892.7 ha), 50-100 cm in 15% (997.9 ha), 100-150 cm in 14% (951.1 ha), 150-200 cm in 8% (572.8 ha), and 200-300 cm in 5% (340.3 ha) of the analyzed irrigation area (Figure 4c).

Critical lowest depth map of water table

The map drawn by using the lowest water table depth value of each observation well according to the annual measurement results indicates the maximum fall of the water table level in a year (Figure 5a, b, c). On this map, the areas where the water table is between 0-1 m are those where the water table remains within the root zone throughout the year. These are also the farm areas that require implementation of subsurface drainage methods.

In 2003, the water table depth was 0-50 cm in 51% (3433.1 ha), 50-100 cm in 10% (676.9 ha), 100-150 cm in 10% (680.9 ha), 150-200 cm in 13% (859.3 ha), and 200-300 cm in 16% (1073.7 ha) of the analyzed irrigation area (Figure 5a).

In 2008, the water table depth was 0-50 cm in 50% (3347.3 ha), 50-100 cm in 9% (608.3 ha), 100-150 cm in 9% (629.5 ha), 150-200 cm in 10% (657.4 ha), and 200-300 cm in 22% (1510.4 ha) of the analyzed irrigation area (Figure 5b).

In 2013, the water table depth was 0-50 cm in 52% (3535.8 ha), 50-100 cm in 10% (638.6 ha), 100-150 cm in 9% (614.1 ha), 150-200 cm in 12% (818.3 ha), and 200-300 cm in 17% (1137.4 ha) of the analyzed irrigation area (Figure 5c).

Conclusions and recommendations

By using Geographic Information Systems (GIS) in the evaluation of monitoring activities, the reliability of the results increases, and the data obtained from the observation points are easily mapped. Long-term storage and evaluation of the data and easy accessibility, which save time and labor, are further advantages of GIS.

Preparing water table maps with GIS makes it possible to evaluate the spatial and temporal distribution of water table depth and groundwater salinity in irrigation areas in the operational phase and to examine the two criteria together. Both maintenance and repair activities and the measures taken for drainage are performed according to the results of this evaluation.

In this study, the water table monitoring data of the Yarseli Irrigation Project were used to evaluate the change of water table depth and groundwater salinity at five-year intervals, in 2003, 2008 and 2013. In 2003, the water table depth over the whole irrigation area is less than 2 m according to the critical highest water table map, and in August the water table is within 50 cm of the soil surface in 52% of the irrigation area. These results indicate drainage problems in the irrigation project. The lowest water table depth is between 0-1 m in 61% of the irrigated area according to the critical lowest water table depth map, which shows that the farm drainage system largely does not function or that maintenance and cleaning works were not sufficient. Groundwater salinity is less than 2000 micromhos/cm, which relaxes possible threats to crops and soil structure.

In 2008, the water table depth in 99% of the irrigation area is less than 2 m according to the critical highest water table map. In August, as in 2003, the water table is within 50 cm of the soil surface in 52% of the irrigation area. The lowest water table depth is between 0-1 m in 59% of the irrigated area according to the critical lowest water table depth map. Groundwater salinity did not cause any problem. Where the water table is not saline, the purpose of drainage is only the removal of drainage water from the root zone, and under these conditions the drainage water can be used for irrigation.

In 2013, the water table depth over the whole irrigation area is less than 2 m according to the critical highest water table map, and the water table is within 50 cm of the soil surface in 58% of the irrigation area according to the water table map of August. The lowest water table depth is between 0-1 m in 62% of the irrigated area according to the critical lowest water table depth map. Groundwater salinity greater than 2000 micromhos/cm was determined in 4.2% of the analyzed area. Compared to the earlier years, the 2013 critical highest and lowest water table depth maps show an increase in the area where the groundwater level remains high, and this increase is also seen in the salinity maps. According to the water table depth map in August, which indicates how the water table level is affected by irrigation, the area where the water table depth is between 0-50 cm increased by 6 percentage points compared to both 2003 and 2008.

Fig. 2. Water table salinity maps in the years 2003(a), 2008(b) and 2013(c) respectively at the irrigation area

Fig. 3. Critical highest depth maps of water table in the years 2003(a), 2008(b) and 2013(c) respectively at the irrigation area

Fig. 4. Water table depth maps in August in the years 2003(a), 2008(b) and 2013(c) respectively at the irrigation area

Fig. 5. Critical lowest depth maps of water table in the years 2003(a), 2008(b) and 2013(c) respectively at the irrigation area
Chloroform Extract of Rasagenthi Mezhugu, a Siddha Formulation, as an Evidence-Based Complementary and Alternative Medicine for HPV-Positive Cervical Cancers
Rasagenthi Mezhugu (RGM) is a herbomineral formulation in the Siddha system of traditional medicine and is prescribed in the southern parts of India as a remedy for all kinds of cancers. However, scientific evidence for its therapeutic efficacy in cervical cancer is lacking, and it contains heavy metals. To overcome these limitations, RGM was extracted, and the fractions were tested on HPV-positive cervical cancer cells, ME-180 and SiHa. The extracts, free from the toxic heavy metals, affected the viability of both the cells. The chloroform fraction (cRGM) induced DNA damage and apoptosis. Mitochondria-mediated apoptosis was indicated. Though both the cells responded to the treatment, ME-180 was more responsive. Thus, this study brings up scientific evidence for the efficacy of RGM against the HPV-mediated cervical cancer cells and, if the toxic heavy metals are the limitation in its use, cRGM would be a suitable candidate as evidence-based complementary and alternative medicine for HPV-positive cervical cancers.
Introduction
Cancer is one of the major public health problems worldwide and accounts for an estimated 2.5 million cases in India alone [1]. In the wake of resistance to chemotherapy and the escalating toxic effects of synthetic drugs/compounds, all possible avenues are being explored to develop new and novel anticancer drugs that will overcome these limitations. One of the avenues is phytotherapy, which is a recognized complementary and alternative (CAM) therapeutic modality [2]. Many cancer patients, who are already crippled with this disease, and further burdened by drug-induced toxic side effects, now turn to complementary and alternative medicines hoping for a better cure or at least palliation [3]. Herbalism is a common medical practice since time immemorial. More than 60% of the approved drugs are derived from nature, and most of these discoveries were led from traditional herbal medicines [4]. The Indian traditional systems of medicine and folk medicines make use of thousands of plant-based formulations [5]. The principle underlying the use of more than one plant/plant product in these formulations is that they may produce synergistic and/or additive effects, or one may neutralize the toxic effect of another, which is otherwise therapeutic in the given context [6].
Siddha is one among the three popular Indian traditional medicinal systems, the other two being Ayurveda and Unani. Siddha medicine formulations are mostly polyherbal, but may also include metals, chemicals, and/or animal products. The common Siddha preparations are Bhasma (calcined metals and minerals), Churna (powders), Kashaya (decoctions), Lehya (confections), Ghrita (ghee), Taila (oil), and Mezhugu (wax). Rasagenthi Mezhugu (RGM), a Siddha medicine, is a formulation containing 38 different botanicals 2 Evidence-Based Complementary and Alternative Medicine and 8 inorganic substances, some of which are heavy metals [7]. Siddha practitioners prescribe RGM as a therapy for different cancers [7]. However, scientific evidence for the therapeutic efficacy of RGM in cancer is far too limited. This is in view of the fact that the complexity of the formulation does not facilitate investigations in vitro. Also, the heavy metals in RGM, mercury, lead, and arsenic, are toxic [8]. To overcome both these limitations, a modality was developed whereupon RGM was extracted in solvents of increasing polarity, and the extracts were tested on cancer cell lines [7,9]. The extracts were found to be free from the toxic heavy metals and amenable for in vitro testing. Thus, chloroform fraction of RGM was shown to be cytotoxic to prostate cancer cell PC3 [7] and lung cancer cells A-549 and H-460 [9], and in both cases, the cells succumbed to death by apoptosis.
Cervical cancer is one of the serious health problems in women [10]. In India alone, more than 70,000 new cases of cervical cancer are reported every year [1]. Most of the cervical cancers are caused by HPV infection and integration of HPV genome into the host cell's genome [11]. Thus, these cervical cancers being etiologically different, it would be pertinent to find if prescription of RGM to cervical cancer patients can have a scientific backing. Therefore, we carried out this study to test the most efficacious extract of RGM, the chloroform extract, which is free from heavy metals [7,9], on HPV-mediated cervical cancer cell lines, ME-180 and SiHa. In doing so, we focused on apoptosis as the end point, since in these cervical cancer cells, the cell cycle progression and apoptosis cascade are deregulated [12].
Extraction of RGM.
The extraction procedure also has been previously described [7]. Briefly, RGM was extracted with methanol (MeOH) using Soxhlet apparatus. The MeOH phase was evaporated under reduced pressure to obtain a dark brown residue. This residue was suspended in water and extracted with four organic solvents with increasing polarity, namely, n-hexane, CHCl 3 , EtOAc, and n-BuOH, and the final residue was extracted in water. All five extracts were condensed into powder/paste under reduced pressure using a rotary evaporator (Buchi Labortechnik AG, Flawil, Switzerland).
Cell Culture.
Human cervical cancer cells ME-180 and SiHa were obtained from National Center for Cell Science (NCCS), Pune, India. The cells were maintained in DMEM medium supplemented with 10% FBS (Sigma-Aldrich, St. Louis, Mo, USA), and with 100 U/mL penicillin and 100 μg/mL streptomycin as antibiotics (Himedia, Mumbai, India) in a humidified atmosphere of 5% CO 2 and 95% air in a CO 2 incubator (Heraeus, Hanau, Germany).
Cell Viability Assay.
All five RGM extracts, in the concentration range of 0-500 μg/mL, dissolved in DMSO (Sigma-Aldrich), were added to the wells 24 h after seeding of 5 × 10³ cells per well of a 96-well plate. DMSO was used as the solvent control. After 24 and 48 h of incubation, 20 μL of MTT solution (5 mg/mL in phosphate-buffered saline (PBS)) was added to each well, and the plates were wrapped with aluminum foil and incubated for 4 h at 37 °C. The purple formazan product was dissolved by addition of 100 μL of 100% DMSO to each well. The absorbance was monitored at 570 nm (measurement) and 630 nm (reference) using a 96-well plate reader (Bio-Rad, Hercules, Calif, USA). Data were collected for four replicates each and used to calculate the means and the standard deviations. The percentage inhibition was calculated from these data using the following formula: Percentage inhibition = [(mean OD of untreated cells (control) − mean OD of treated cells) / mean OD of untreated cells (control)] × 100.
From the values thus obtained, the IC 50 for the respective extracts, and for the respective durations of treatment, that is, 24 and 48 h, was deduced from the curves obtained by plotting percentage inhibition against concentration. Since the MTT test indicated that the chloroform extract of RGM was the most efficacious among the five extracts, and it affected the viability of the cells at concentrations very low compared to the others, subsequent studies were limited to this extract (cRGM).
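The percentage-inhibition formula and the graphical deduction of IC50 described above can be sketched as follows. The optical-density readings are hypothetical, and the linear interpolation stands in for reading the value off the plotted dose-response curve:

```python
import numpy as np

def percent_inhibition(od_control, od_treated):
    """MTT percentage inhibition relative to the untreated control."""
    return (od_control - od_treated) / od_control * 100.0

def ic50(concentrations, inhibitions):
    """Concentration giving 50 % inhibition, by linear interpolation.
    Assumes inhibition is (roughly) monotonically increasing with dose."""
    return float(np.interp(50.0, inhibitions, concentrations))

# Hypothetical mean ODs for one extract at four doses (µg/mL)
conc = np.array([0.0, 25.0, 50.0, 100.0])
od_treated = np.array([0.80, 0.60, 0.40, 0.16])
inh = percent_inhibition(0.80, od_treated)   # control mean OD assumed 0.80
half_maximal = ic50(conc, inh)
```

A sigmoid (e.g. four-parameter logistic) fit would be the usual refinement when more dose points are available.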
Hoechst 33258 Staining.
The cervical cancer cells ME-180 and SiHa were cultured in 6-well plates and treated with the IC 50 concentration of cRGM. After 24 and 48 h of incubation, the treated and untreated cells were harvested and stained with Hoechst 33258 (1 mg/mL, aqueous) for 5 min at room temperature. A drop of cell suspension was placed on a glass slide, and a cover slip was laid over to reduce light diffraction. Three hundred cells at random, in duplicate, were observed at ×400 in a fluorescent microscope (Carl Zeiss, Jena, Germany) fitted with a 377-355 nm filter, and the percentage of cells reflecting pathological changes was calculated.
Acridine Orange (AO) and Ethidium Bromide (EB) Fluorescent Assay for Cell Death.
Acridine orange (AO) and ethidium bromide (EB) staining was performed as described by Spector et al. [13]. The cells were cultured in 6-well plates and treated with the IC 50 concentration of cRGM for 24 and 48 h. The treated and untreated cells (25 μL of suspension containing 5 × 10⁵ cells) were incubated with acridine orange and ethidium bromide solution (1 part of 100 μg/mL acridine orange and 1 part of 100 μg/mL ethidium bromide in PBS) and examined in the fluorescent microscope using a UV filter (450-490 nm). Three hundred cells per sample were counted, in duplicate, for each time point (24, 48 h). The cells were scored as viable or dead, and if dead, whether by apoptosis or necrosis as judged from nuclear morphology and cytoplasmic organization. The percentages of apoptotic and necrotic cells were then calculated. Morphological features of interest were photographed.
Single-Cell Gel Electrophoresis (Comet Assay)
DNA damage was detected by adopting the comet assay [14]. Treated (IC 50 concentration; 24 and 48 h treatment) and control cells were suspended in low-melting-point agarose in PBS and pipetted on to microscope slides precoated with a layer of normal-melting-point agarose. The slides were chilled on ice for 10 min and then immersed in lysis solution (2.5 M NaCl, 100 × 10⁻³ M Na2EDTA, 10 × 10⁻³ M Tris, 0.2 × 10⁻³ M NaOH, pH 10.01, and Triton X-100), and the solution was kept overnight at 4 °C in order to lyse the cells and to permit DNA unfolding. The slides were then exposed to alkaline buffer (300 × 10⁻³ M NaOH, 1 × 10⁻³ M Na2EDTA, pH > 13) for 20 min to allow DNA unwinding. The slides were washed with buffer (0.4 M Tris, pH 7.5) to neutralize excess alkali and to remove detergents, before staining with EB. Photomicrographs were obtained using the fluorescent microscope. One hundred cells, in duplicate, from each treatment group were digitalized and analyzed using the Comet Assay Software Program (CASP). The images were used to estimate the DNA content of individual nuclei and to evaluate the degree of DNA damage, represented as the fraction of total DNA in the tail.
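The "fraction of total DNA in the tail" that CASP reports reduces to a ratio of integrated fluorescence intensities. A minimal sketch, with hypothetical intensity values:

```python
def tail_dna_percent(head_intensity, tail_intensity):
    """Percentage of a comet's total DNA fluorescence located in the tail."""
    total = head_intensity + tail_intensity
    return 100.0 * tail_intensity / total

# Hypothetical integrated intensities from one digitized comet image
damage = tail_dna_percent(head_intensity=80.0, tail_intensity=20.0)
```

Larger tail percentages correspond to greater DNA fragmentation in the scored cell.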
Assay for Mitochondrial Transmembrane Potential.
Mitochondrial transmembrane potential was assessed using the fluorescent probe JC-1, which produces green fluorescence in the cytoplasm and red-orange fluorescence when accumulated in healthy mitochondria. If the mitochondrial membrane potential is affected, JC-1 remains confined to the cytoplasm, and the whole cell fluoresces green. The cells were grown in six-well plates and treated with the IC 50 concentration of cRGM. After 12 and 24 h of exposure, the cells were stained for 30 min with JC-1 (2 μg/mL) in the culture medium. The adherent cell layer was then washed with PBS and lifted using 250 μL of trypsin-EDTA. The cells were collected in PBS, washed by centrifugation, resuspended in 0.3 mL of PBS, mixed gently, and examined in the fluorescent microscope using a UV filter (450-490 nm). The specific fluorescent patterns were indicative of intact (red fluorescence) or loss (green fluorescence) of mitochondrial transmembrane potential (ΔΨm).
Annexin V-Cy3 Apoptosis Assay.
Phosphatidylserine translocation from inner to outer leaflet of the plasma membrane is one of the early features of apoptosis. Cell surface phosphatidylserine was detected using phosphatidylserinebinding protein annexin V conjugated with Cy3 using the commercially available annexin V-Cy3 apoptosis detection kit (APOAC, Apoptosis Detection Kit, Sigma, Calif, USA). The cells were treated with IC 50 concentration of cRGM. After 12 and 24 h incubation, the cells were harvested, centrifuged, and pellets were collected. The cell pellet was washed with PBS and then with 1x binding buffer. The washed cell pellet was suspended in 50 μL of doublelabel staining solution (Ann-Cy3 and 6-CFDA) and kept in dark for 10 min. After the incubation, the excess label was removed by washing the cells with 1x binding buffer. The annexin-Cy3 and 6-CFDA-labelled cells were observed in the fluorescent microscope. 300 cells at random were observed. This assay facilitated detection of live cells (green), necrotic cells (red), and apoptotic cells (red nuclei and green cytoplasm). The percentage of cells reflecting cell death (apoptotic and necrotic, separately) was calculated. Data were collected from two individual experiments, each in duplicate, and used to calculate the respective means and the standard deviations.
Statistics.
Numerical data are expressed as mean ± standard deviation (SD). Statistical differences were evaluated by one-way analysis of variance (ANOVA) using the Statistical Package for the Social Sciences (SPSS) software for Windows, Version 11.5 (SPSS Inc., Chicago, IL, USA). A post hoc test was performed for comparisons using the least significant difference (LSD) test. Differences were considered statistically significant when P < 0.05.
Effect of cRGM on Viability of Cells as Revealed in MTT
Assay. The MTT assay determines the integrity of mitochondria and reflects the viability or otherwise of the cells. The results of the MTT assay showed that although all extracts of RGM other than the water extract inhibited proliferation of both SiHa and ME-180 cervical cancer cells in a time- and dose-dependent manner, cRGM was the most efficacious, since it affected the viability of the cells at a concentration many times lower than the others (Table 1). Of the two cell types subjected to the test, ME-180 was more responsive than SiHa. Therefore, the rest of the study was limited to cRGM.
Changes in Nucleus and Chromatin as Revealed in Hoechst
Staining. Hoechst 33258 staining showed that there were significant changes in the chromatin of treated cells. In the untreated cells, the nuclei were round, even, and homogeneous, and the chromatin was intact. After treatment with cRGM for 24 and 48 h, the intensity of the blue fluorescence emitted by the treated cells was much brighter than that of the control cells, and changes in the chromatin such as condensation, marginalization, and fragmentation were observed (Figure 1). The nuclei were found to be abnormal in 31% and 54% of cRGM-treated SiHa cells in the 24 and 48 h treatment groups, respectively. In the case of ME-180 cells, the impact was much higher. Though both SiHa and ME-180 cells responded with a higher incidence of apoptosis than necrosis, the incidence of necrosis was higher in SiHa than in ME-180 cells (Figure 4).
DNA Damage as Revealed in Single Cell Gel Electrophoresis.
In order to find out whether the treatment brings about DNA damage, which is an early event in apoptosis, single cell gel electrophoresis (Comet assay) was conducted. After staining with ethidium bromide and observation under the fluorescence microscope (Figure 5), the cells were scored as dead, highly damaged, damaged, slightly damaged, or intact, and histograms were prepared using the Comet Analysis Software (CASP) (Figure 6). The chromatin content in the nuclear head, the length of the comet tail, and other comet parameters were recorded for 100 individual cells, and concurrent comparative data were generated. Though the treatment caused DNA damage to both cell types, the response was higher in ME-180 cells than in SiHa cells.
Annexin V-Cy3 Assay.
A well-established early event in apoptosis is the externalization of phosphatidylserine (PS) from the inner to the outer leaflet of the plasma membrane. The results obtained with the annexin V binding assay of control and treated cells are presented in Figure 8. Treatment of SiHa cells with cRGM caused 25 and 33% of the cells to succumb to apoptosis during 12 and 24 h treatment, respectively. In the case of ME-180 cells, the corresponding values were higher, 43 and 54%, respectively (Figure 9). In both cell types, a small percentage of cells showed signs of necrosis, and the incidence was higher in SiHa than in ME-180 cells.
Discussion
Since ancient times, plant-based formulations have been used as remedies against diverse ailments [15]. Over the past two decades, interest in traditional medicines has increased considerably in many parts of the world [16]. The Indian systems of medicine in general, and Ayurveda and Siddha in particular, which originated several centuries ago, are holistic approaches to healthcare, and RGM is one of the few commonly prescribed medicines for cancer in the Siddha system. The aim of this study was to find out whether the chloroform extract of RGM, which is free from the toxic heavy metal ingredients (removed during the extraction process), amenable to in vitro testing, and already shown to be cytotoxic to PC3, A-549, and H-460 cancer cells, would be cytotoxic to HPV-positive cervical cancer cells and, if so, to infer the possible mechanism of action. The outcome of the cytotoxicity assay in this study clearly shows that cRGM is cytotoxic to both HPV-positive cervical cancer cell lines and produces the effect at very low doses compared to the other extracts, as has been the case with the prostate [7] and lung [9] cancer cells. An earlier study made a preliminary HPLC analysis of cRGM and found about 40-50 compounds in it [7]. Such heterogeneity provides for the possibility of synergistic and/or additive interactions between the compounds, which derive from the different herbal ingredients. Synergism, particularly, is important because it allows lower and safer doses of each compound. Most direct-acting natural compounds, if used alone, would require excessive and unsafe doses to inhibit cancer [17]. The data obtained in this study strongly suggest that, when used in combination, natural compounds can potentially produce synergistic effects in vitro.
Natural compounds can be divided into three groups: those that inhibit cancer cell proliferation directly, those that act by indirect means to inhibit cancer progression, and those that stimulate the immune system [17]. There is evidence in the scientific literature that the herbals in RGM possess properties such as anticancer, antioxidant, detoxification, and immune modulation activity. Specifically, the following herbals possess one or more of these properties: Acorus calamus [18,19]; Alpinia galanga [20]; Azima tetracantha [21]; Celastrus paniculatus [22]; Cinnamomum zeylanicum [23,24]; Clerodendron serratum [25]; Cocos nucifera [26]; Cuminum cyminum [27]; Curcuma longa [3,28]; Elettaria cardamomum [24,29]; Embelia ribes [30,31]; Foeniculum vulgare [32,33]; Hygrophila auriculata [34]; Myristica fragrans [35]; Nigella sativa [3,36,37]; Piper longum [38]; Piper nigrum [39]; Plumbago zeylanica [40,41]; Psoralea corylifolia [42]; Quercus infectoria [43]; Saussurea lappa [44]; Semecarpus anacardium [3,45,46]; Sesamum indicum [47]; Smilax china [48,49]; Strychnos nux-vomica [50,51]; Strychnos potatorum [52]; Terminalia chebula [53,54]; Trachyspermum ammi [55]; Vernonia anthelmintica [56]; Vitis vinifera [57]; Withania somnifera [58]; Zingiber officinale [59-61]. Thus, cRGM presents a strong case for synergism as well as additivism of the multiplicity of compounds from the 38 herbals, most of which have been scientifically shown to be associated with one or more aspects of interference with cancer.
The idea that an integrated approach is needed to manage cancer using the growing body of knowledge gained through scientific developments [3] is adequately accommodated in our herbal-medicine approach to cancer. Our finding is to be viewed against the background that synergistic interactions occur within the total extract of a single herb, as well as between different herbs in a formulation [62]. In fact, the formulations of traditional medicines used in China, India, and Japan have been constructed with the expectation of desirable combined effects in the treatment of diseases.
The principles are based on the interaction of several crude drugs, or of several ingredients even within a single crude drug. Therefore, the apparent combined effects are equivalent to the sum of the effects of those components, which may undergo addition, potentiation, subtraction, and modulation [63].
Phytotherapy, the therapeutic efficacy of which is based on the combined action of a mixture of constituents, offers new treatment opportunities. Because of their biological defense function, plant secondary metabolites act by targeting and disrupting the cell membrane, by binding and inhibiting specific proteins or they adhere to or intercalate into RNA or DNA [64]. Cancer, by etiology, is multifactorial in origin and, hence, it is only logical that multiple drugs are used at a time.
The focus of the present study has been to find out whether cRGM would inhibit the proliferation of, and induce apoptosis in, HPV-positive cervical cancer cells, because these are the two major goals in cancer treatment [9]. This study provides evidence that the mode of cell death is essentially apoptosis, as revealed in features such as chromatin condensation, nuclear fragmentation, and formation of apoptotic bodies. DNA fragmentation is one of the major events in apoptosis, and the result of the comet assay strongly suggests that cRGM brings about strand breaks in the DNA of the cervical cancer cells. The mitochondrial permeability transition is an important step in the induction of cellular apoptosis, and the results clearly suggest that cRGM leads to collapse of the mitochondrial transmembrane potential in cervical cancer cells. This collapse is thought to occur through the formation of pores in the mitochondria by dimerized Bax or activated Bid, Bak, or Bad proteins. Activation of these proapoptotic proteins is accompanied by release of cytochrome c into the cytoplasm, which promotes the activation of the caspases that are directly responsible for apoptosis [65]. Phosphatidylserine (PS) expression on the outer leaflet of the plasma membrane was detected with annexin V-Cy3 binding, confirming the early stage of apoptosis. Based on the assessment of mitochondrial transmembrane potential depolarization, it is reasonable to conclude that cRGM induces apoptosis through the mitochondria-mediated pathway. Future studies examining the proteins that regulate the mitochondria-mediated apoptosis pathway, such as cytochrome c, Apaf-1, adenosine triphosphate, caspase-9, caspase-8, caspase-3, and IAP, will be highly relevant. Cervical cancer takes the lives of more than 250,000 women each year globally [66], and most cervical cancers are associated with human papilloma virus (HPV) infection [67].
The HPV 16 and 18 oncoproteins E6 and E7 cause immortalization of the infected cells by interacting with and degrading p53 and the cell cycle regulator proteins such as Rb, p21, and p27 [68,69]. Since cRGM is highly efficient in inducing death of HPV-positive cervical cancer cells, it could be speculated that the extract might restore p53 and the cell cycle regulatory proteins to functional status by causing degradation of viral onco-proteins, which is worthy of investigation.
The major limitations of the Indian traditional medicines are the presence of one or more toxic heavy metals in the preparations, some intentionally included as part of the proprietary prescription in the original formulation (as in RGM), and/or the presence of toxic heavy metals at unknown levels in the herbals that constitute the drug. As far as the first is concerned, it is an established fact that the original prescription requires thorough processing of the metal, which detoxifies it and turns it into a therapeutic substance [70]. In recent times, a view has emerged that the treatments to which the metals are subjected convert them into nanoparticles [70], and there is evidence that a heavy metal, potentially toxic as macro- or microparticles, could be nontoxic and therapeutic when made into nanoparticles [71]. It is unfortunate that some quacks try to economize on the preparation and so do not adopt the prescribed procedures [72]. The presence of heavy metals in the herbal ingredients could be overcome through stringent quality control measures [72]. Even assuming that the heavy metals, present in whatever form, are not acceptable, the present study and a few earlier studies [7,9] show that, in spite of the limitation of deviating from the holism of the proprietary drug formulation, to which the Regulatory Authorities of the Indian systems of medicine may object, RGM in the chloroform extract, even after the heavy metals have been extracted out, is potent enough to deal with cancers, especially prostate, lung, and cervical.
Conclusion
The original RGM formulation, if exonerated of heavy metal toxicity, or cRGM, which is free from heavy metals, would be a potential evidence-based complementary and alternative medicine for HPV-positive cervical cancers.
An improved computer vision method for detecting white blood cells
The automatic detection of White Blood Cells (WBC) still remains an unsolved issue in medical imaging. The analysis of WBC images has engaged researchers from the fields of medicine and computer vision alike. Since WBC can be approximated by an ellipsoid form, an ellipse detector algorithm may be successfully applied in order to recognize them. This paper presents an algorithm for the automatic detection of WBC embedded into complicated and cluttered smear images that considers the complete process as a multi-ellipse detection problem. The approach, based on the Differential Evolution (DE) algorithm, transforms the detection task into an optimization problem where individuals emulate candidate ellipses. An objective function evaluates whether such candidate ellipses are really present in the edge image of the smear. Guided by the values of such a function, the set of encoded candidate ellipses (individuals) is evolved using the DE algorithm so that they can fit into the WBC enclosed within the edge-only map of the image. Experimental results from white blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique in terms of accuracy and robustness.
Introduction
Medical image processing has become more and more important in diagnosis with the development of medical imaging and computer techniques. Huge amounts of medical images are obtained by X-ray radiography, CT and MRI. They provide essential information for efficient and accurate diagnosis based on advanced computer vision techniques [1,2].
On the other hand, White Blood Cells (WBC), also known as leukocytes, play a significant role in the diagnosis of different diseases. Computer vision techniques have successfully contributed to the generation of new methods for cell analysis, which in turn have led to more accurate and reliable systems for disease diagnosis. However, high variability in cell shape, size, edge and localization complicates the data extraction process. Moreover, the contrast between cell boundaries and the image's background may vary due to unstable lighting conditions during the capturing process.
Many works have been conducted in the area of blood cell detection. In [3] a method based on boundary support vectors is proposed to identify WBC. In such an approach, the intensity of each pixel is used to construct feature vectors whereas a Support Vector Machine (SVM) is used for classification and segmentation. By using a different approach, Wu et al. [4] developed an iterative Otsu method based on the circular histogram for leukocyte segmentation. According to such a technique, the smear images are processed in the Hue-Saturation-Intensity (HSI) space, considering that the Hue component contains most of the WBC information. One of the latest advances in white blood cell detection research is the algorithm proposed by Wang [5], which is based on the fuzzy cellular neural network (FCNN). Although such a method has proved successful in detecting a single leukocyte in an image, it has not been tested over images containing several white cells. Moreover, its performance commonly decays when the iteration number is not properly defined, which is a challenging problem in itself with no clear clues on how to make the best choice.
Since white blood cells can be approximated with an ellipsoid form, computer vision techniques for detecting ellipses may be used in order to recognize them. Ellipse detection in real images has been an open research problem for a long time. Several approaches have been proposed which traditionally fall under three categories: symmetry-based, Hough transform-based (HT) and random sampling.
In symmetry-based detection [6,7], the ellipse geometry is taken into account. The most common elements used in ellipse geometry are the ellipse center and axis. Using these elements and edges in the image, the ellipse parameters can be found. Ellipse detection in digital images is commonly solved through the Hough Transform [8]. It works by representing the geometric shape by its set of parameters and then accumulating bins in the quantized parameter space. Peaks in the bins provide an indication of where ellipses may be. Obviously, since the parameters are quantized into discrete bins, the intervals of the bins directly affect the accuracy of the results and the computational effort. Therefore, for a fine quantization of the space, the algorithm returns more accurate results, while suffering from large memory loads and expensive computation. In order to overcome such a problem, some researchers have proposed other ellipse detectors that follow the Hough transform principles by using random sampling. In random sampling-based approaches [9,10], a bin represents a candidate shape rather than a set of quantized parameters, as in the HT. However, like the HT, random sampling approaches go through an accumulation process for the bins. The bin with the highest score represents the best approximation of an actual ellipse in the target image. McLaughlin's work [11] shows that a random sampling-based approach produces improvements in accuracy and computational complexity, as well as a reduction in the number of false positives (non-existent ellipses), when compared to the original HT and a number of its improved variants.
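The bin-voting principle described above can be illustrated with a minimal Hough accumulator for circles (a 3-D parameter space rather than the 5-D one that ellipses would require). The grid size, radii and angular quantization below are illustrative choices for the sketch, not values taken from any of the cited works.

```python
import math
from collections import defaultdict

def hough_circles(edge_pts, radii, shape):
    """Vote in a quantized (a, b, r) parameter space: every edge pixel votes
    for all circle centers at distance r from it; the peak bin wins."""
    h, w = shape
    acc = defaultdict(int)
    for (x, y) in edge_pts:
        for r in radii:
            for k in range(360):  # coarse angular sweep of candidate centers
                t = 2 * math.pi * k / 360
                a, b = round(x - r * math.cos(t)), round(y - r * math.sin(t))
                if 0 <= a < w and 0 <= b < h:
                    acc[(a, b, r)] += 1
    return max(acc, key=acc.get)

# rasterized circle of radius 4 centered at (10, 10) as a toy edge map
pts = {(round(10 + 4 * math.cos(2 * math.pi * k / 60)),
        round(10 + 4 * math.sin(2 * math.pi * k / 60))) for k in range(60)}
best = hough_circles(pts, [3, 4, 5], (21, 21))
```

The peak bin lands at (or immediately next to) the true center with the true radius; note how even this toy version already needs thousands of votes, which is the memory and computation burden the text refers to.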
As an alternative to traditional techniques, the problem of ellipse detection has also been handled through optimization methods. In general, these have been shown to give better results than those based on the HT and random sampling with respect to accuracy and robustness [12]. Such approaches have produced several robust ellipse detectors using different optimization algorithms such as Genetic Algorithms (GA) [13,14] and Particle Swarm Optimization (PSO) [15].
Although detection algorithms based on optimization approaches present several advantages in comparison to traditional approaches, they have scarcely been applied to WBC detection. One exception is the work presented by Karkavitsas & Rangoussi [16], which solves the WBC detection problem through the use of GA. However, since the evaluation function, which assesses the quality of each solution, considers the number of pixels contained inside a circle with fixed radius, the method is prone to produce misdetections, particularly for images that contain overlapping or irregular WBC.
In this paper, the WBC detection task is approached as an optimization problem and the differential evolution algorithm is used to build the ellipsoidal approximation. Differential Evolution (DE), introduced by Storn and Price [27], is an evolutionary algorithm used to optimize complex continuous nonlinear functions. As a population-based algorithm, DE uses simple mutation and crossover operators to generate new candidate solutions, and applies a one-to-one competition scheme to greedily decide whether the new candidate or its parent will survive in the next generation. Due to its simplicity, ease of implementation, fast convergence, and robustness, the DE algorithm has gained much attention, with a wide range of successful applications reported in the literature [18][19][20][21][22]. This paper presents an algorithm for the automatic detection of blood cell images based on the DE algorithm. The proposed method uses the encoding of five edge points as candidate ellipses in the edge map of the smear. An objective function allows the resemblance of a candidate ellipse to an actual WBC on the image to be accurately measured. Guided by the values of such an objective function, the set of encoded candidate ellipses is evolved using the DE algorithm so that they can fit into the actual WBC on the image. The approach generates a sub-pixel detector which can effectively identify leukocytes in real images. Experimental evidence shows the effectiveness of such a method in detecting leukocytes despite complex conditions. Comparison to state-of-the-art WBC detectors on multiple images demonstrates a better performance of the proposed method.
The main contribution of this study is the proposal of a new WBC detector algorithm that efficiently recognizes WBC under different complex conditions while considering the whole process as an ellipse detection problem. Although ellipse detectors based on optimization present several interesting properties, to the best of our knowledge, they have not yet been applied to medical image processing to date. This paper is organized as follows: Section 2 provides a description of the DE algorithm, while in Section 3 the ellipse detection task is fully explained from an optimization perspective within the context of the DE approach. The complete WBC detector is presented in Section 4. Section 5 reports the obtained experimental results, whereas Section 6 conducts a comparison between state-of-the-art WBC detectors and the proposed approach. Finally, in Section 7, some conclusions are drawn.
Differential evolution algorithm
The DE algorithm is a simple, direct, population-based search algorithm which aims at optimizing global multi-modal functions. DE employs the mutation operator to provide the exchange of information among several solutions.
There are various mutation base generators to define the algorithm type. The version of the DE algorithm used in this work is known as rand-to-best/1/bin or "DE1" [17]. DE algorithms begin by initializing a population of N_p D-dimensional vectors whose parameter values are randomly distributed between the prespecified lower initial parameter bound x_{j,low} and the upper initial parameter bound x_{j,high} as follows:

x_{j,i,t} = x_{j,low} + rand(0,1) · (x_{j,high} − x_{j,low}), j = 1, 2, …, D; i = 1, 2, …, N_p; t = 0. (1)

The subscript t is the generation index, while j and i are the parameter and particle indexes, respectively. Hence, x_{j,i,t} is the jth parameter of the ith particle in generation t. In order to generate a trial solution, the DE algorithm first mutates the best solution vector x_{best,t} from the current population by adding the scaled difference of two vectors from the current population:

v_{i,t} = x_{best,t} + F · (x_{r1,t} − x_{r2,t}), (2)

with v_{i,t} being the mutant vector. Indices r1 and r2 are randomly selected with the condition that they are different and have no relation to the particle index i whatsoever (i.e., r1 ≠ r2 ≠ i). The mutation scale factor F is a positive real number, typically less than one. Figure 1 illustrates the vector-generation process defined by Equation (2).
In order to increase the diversity of the parameter vectors, the crossover operation is applied between the mutant vector v_{i,t} and the current particle x_{i,t} to produce the trial vector u_{i,t}:

u_{j,i,t} = v_{j,i,t}, if rand(0,1) ≤ CR or j = j_rand; x_{j,i,t}, otherwise, (3)

where CR ∈ [0,1] is the crossover rate and j_rand is a randomly chosen parameter index which ensures that the trial vector differs from x_{i,t} in at least one parameter. Finally, a one-to-one greedy selection decides which vector survives into the next generation:

x_{i,t+1} = u_{i,t}, if f(u_{i,t}) ≤ f(x_{i,t}); x_{i,t}, otherwise. (4)

Here, f(·) represents the objective function. These processes are repeated until a termination criterion is attained or a predetermined generation number is reached.
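As a concrete illustration of the initialization, best-based mutation, binomial crossover and greedy selection just described, the following sketch minimizes a stand-in sphere function. The parameter values (population size, F, CR, generation count) are chosen for the example only and are not the settings used later in the paper.

```python
import random

def de_optimize(f, bounds, n_pop=20, F=0.5, CR=0.9, n_gen=100, seed=1):
    """Minimal DE sketch following Eqs. (1)-(4) of the text."""
    rng = random.Random(seed)
    D = len(bounds)
    # Eq. (1): random initialization inside the bounds
    pop = [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(n_pop)]
    fit = [f(x) for x in pop]
    for _ in range(n_gen):
        best = pop[fit.index(min(fit))]
        for i in range(n_pop):
            r1, r2 = rng.sample([j for j in range(n_pop) if j != i], 2)
            # Eq. (2): mutate around the current best solution
            v = [best[j] + F * (pop[r1][j] - pop[r2][j]) for j in range(D)]
            # Eq. (3): binomial crossover; j_rand forces one mutant gene
            j_rand = rng.randrange(D)
            u = [v[j] if (rng.random() < CR or j == j_rand) else pop[i][j]
                 for j in range(D)]
            # Eq. (4): one-to-one greedy selection
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    i_best = fit.index(min(fit))
    return pop[i_best], fit[i_best]

best_x, best_f = de_optimize(lambda x: sum(v * v for v in x),
                             [(-5.0, 5.0)] * 3)
```

On this simple 3-D sphere function the population collapses onto the minimum within a few dozen generations, which is the fast-convergence behavior the text attributes to DE.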
Data preprocessing
In order to detect ellipse shapes, candidate images must first be preprocessed by an edge detection algorithm which yields an edge map image. Then, the (x_i, y_i) coordinates of each edge pixel are stored inside the edge vector (array) P.
Individual representation
Similar to lines, which need two different points to define them, ellipses require five points to draw one. Thus, each candidate solution E (ellipse candidate) uses five edge points to encode an individual. Under such representation, edge points are selected following a random positional index within the edge array P. This procedure encodes a candidate solution as the ellipse that passes through the five points p_1, p_2, p_3, p_4 and p_5.
Considering the configuration of the edge points shown in Figure 2, the ellipse center (x_0, y_0), the maximum radius (r_max), the minimum radius (r_min) and the ellipse orientation (θ) can be calculated from the conic that passes through the five points.
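The closed-form expressions for these parameters did not survive extraction in the text above. As an illustration of one standard route (an assumption for this sketch, not necessarily the authors' exact formulas), the five points can be fed into the general conic A x² + B xy + C y² + D x + E y + F = 0, after which the center follows from setting the conic's gradient to zero:

```python
import numpy as np

def conic_through_points(pts):
    """Coefficients (A, B, C, D, E, F) of the conic through five points,
    taken as the null space of the 5x6 design matrix."""
    M = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in pts])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

def ellipse_center(coef):
    """Center (x0, y0) where the conic gradient vanishes:
    2A x + B y + D = 0 and B x + 2C y + E = 0."""
    A, B, C, D, E, _ = coef
    den = 4 * A * C - B * B          # nonzero for a genuine ellipse
    return (B * E - 2 * C * D) / den, (B * D - 2 * A * E) / den

# sanity check: five points on x^2/16 + y^2/4 = 1 shifted to center (3, 2)
pts = [(3 + 4 * np.cos(t), 2 + 2 * np.sin(t)) for t in (0.1, 0.9, 2.0, 3.5, 5.0)]
x0, y0 = ellipse_center(conic_through_points(pts))
```

The radii and orientation can be read off the same conic via its quadratic-form eigendecomposition; only the center is shown here to keep the sketch short.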
Objective function
Optimization refers to choosing the best element from one set of available alternatives. In the simplest case, it means minimizing an objective function or error by systematically choosing the values of variables from their valid ranges. In order to calculate the error produced by a candidate solution E, the ellipse coordinates are calculated as a virtual shape which, in turn, must also be validated, i.e. checked for whether it really exists in the edge image. The test set S = {s_1, s_2, …, s_{N_s}} is built from the points lying on the perimeter of the candidate ellipse, whose major and minor axes are given by r_max and r_min, respectively. The Midpoint Ellipse Algorithm (MEA) avoids square-root calculations by comparing pixel separation distances. A method for direct distance comparison is to test the halfway position between two pixels (sub-pixel distance) to determine whether this midpoint is inside or outside the ellipse boundary. If the point is in the interior of the ellipse, the ellipse function is negative; if the point is outside the ellipse, the ellipse function is positive. Therefore, the error involved in locating pixel positions using the midpoint test is limited to one-half the pixel separation (sub-pixel precision). To summarize, the relative position of any point (x, y) can be determined by checking the sign of the ellipse function

f_ellipse(x, y) = r_min² (x − x_0)² + r_max² (y − y_0)² − r_min² r_max²,

which is negative inside, zero on, and positive outside the ellipse boundary. The ellipse-function test (Eq. 12) is applied to mid-positions between pixels near the ellipse path at each sampling step. Figures 3a and 4a show the midpoint between the two candidate pixels at a sampling position. The ellipse path within one quadrant is divided into two regions whose limit is the point at which the curve has a slope of −1, as shown in Figure 4: in the first octant the slope is greater than −1, in the second it is less than −1, and symmetry completes the remaining octants.
In MEA the computation time is reduced by considering the symmetry of ellipses. Ellipse sections in adjacent octants within one quadrant are symmetric with respect to the dy/dx = −1 line dividing the two octants. These symmetry conditions are illustrated in Figure 4. The algorithm can be considered the quickest that provides sub-pixel precision [24]. However, in order to protect the MEA operation, it is important to ensure that points lying outside the image plane are not considered in S.
The objective function J(E) represents the matching error produced between the pixels S of the ellipse candidate E and the pixels that actually exist in the edge image, yielding:

J(E) = 1 − ( Σ_{v=1}^{N_s} G(x_v, y_v) ) / N_s,

with (x_v, y_v) ∈ S and N_s being the number of pixels lying on the perimeter corresponding to E currently under testing. Hence, the function G(x_v, y_v) is defined as 1 if the pixel (x_v, y_v) exists in the edge image, and 0 otherwise. A value of J(E) near zero implies a better response from the "ellipsoid" operator. Figure 5 shows the procedure to evaluate a candidate ellipse E with its representation as a virtual shape S; the edge pixels are marked by darker pixels while the virtual shape is depicted through a dashed line.
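A compact sketch of this matching error for an axis-aligned candidate with center (x0, y0) and radii rmax, rmin follows. Parametric sampling of the perimeter is used here in place of the midpoint ellipse algorithm, purely to keep the example short; the counting logic of J(E) is unchanged.

```python
import math

def objective(edge_pixels, x0, y0, rmax, rmin, theta=0.0, n_samples=200):
    """J(E) = 1 - (matched perimeter pixels) / (total perimeter pixels)."""
    S = set()
    for k in range(n_samples):
        t = 2 * math.pi * k / n_samples
        # rotated parametric ellipse, rounded to pixel coordinates
        x = x0 + rmax * math.cos(t) * math.cos(theta) - rmin * math.sin(t) * math.sin(theta)
        y = y0 + rmax * math.cos(t) * math.sin(theta) + rmin * math.sin(t) * math.cos(theta)
        S.add((round(x), round(y)))
    hits = sum(1 for p in S if p in edge_pixels)
    return 1.0 - hits / len(S)

# a discrete circle of radius 10 around (20, 20) plays the role of the edge map
edges = {(round(20 + 10 * math.cos(2 * math.pi * k / 400)),
          round(20 + 10 * math.sin(2 * math.pi * k / 400))) for k in range(400)}
perfect = objective(edges, 20, 20, 10, 10)   # candidate matching the "WBC"
wrong = objective(edges, 40, 40, 10, 10)     # candidate far from any edge
```

A candidate lying exactly on the edge pixels scores 0, while one that misses every edge pixel scores 1, matching the "near zero implies a better response" interpretation above.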
Implementation of DE for ellipse detection
The ellipse detector algorithm based on DE can be summarized in the following steps: (1) apply an edge detector and store the edge pixels in the vector P; (2) initialize the population, each individual encoding five random edge points of P; (3) evaluate every candidate ellipse with the objective function J(E); (4) apply the mutation, crossover and selection operators of Section 2 until a termination criterion is reached; (5) report the best ellipses found.
The White blood cell detector
In order to detect WBC, the proposed detector combines a segmentation strategy with the ellipse detection approach presented in section 3.
Image preprocessing
To employ the proposed detector, smear images must be preprocessed to obtain two new images: the segmented image and its corresponding edge map. The segmented image is produced by using a segmentation strategy whereas the edge map is generated by a border extractor algorithm. Such edge map is considered by the objective function to measure the resemblance of a candidate ellipse with an actual WBC.
The goal of the segmentation strategy is to isolate the white blood cells (WBC's) from other structures such as red blood cells and background pixels. Information on color, brightness and gradients is commonly used within a thresholding scheme to generate the labels to classify each pixel. Although a simple histogram thresholding can be used to segment the WBC's, in this work the Diffused Expectation-Maximization (DEM) algorithm has been used to ensure better results [25].
DEM is an Expectation-Maximization (EM) based algorithm which has been used to segment complex medical images [26]. In contrast to classical EM algorithms, DEM considers the spatial correlations among pixels as a part of the minimization criteria. Such an adaptation allows objects to be segmented in spite of noisy and complex conditions. The method models an image as a finite mixture, where each mixture component corresponds to a region class, and uses a maximum likelihood approach to estimate the parameters of each class via the expectation maximization (EM) algorithm, coupled with anisotropic diffusion on classes, in order to account for the spatial dependencies among pixels. For the WBC segmentation, the implementation of DEM provided in [27] has been used. Since the implementation allows gray-level and color images to be segmented, it can be used to operate over all smear images regardless of the way in which they were acquired. The DEM has been configured considering three different classes (K = 3). As a final result of the DEM operation, three different thresholding points are obtained: the first corresponds to the WBC's, the second to the red blood cells, whereas the third represents the pixels classified as background. Figure 6(b) presents the segmentation results obtained by the DEM approach employed in this work, considering Figure 6(a) as the original image.
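The full DEM algorithm is beyond a short example, but the underlying idea of partitioning the gray levels into K = 3 classes whose representatives separate WBC's, red cells and background can be sketched with a plain 1-D k-means. This stand-in has no spatial diffusion term, so it is only a simplified illustration of the class-partitioning step, not DEM itself; all gray-level values below are made up.

```python
def kmeans_1d(values, k=3, n_iter=20):
    """Cluster gray levels into k classes; centers are initialized at evenly
    spaced quantiles so the example is deterministic."""
    srt = sorted(values)
    centers = [srt[round(i * (len(srt) - 1) / (k - 1))] for i in range(k)]
    for _ in range(n_iter):
        buckets = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            buckets[j].append(v)
        # recompute each center as the mean of its bucket
        centers = [sum(b) / len(b) if b else centers[j]
                   for j, b in enumerate(buckets)]
    return sorted(centers)

# synthetic gray levels: background ~20, red cells ~120, WBC nuclei ~220
vals = ([20 + i % 5 for i in range(100)] +
        [120 + i % 5 for i in range(100)] +
        [220 + i % 5 for i in range(100)])
centers = kmeans_1d(vals)
```

The three returned centers give one representative gray level per class; thresholds halfway between adjacent centers then play the role of the three thresholding points described above.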
Once the segmented image has been produced, the edge map is computed. The purpose of the edge map is to obtain a simple image representation that preserves object structures. The DE-based detector operates directly over the edge map in order to recognize ellipsoidal shapes. Several algorithms can be used to extract the edge map; however, in this work the morphological edge detection procedure [28] has been used to accomplish such a task. Morphological edge detection is a traditional method to extract borders from binary images in which the original image I_B is eroded by a simple structuring element composed of a 3×3 template of ones, yielding the eroded image I_E. Then, I_E is inverted and compared with the original image (I_B ∧ ¬I_E) in order to detect pixels which are present in the original image but not in the eroded one. Such pixels compose the computed edge map of I_B. Figure 6(c) shows the edge map obtained by using the morphological edge detection procedure.
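The erode-then-intersect sequence can be written directly on a binary image represented as nested lists. The 3×3 structuring element of ones matches the one described above; the image content is a made-up example.

```python
def erode(img, h, w):
    """Binary erosion with a 3x3 structuring element of ones: a pixel
    survives only if its whole 3x3 neighborhood is set."""
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def edge_map(img):
    """Morphological edges: pixels in the original but not in the eroded
    image (I_B AND NOT I_E)."""
    h, w = len(img), len(img[0])
    er = erode(img, h, w)
    return [[img[y][x] & (1 - er[y][x]) for x in range(w)] for y in range(h)]

# a filled 5x5 square inside a 7x7 image keeps only its 1-pixel border
img = [[1 if 1 <= y <= 5 and 1 <= x <= 5 else 0 for x in range(7)]
       for y in range(7)]
edges = edge_map(img)
```

For the filled square, only the 16 boundary pixels survive, which is exactly the thin border the DE detector needs as input.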
Ellipse detection approach
The edge map is used as the input image for the ellipse detector presented in Section 3. After several calibration experiments, the parameters used in this work for the DE algorithm are presented in Table 1. The final configuration coincides with the best possible calibration proposed in [29], where the effect of modifying the DE parameters in several generic optimization problems has been analyzed. The population-size parameter (m = 20) has been selected considering the best possible balance between convergence and computational overload. Once defined, such configuration has been kept for all test images employed in the experimental study. Under such assumptions, the complete process to detect WBC's is implemented as follows: Step 1: Segment the WBC's using the DEM algorithm (described in Section 4.1).
Step 2:
Get the edge map from the segmented image.
Step 3:
Start the ellipse detector based in DE over the edge map while saving best ellipses (Section 3).
Step 4:
Define parameter values for each ellipse that identify the WBC's.
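The DE machinery that drives Step 3 can be sketched as a generic DE/rand/1/bin loop. This is a minimal illustration: the fitness shown below is a toy quadratic, whereas the detector would instead score how well the ellipse encoded by five edge points matches the edge map, and the control parameters F and CR are placeholders rather than the calibrated values of Table 1.

```python
import numpy as np

def de_minimize(fitness, bounds, m=20, F=0.5, CR=0.9, iters=200, seed=1):
    """Minimal DE/rand/1/bin loop minimizing `fitness` over box `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    d = len(bounds)
    pop = rng.uniform(lo, hi, size=(m, d))              # random initial particles
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(iters):
        for i in range(m):
            a, b, c = pop[rng.choice(np.delete(np.arange(m), i), 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True               # keep at least one gene
            trial = np.where(cross, mutant, pop[i])     # binomial crossover
            f_trial = fitness(trial)
            if f_trial < fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]
```

In the detector, each candidate vector would hold five indices into the edge-point vector P, and the fitness would count how many sampled points of the implied ellipse coincide with edge pixels.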
Numerical example
In order to present the algorithm's step-by-step operation, a numerical example has been set up by applying the proposed method to detect a single leukocyte lying inside a simple image. Fig. 7(a) shows the image used in the example. After applying the threshold operation, the WBC is located along with a few other pixels which are merely noise (see Fig. 7(b)). Then, the edge map is computed and stored pixel by pixel inside the vector P. Fig. 7(c) shows the resulting image after this procedure.
The DE-based ellipse detector is executed using the information of the edge map (for the sake of simplicity, it considers a population of only four particles). Like all evolutionary approaches, DE is a population-based optimizer that attacks the starting-point problem by sampling the search space at multiple, randomly chosen, initial particles. By taking five random pixels from the vector P, four different particles are constructed; Fig. 7(d) shows the initial population E^0. After the first iteration, four trial elements T = {T_1, T_2, T_3, T_4} (ellipses) are generated; their locations are shown in Fig. 7(e). Then, the new population E^1 is selected, considering the best elements obtained among the trial elements T and the initial particles E^0. The final distribution of the new population is depicted in Fig. 7(f). Since the particles E^0_2 and E^0_3 hold (in Fig. 7(f)) better fitness values (J(E^0_2) and J(E^0_3)) than the trial elements T_2 and T_3, they are kept as particles of the final population E^1. Figures 7(g) and 7(h) present the second iteration produced by the algorithm, whereas Fig. 7(i) shows the population configuration after 25 iterations. From Fig. 7(i), it is clear that all particles have converged to a final position which is able to accurately cover the WBC.
Experimental results
Experimental tests have been carried out in order to evaluate the performance of the WBC detector. The detector was tested over microscope images of blood smears with a resolution of 960 x 720 pixels, which correspond to supporting images for leukemia diagnosis. The images show several complex conditions such as deformed cells and overlapping with partial occlusions. The robustness of the algorithm has been tested under such demanding conditions. All the experiments have been run on a PC based on an Intel Core i7-2600 with 8 GB of RAM.
Figure 8(a) shows an example image employed in the test. It was used as the input image for the WBC detector. Figures 8(b) and 8(c) present the resulting edge map and the white blood cells after detection, respectively. The results show that the proposed algorithm can effectively detect and mark blood cells despite cell occlusion, deformation or overlapping. Other parameters may also be calculated through the algorithm: the total area covered by white blood cells and relationships between several cell sizes. Another example is presented in Figure 9. It represents a complex case with an image showing seriously deformed cells. Despite such imperfections, the proposed approach can effectively detect the cells, as shown in Figure 9(d).
Comparisons to other methods
A comprehensive set of blood-smear test images is used to assess the performance of the proposed approach. We have applied the proposed DE-based detector to the test images in order to compare its performance to other WBC detection algorithms such as the Boundary Support Vectors (BSV) approach [3], the iterative Otsu (IO) method [4], the Wang algorithm [5] and the Genetic algorithm-based (GAB) detector [16]. In all cases, the algorithms are tuned according to the parameter values originally proposed in their respective references.
Detection comparison
To evaluate the detection performance of the proposed method, Table 2 tabulates the comparative leukocyte detection performance of the BSV approach, the IO method, the Wang algorithm, the GAB detector and the proposed method, in terms of detection rates and false alarms. The experimental data set includes 50 images collected from the ASH Image Bank (http://imagebank.hematology.org/). These images contain 517 leukocytes (287 bright leukocytes and 230 dark leukocytes, according to smear conditions) which have been detected and counted by a human expert. These values act as ground truth for all the experiments. For the comparison, the detection rate (DR) is defined as the ratio between the number of leukocytes correctly detected and the number of leukocytes determined by the expert. The false alarm rate (FAR) is defined as the ratio between the number of non-leukocyte objects wrongly identified as leukocytes and the number of leukocytes determined by the expert.
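The two rates defined above are straightforward to compute. As a sanity check, counts consistent with the percentages reported for the proposed method (hypothetically 508 correct detections and 14 false alarms out of the 517 expert-counted leukocytes; these raw counts are reconstructions, not values stated in the text) reproduce DR of about 98.26% and FAR of about 2.71%.

```python
def detection_metrics(n_correct, n_false, n_expert):
    """Detection rate (DR) and false alarm rate (FAR), both taken
    relative to the expert-determined leukocyte count."""
    return n_correct / n_expert, n_false / n_expert
```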
Experimental results show that the proposed DE method, which achieves 98.26% leukocyte detection accuracy with a 2.71% false alarm rate, compares favorably with the other WBC detection algorithms: the BSV approach, the IO method, the Wang algorithm and the GAB detector.
Robustness comparison
Images of blood smears are often deteriorated by noise due to various sources of interference and other phenomena that affect the measurement processes in imaging and data acquisition systems. Therefore, the detection results depend on the algorithm's ability to cope with different kinds of noise. In order to demonstrate robustness in WBC detection, the proposed DE approach is compared to the BSV approach, the IO method, the Wang algorithm and the GAB detector under noisy environments. In the test, two different experiments have been studied. The first experiment explores the performance of each algorithm when the detection task is accomplished over images corrupted by salt-and-pepper noise. The second experiment considers images polluted by Gaussian noise. Salt-and-pepper and Gaussian noise are selected for the robustness analysis because they represent the noise types most commonly found in images of blood smears [30]. The comparison considers the complete set of 50 images presented in Section 6.1, containing 517 leukocytes which have been detected and counted by a human expert. The added noise is produced in MatLab©, considering two noise levels of 5% and 10% for salt-and-pepper noise, whereas σ = 5 and σ = 10 are used for the case of Gaussian noise. Such noise levels, according to [31], correspond to the best trade-off between detection difficulty and realistic conditions in medical imaging. Using higher noise levels, the detection process would be unnecessarily complicated without representing a feasible image condition. Figure 10 shows two examples of the experimental set. The outcomes in terms of the detection rate (DR) and the false alarm rate (FAR) are reported for each noise type in Tables 3 and 4.
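The two corruption models used in this robustness test can be reproduced as follows (a sketch; the cited experiments used MatLab's noise generators, so conventions such as splitting salt and pepper evenly are assumptions here).

```python
import numpy as np

def add_salt_pepper(img, level=0.05, seed=0):
    """Corrupt a fraction `level` of pixels: roughly half salt
    (image maximum) and half pepper (image minimum)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape) < level
    salt = rng.random(img.shape) < 0.5
    out[mask & salt] = img.max()
    out[mask & ~salt] = img.min()
    return out

def add_gaussian(img, sigma=5.0, seed=0):
    """Add zero-mean Gaussian noise of standard deviation `sigma`,
    clipped back to the 8-bit range [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(img.dtype)
```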
Stability comparison
In order to compare the stability performance of the proposed method, its results are compared to those reported by Wang et al. in [5], which is considered an accurate technique for the detection of WBCs.
The Wang algorithm is an energy-minimizing method which is guided by internal constraint elements and influenced by external image forces, producing the segmentation of WBCs as a closed contour. As external forces, the Wang approach uses edge information, usually represented by the gradient magnitude of the image. Therefore, the contour is attracted to pixels with large image gradients, i.e. strong edges. At each iteration, the Wang method finds a new contour configuration which minimizes the energy corresponding to the external forces and constraint elements.
In the comparison, the net structure and its operational parameters, corresponding to the Wang algorithm, follow the configuration suggested in [5] while the parameters for the DE-based algorithm are taken from Table 1. Figure 11 shows the performance of both methods considering a test image with only two white blood cells.
Since the Wang method uses gradient information in order to find an appropriate new contour configuration, it needs to be executed iteratively in order to detect each structure (WBC). Figure 11(b) shows the results after the Wang approach has been applied considering only 200 iterations, whereas Figure 11(c) shows the results after applying the DE-based method proposed in this paper. The Wang algorithm uses the fuzzy cellular neural network (FCNN) as its optimization approach. It employs gradient information and internal states in order to find a better contour configuration. In each iteration, the FCNN tries, as contour points, different new pixel positions which must be located near the original contour position. This may cause the contour solution to remain trapped in a local minimum. In order to avoid such a problem, the Wang method applies a considerable number of iterations so that a near-optimal contour configuration can be found. However, as the number of iterations increases, the possibility of covering other structures increases too. Thus, if the image has a complex background (just as smear images do) or the WBCs are too close together, the method gets confused, so that finding the correct contour configuration from the gradient magnitude is not easy. Therefore, a drawback of Wang's method is its sensitivity to the optimal iteration number (instability). This number must be determined experimentally, as it depends on the image context and its complexity. Figure 12(a) shows the result of applying 400 cycles of Wang's algorithm, while Figure 12(b) presents the detection of the same cell shapes after 1000 iterations using the proposed algorithm. From Fig. 12(a), it can be seen that the contour produced by Wang's algorithm degenerates as the iteration process continues, wrongly covering other shapes lying nearby.
In order to compare the accuracy of both methods, the estimated WBC area approximated by each approach is compared to the actual WBC size, considering different degrees of evolution, i.e. the cycle number of each algorithm. The comparison considers only one WBC because it is the only shape detected by Wang's method. Table 5 shows the averaged results over twenty repetitions of each experiment. In order to enhance the result analysis, Fig. 13 presents the response Error % vs. Iterations, an extended version of the outcomes exposed in Table 5.
Please cite this article as: Cuevas, E., Díaz, M., Manzanares, M., Zaldivar, D., Perez-Cisneros, M. An improved computer vision method for white blood cells detection.
Fig. 13. Error % vs. Iterations, an extended version of the results exposed in Table 5.
Conclusions
In this paper, an algorithm for the automatic detection of white blood cells in blood smear images based on the DE algorithm has been presented. The approach treats the complete process as a multiple ellipse detection problem. The proposed method encodes five edge points as a candidate ellipse in the edge map of the smear. An objective function allows the resemblance between a candidate ellipse and an actual WBC in the image to be measured accurately. Guided by the values of this objective function, the set of encoded candidate ellipses is evolved using the DE algorithm so that they fit the actual WBCs in the image. The approach yields a sub-pixel detector which can effectively identify leukocytes in real images.
Efficient Bayesian High-Dimensional Classification via Random Projection with Application to Gene Expression Data
Inspired by the impressive successes of compressed sensing-based machine learning algorithms, data augmentation-based efficient Gibbs samplers for Bayesian high-dimensional classification models are developed by compressing the design matrix to a much lower dimension. Ardent care is exercised in the choice of the projection mechanism, and an adaptive voting rule is employed to reduce sensitivity to the random projection matrix. Focusing on the high-dimensional probit regression model, we note that the naive implementation of the data augmentation-based Gibbs sampler is not robust to the presence of collinearity in the design matrix, a setup ubiquitous in n < p problems. We demonstrate that a simple fix based on joint updates of parameters in the latent space circumvents this issue. With a computationally efficient MCMC scheme in place, we introduce an ensemble classifier by creating R (∼25-50) projected copies of the design matrix and subsequently running R classification models with the R projected design matrices in parallel. We combine the outputs of the R replications via an adaptive voting scheme. Our scheme is inherently parallelizable and capable of taking advantage of modern computing environments, often equipped with multiple cores. The empirical success of our methodology is illustrated in elaborate simulations and gene expression data applications. We also extend our methodology to a high-dimensional logistic regression model and carry out numerical studies to showcase its efficacy.
Introduction
With the advent of modern technologies, it is now commonplace in many disciplines, including but not limited to bioinformatics, ecology and remote sensing, to collect data containing massive numbers of predictors, ranging from thousands to millions or more. In such settings, it is commonly of interest to consider classification models (Albert and Chib, 1993; Loaiza-Maya and Nibbering, 2022; Cao et al., 2022) such as Pr(y_i = 1 | x_i) = Φ(x_i^T β), where Φ(•) is the cumulative distribution function of N(0, 1), X is an n × p matrix of predictors with rows x_i^T, p ≫ n, and y is an n × 1 binary response vector. As traditional techniques such as maximum likelihood cannot be used, a rich variety of alternatives has been proposed, mainly in the context of linear regression models, ranging from frequentist penalized optimization methods (Tibshirani, 1996; Zou and Hastie, 2005; Zou, 2006; Zhang, 2010; Xie and Huang, 2009) to Bayesian variable selection or shrinkage priors. Examples include the classical "two-group" discrete mixture priors with a point mass at zero (Mitchell and Beauchamp, 1988; George and McCulloch, 1993; Shin et al., 2015), and continuous shrinkage priors expressed as global-local variance mixtures of Gaussian distributions (Polson and Scott, 2011; Park and Casella, 2008; Hans, 2009; Brown and Griffin, 2010; Carvalho et al., 2009, 2010; Bhadra et al., 2017; Piironen and Vehtari, 2017; Armagan et al., 2013; Bhattacharya et al., 2015). The Bayesian approaches are particularly attractive since they provide a probabilistic characterization of uncertainty in the high-dimensional regression coefficients and in the resulting predictions, while penalization methods tend to focus on point estimation. It is well known that computing the posterior under Bayesian variable selection priors is an intractable problem, so that one can at best hope for a rough approximation using Markov chain Monte Carlo (MCMC) sampling unless p is small. However, recent developments in this regard have largely improved the computational aspects
of the "two-group" (Biswas et al., 2022) and continuous shrinkage priors (Bhattacharya et al., 2016), but scalability still remains a quite open area of enquiry. Alternatively, a commonly used approach is to approximate the posterior with a computationally tractable distribution. This gives rise to variational Bayes approximations (Girolami and Rogers, 2006; Titsias and Lawrence, 2010; Faes et al., 2011; Mukherjee and Sen, 2021). Guhaniyogi and Dunson (2015) proposed a new approach for high-dimensional regression problems based on random projections of the scaled predictor vector prior to analysis, which solved several problems simultaneously. In particular, for linear regression models, it completely avoids the computational bottleneck due to the enormous p. Their approach is inspired by the data squashing literature (DuMouchel, 2002; Madigan, 2004; Owen, 2003; Lee et al., 2010) and the dramatic success of compressed sensing in facilitating storage and analysis while retaining the ability to reconstruct the compressed signals with high accuracy under sparsity conditions (Donoho, 2006; Candes et al., 2006).
While such data compression based approaches have largely proved to be extremely successful in various contexts (Guhaniyogi and Dunson, 2015; Cannings and Samworth, 2017), MCMC computation in the context of Bayesian classification approaches is still elusive. In this article we primarily focus on Bayesian high-dimensional probit regression. We propose a compressed sensing framework equipped with a data augmentation based Gibbs sampler that exploits a conjugate Gaussian-inverse gamma posterior for the regression coefficients corresponding to the compressed predictors, in parallel for different random projections in the latent space. Finally, we aggregate the outcomes corresponding to the different compressions via a simple but adaptive voting rule. We propose a principled approach to tune the cut-off parameter α of the binary classifier that dramatically improves classification accuracy across extensive simulation and real data examples. Importantly, our proposed methodology is inherently parallelizable and remains computationally tractable even in ultra high-dimensional setups. We also note that a naive implementation of the data augmentation Gibbs sampler results in poor mixing of the MCMC chains. It turns out that a simple fix based on a joint update of the parameters in the latent space alleviates the problem to a large extent and improves computational efficacy. We end our discussion with an extension to the high-dimensional logistic regression case, where we employ a Polya-Gamma data augmentation with the compressed covariate matrix.
In summary, our primary contributions in this article are three-fold. First, we present a scalable and inherently parallelizable compressed sensing framework equipped with a data augmentation based Gibbs sampler for high-dimensional probit regression. Secondly, for the classification task, we adopt an alternative Gibbs sampling scheme with a joint update of the parameters in the latent space that showcases improved mixing. Thirdly, we present an adaptive voting rule involving a data-driven choice of a cut-off parameter for our ensemble classifier, which improves the accuracy of our classifiers without a significant increase in compute time.
The rest of the paper is organised as follows. Section 2 introduces a data compression strategy and a data augmentation based Gibbs sampler equipped with an adaptive cut-off to carry out Bayesian high-dimensional probit regression. Section 3 presents elaborate empirical studies to demonstrate the efficacy of our methodology. Section 4 includes micro-array gene expression data analyses to showcase the practical utility and scalability of our prescription. Section 5 features an extension of our methodology to Bayesian high-dimensional logit regression, as well as a supporting empirical study. In Section 6, we conclude.
Notations
For subjects i = 1, ..., n, let y_i ∈ Y denote a response and x_i = (x_i1, ..., x_ip)^T ∈ X ⊆ R^p denote predictors. We consider compressed regression models of the form

Pr(y_i = 1 | x_i) = Φ((Ψ x_i)^T β),  (2.1)

where Φ(•) is the cumulative distribution function of N(0, 1), Ψ is an m × p projection matrix with m < min(n, p), and β = (β_1, ..., β_m)^T are coefficients on the compressed predictors which a priori follow some distribution π(•). To ensure that our inferential procedure is robust to the specific choice of the random projection Ψ, we consider R different random compression matrices. The sparsity of a projection matrix is controlled by a parameter s; refer to (2.2) for details. Now we are in a position to systematically unfold the pieces of our proposal.
Compression Mechanism
The choice of the projection scheme Ψ in (2.1) is a vital component of our methodology, and there is potential merit in attempting to utilize data-driven dimension reduction techniques, i.e., estimating Ψ based on the data. To that end, we can turn to the enormous body of literature on linear dimension reduction techniques, including principal component analysis (Hotelling, 1933; Jolliffe and Cadima, 2016), non-negative matrix factorization (Sra and Dhillon, 2005), singular value decomposition (Banerjee and Roy, 2014), sufficient dimension reduction (Adragni and Cook, 2014), semantic mapping (Corrêa and Ludermir, 2007), and multi-dimensional scaling (Cox and Cox, 2001), to name a few. Besides, various non-linear dimensionality reduction techniques are available in our arsenal, like kernel-PCA (Mika et al., 1998), locally linear embedding (Roweis and Saul, 2000), stochastic neighbourhood embedding (Hinton and Roweis, 2002), and t-SNE (van der Maaten and Hinton, 2008). Such techniques are routinely incorporated in predictive models to ensure scalability, but developing data-driven projection schemes carries a heavy computational price that is often practically infeasible in the ensuing applications. Further, some of the linear dimension reduction techniques, e.g. singular value decomposition, principal components analysis and classical multidimensional scaling, suffer from unfavorable local properties (van der Maaten and Hinton, 2008). We make this precise in the next paragraph, focusing on singular value decomposition.
Given the design matrix X = (x_1, x_2, ..., x_n)^T, in order to embed the n points in R^p into R^m via the singular value decomposition, we project them onto the m-dimensional space spanned by the singular vectors corresponding to the m largest singular values of X. This produces an optimal rank-m approximation of X under several popular matrix norms. But this optimality implies no guarantees regarding the local properties of the resulting embedding. That is, we can easily devise examples where the new distance between a pair of points is arbitrarily smaller than their original distance. In the increasing number of modern machine learning applications where dimensionality reduction is desirable, the absence of such local guarantees can make it hard to exploit embeddings algorithmically.
With these limitations in mind, following Guhaniyogi and Dunson (2015) and Cannings and Samworth (2017), we do not attempt to estimate Ψ based on the data. Instead, we take refuge in random projection schemes, owing to their favourable local properties and the computational simplicity of the resulting algorithms. The seminal paper by Johnson and Lindenstrauss (1984) proved the existence of lower-dimensional projection mechanisms that satisfy favorable local properties, under certain sufficient conditions. Adding to the impressive theoretical developments in Johnson and Lindenstrauss (1984), Achlioptas (2003) provided a concrete construction of such projection mechanisms as follows: (i) Let U ⊂ R^p be an arbitrary set of n points collected in an n × p design matrix X. Also, given ε, ν > 0, define m_0 ∝ log n, where the proportionality constant depends on ν and ε. (ii) For any integer m ≥ m_0, define Ψ = ((ψ_ij)) to be a p × m random matrix whose entries ψ_ij are independent random variables from the following probability distribution:

ψ_ij = √s × { +1 with probability 1/(2s);  0 with probability 1 − 1/s;  −1 with probability 1/(2s) },  (2.2)

where, for example, s = 1 or 3. (iii) Let E = (1/√m) X Ψ, and suppose f : R^p → R^m projects the i-th row of X to the i-th row of E. Then, for any u, v ∈ U,

(1 − ε) ||u − v||^2 ≤ ||f(u) − f(v)||^2 ≤ (1 + ε) ||u − v||^2,

with probability at least 1 − n^{−ν}. Random compression matrices with similar local properties can also be obtained by populating a random matrix with elements drawn from the standard normal distribution (Dasgupta, 2013; Li et al., 2006b). But the sparse random projection scheme described in equation (2.2) only processes a 1/s fraction of the data and only involves simulation from a uniform density. Hence it is typically preferred in many applications. Further, Li et al.
(2006a,b) demonstrated that we can even use very sparse random projections with s ≫ 3, e.g., s = √p or s = p/log p, to significantly speed up the computation. However, for improved robustness, they recommended choosing s less aggressively, e.g., s = √p. In practice, we observed that a fixed s = 5 or 10 works reasonably well in varied empirical settings. Further, to ensure that our methodology is robust to the specific choice of the random projection Ψ, we consider R different random compression matrices and utilize an ensemble classifier that combines the outputs obtained corresponding to each of the random projections. In this context, readers may be aware that the usage of Bayesian ensemble methods is ubiquitous in classification and prediction settings, including Bayesian model averaging (Hoeting et al., 1999), Bayesian additive classification and regression trees (Chipman et al., 1998, 2006), and a Bayesian version of bagging (Clyde and Lee, 2001), to name a few. Bayesian model averaging (Hoeting et al., 1999) describes an umbrella of techniques where we quantify not only model parameter uncertainty but also the associated model uncertainty. Clyde and Lee (2001) introduced a Bayesian version of bagging based on the Bayesian bootstrap that often results in more efficient estimators. Bayesian CART (Chipman et al., 1998, 2006) is constructed via an ensemble of binary decision trees built by dividing the predictor space repeatedly into partitions based on splitting rules, and it has enjoyed immense empirical success in this context. Other related ideas include Bayesian classifier combination (Kim and Ghahramani, 2012), Bayesian boosting (Lorbert et al., 2012), cascading classifiers (Li et al., 2010), bucket of models, stacking, etc.
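The sparse projection draw in (2.2) and the associated compression can be sketched as follows (illustrative; the 1/√m scaling follows the embedding E = (1/√m)XΨ described above).

```python
import numpy as np

def sparse_projection(m, p, s=10, seed=0):
    """Draw an m x p matrix Psi with i.i.d. entries equal to
    sqrt(s)*{+1, 0, -1} with probabilities 1/(2s), 1-1/s, 1/(2s)."""
    rng = np.random.default_rng(seed)
    u = rng.random((m, p))                  # one uniform draw per entry
    psi = np.zeros((m, p))
    psi[u < 1.0 / (2 * s)] = np.sqrt(s)
    psi[u > 1.0 - 1.0 / (2 * s)] = -np.sqrt(s)
    return psi

def compress(X, psi):
    """Compressed design: the n x m matrix (1/sqrt(m)) X Psi^T."""
    return X @ psi.T / np.sqrt(psi.shape[0])
```

Pairwise distances among the rows of X are approximately preserved in the compressed design, in line with the Johnson-Lindenstrauss guarantee.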
In this article, we adopt the adaptive voting scheme of Section 2.5 to combine the outputs obtained corresponding to each of the random projections. In our empirical studies, we typically observe that the performance of the classifier improves as R increases, but usually plateaus after a while. A choice of R = 25 to 50 provided favorable results across the different numerical studies we considered. Next, we describe MCMC algorithms to learn the parameters in (2.1).
A Naive Blocked Gibbs Sampler
The probit regression model in (2.1) enjoys an equivalent representation via latent variables (Tanner and Wong, 1987; Albert and Chib, 1993),

z_i = (Ψ x_i)^T β + ε_i,  ε_i ~ N(0, 1),  y_i = δ(z_i > 0),  (2.3)

where δ(•) is the Dirac delta measure (here, the indicator of an event), so that y_i is simply deterministic conditional on the sign of the stochastic latent variable z_i. In what follows, we denote y = (y_1, ..., y_n)^T and z = (z_1, ..., z_n)^T, and write W = XΨ^T for the n × m compressed design matrix with rows w_i^T = (Ψ x_i)^T. The model (2.3), with z marginalized out, is exactly the same as (2.1). The advantage of working with the representation in (2.3) is that, for a judicious choice of π(β), we can easily devise an efficient blocked Gibbs sampler (Albert and Chib, 1993). In particular, we assume a normal prior on β, i.e., π(β) ≡ N(μ, Σ), where μ is set to the zero vector and Σ is a diagonal matrix. Then, the full conditional distribution of β remains normal:

β | z, y ~ N(V W^T z, V),  V = (Σ^{−1} + W^T W)^{−1}.  (2.4)

The full conditional of each element z_i is then truncated normal,

z_i | β, y_i ~ TN_{0,∞}(w_i^T β, 1) if y_i = 1,  z_i | β, y_i ~ TN_{−∞,0}(w_i^T β, 1) if y_i = 0,  (2.5)

where TN_{a,b}(•, •) is the pdf of the truncated normal distribution restricted to (a, b). The full conditional distributions in equations (2.4)-(2.5) are extremely straightforward to sample from, and we refer to this algorithm as Algorithm 1: AC from here on. The above latent variable based augmentation method offers a convenient framework for devising a simple Markov chain Monte Carlo (MCMC) algorithm by iteratively sampling from the full conditional densities. However, a potential problem lurks in that there is strong posterior correlation between the regression coefficients β and the latent variables z, as is clear from the above model. In the standard Albert-Chib iterative updates, this correlation is likely to cause slow mixing of the MCMC chain.
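A minimal sketch of the Albert-Chib sampler on a compressed design is given below (illustrative; the prior is taken as β ~ N(0, τ²I_m), and SciPy's truncnorm is used for the latent draws).

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs_ac(W, y, n_iter=500, tau2=10.0, seed=0):
    """Albert-Chib blocked Gibbs sampler for the compressed probit model.
    W is the n x m compressed design, y the binary response vector."""
    rng = np.random.default_rng(seed)
    n, m = W.shape
    V = np.linalg.inv(W.T @ W + np.eye(m) / tau2)   # posterior covariance of beta
    L = np.linalg.cholesky(V)
    beta = np.zeros(m)
    draws = np.empty((n_iter, m))
    for t in range(n_iter):
        mu = W @ beta
        # z_i | beta, y_i: N(mu_i, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # beta | z: N(V W^T z, V)
        beta = V @ (W.T @ z) + L @ rng.standard_normal(m)
        draws[t] = beta
    return draws
```

The joint-update variant described next replaces the block draw of z with a sweep through the conditionals z_i | z_{-i}, y, followed by a rank-one update of E(β | z).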
An Improved Blocked Gibbs Sampler
To combat the issue of auto-correlation in the MCMC chain, following Held and Holmes (2006), we suggest a simple approach that reduces auto-correlation and dramatically improves the mixing of the Markov chain. Here, we put the factorisation π(β, z | y) = π(z | y) π(β | z) to use, and update β and z jointly. Note that, in the above display, the distribution π(β | z) is unchanged from (2.4), but now we update z from its marginal distribution obtained by integrating over β. In particular, we assume that the prior π(β) is a mean-zero normal density N(0, Σ), where Σ is a diagonal matrix, and write W = XΨ^T for the compressed design matrix with rows w_i^T. Then we have π(z | y) ~ N(0, I_n + W Σ W^T), truncated to an appropriate region. Direct sampling from this multivariate truncated normal is known to be difficult; however, it is straightforward to Gibbs sample the distribution

z_i | z_{−i}, y ~ TN(u_i, 1 + v_i), restricted to (0, ∞) if y_i = 1 and to (−∞, 0) if y_i = 0,  (2.6)

where z_{−i} = (z_1, ..., z_{i−1}, z_{i+1}, ..., z_n)^T, and

u_i = w_i^T B − v_i (z_i − w_i^T B),  v_i = h_i / (1 − h_i),  h_i = (W V W^T)_{ii},  (2.7)

with V = (Σ^{−1} + W^T W)^{−1} and B = E(β | z) = V W^T z. Following an update to each z_i, we recalculate B via

B = B_old + (z_i − z_i^old) F_i,  (2.8)

where B_old and z_i^old denote the values of B and z_i prior to the update of z_i, and F_i denotes the i-th column of F = V W^T. The full conditional distributions in equations (2.4) and (2.6) describe our Algorithm 2: HH. It is important to note that the calculations of F, of the h_i and hence of the v_i need only be performed once before we run the MCMC iterations. Consequently, the algorithm carries little increase in computational burden over the naive Gibbs sampling approach in Section 2.3. This simple modification of the sampler, based on the use of joint updates, dramatically improved mixing and sampling efficiency of the Markov chain across all the numerical studies that we have performed.
[Algorithm 1: Ensemble AC/AC+. Input: (a) Data: binary response vector y (n × 1), design matrix X (n × p); a query point x ∈ R^p. (c) Hyper-parameters: compression dimension m, number of projections R, sparsity parameter s.]
With a computationally efficient MCMC scheme in place, we next introduce an ensemble classifier by first creating R (∼25-50) projected copies of the design matrix, and then running R classification models with the R projected design matrices in parallel. Finally, we combine the outputs of the R replications via an adaptive voting scheme equipped with a data-driven approach, reminiscent of leave-one-out cross validation, to choose the cut-off parameter, introduced next.
[Algorithm 2: Ensemble HH/HH+. Input: (a) Data: binary response vector y (n × 1), design matrix X (n × p); a query point x ∈ R^p. (c) Hyper-parameters: compression dimension m, number of projections R, sparsity parameter s.]
Adaptive Voting Scheme
We calculate an ensemble of predictions corresponding to the R projections and combine the results via a simple voting scheme following Cannings and Samworth (2017). Suppose {ŷ_k(x)}_{k=1}^R ∈ {0, 1}^R are the predictions at x corresponding to the R projections; then the combined classifier takes the form

ŷ(x) = 1 if (1/R) Σ_{k=1}^R δ(ŷ_k(x) = 1) ≥ α, and ŷ(x) = 0 otherwise,  (2.9)

where α ∈ (0, 1) is a hyper-parameter, and δ(•) denotes the Dirac delta measure. We emphasise that additional flexibility is afforded by not pre-specifying the voting threshold α to be 0.5.
In order to develop a data-driven approach to determine α, we introduce some notation. Suppose that the pair (X, Y) takes values in R^p × {0, 1}, with joint distribution characterised by the class probabilities π_r = Pr(Y = r), r = 0, 1, and suppose that, conditional on Y = r, the vote proportion ν̂(X) = (1/R) Σ_{k=1}^R δ(ŷ_k(X) = 1) has cumulative distribution function G_{n,r}(•), r = 0, 1. Note that the oracle choice of the cut-off parameter α minimises the misclassification error rate, i.e.,

α_oracle = argmin_{α ∈ (0,1)} [ π_1 G_{n,1}(α) + π_0 {1 − G_{n,0}(α)} ].

Obviously, we cannot calculate α_oracle in practice, since (π_r, G_{n,r}), r = 0, 1, are unknown. So, we estimate it by replacing the unknown quantities with their sample counterparts, i.e.,

α̂_oracle = argmin_{α ∈ (0,1)} [ π̂_1 Ĝ_{n,1}(α) + π̂_0 {1 − Ĝ_{n,0}(α)} ],  (2.10)

where π̂_r = (1/n) Σ_{i=1}^n δ(y_i = r) and Ĝ_{n,r}(α) = Σ_{i: y_i = r} δ(ν̂(x_i) ≤ α) / Σ_{i=1}^n δ(y_i = r), with δ(•) the Dirac delta measure. Since empirical distribution functions are piecewise constant, the objective function in (2.10) does not have a unique minimum, so we choose α̂_oracle to be the average of the smallest and largest minimisers. From here on, we refer to the versions of AC and HH equipped with the adaptive voting scheme in (2.9)-(2.10) as AC+ and HH+, respectively. AC+ and HH+ enjoy superior performance across numerous empirical studies, compared to AC and HH with the default choice α = 0.5.
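The estimator of the oracle cut-off amounts to a one-dimensional grid search over the empirical misclassification error; a sketch follows (using the convention that the ensemble predicts 1 when the vote proportion is at least α; the grid of candidate thresholds is an implementation choice).

```python
import numpy as np

def adaptive_cutoff(vote_prop, y, grid=None):
    """Empirical oracle cut-off: minimize
    pi1_hat * G1_hat(alpha) + pi0_hat * (1 - G0_hat(alpha))
    and return the average of the smallest and largest minimizers."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    v1, v0 = vote_prop[y == 1], vote_prop[y == 0]
    pi1, pi0 = len(v1) / len(y), len(v0) / len(y)
    # class-1 error: vote below the cut-off; class-0 error: vote at or above it
    err = np.array([pi1 * np.mean(v1 < a) + pi0 * np.mean(v0 >= a) for a in grid])
    minimizers = grid[err == err.min()]
    return 0.5 * (minimizers.min() + minimizers.max())
```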
Simulation Study
In this section, we compare the predictive performance of the various versions of the proposed high-dimensional probit regression methodology equipped with the data augmentation based Gibbs sampler (2.3). We also consider the alternative implementation of the data augmentation based Gibbs sampler, which potentially enjoys improved computational efficiency (2.4). We use the proposed principled approach to tune the cut-off parameter α of the binary classifier (2.5). Before presenting the empirical results, we propose default set ups for our algorithms.
The algorithms described in Sections 2.3 and 2.4 involve three tuning parameters: (1) the dimension m of the compressed linear subspace, (2) the sparsity parameter s of the compression matrix, and (3) the number of random projections R. We propose default choices of these hyper-parameters for ease of use by practitioners:
1. Based on the recommendation in Guhaniyogi and Dunson (2015), we propose to compress to a linear subspace of dimension m = 40. This choice works reasonably well in practice while preserving computational convenience.
2. We set the sparsity parameter s of the compression matrix to 10, for both the sparse and dense examples in Sub-sections 3.1 and 3.2, respectively. Note that the sparsity of the projection matrix increases as s increases; refer to equation (2.2) for details. Consequently, there is potential merit in using smaller s for the dense cases in Sub-section 3.2. We prefer a default choice of s = 10 since it works reasonably well under both set ups and provides a concrete guideline to users. Moreover, Li et al. (2006a,b) demonstrated that very sparse random projections with s much larger than 3, e.g., s = √p or s = p/log p, can significantly speed up the computation. In particular, when the data are approximately normal, s = p/log p usually suffices, because of the exponential tail bounds of normal-like distributions. The less aggressive choice s = 10 provides a good balance between computational convenience and robustness of the procedure in a wide range of empirical studies.
3. We typically observe that the performance of our classifiers improves as R increases, but usually plateaus after a while; refer to Figures 1 and 3 for details. A choice of R between 25 and 50 provides favorable results across the different numerical studies we considered. For the sake of objectivity, we set the number of random projections to R = 50.
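For concreteness, here is a sketch of an s-sparse compression matrix, assuming the Li et al. (2006a,b) construction (entries ±√s with probability 1/(2s) each, 0 otherwise), which matches the description above that sparsity grows with s; the exact scaling in the paper's equation (2.2) may differ:

```python
import random

def sparse_projection(m, p, s=10, seed=0):
    """m x p very sparse random projection: each entry is +sqrt(s) or
    -sqrt(s) with probability 1/(2s), and 0 with probability 1 - 1/s,
    so larger s gives a sparser matrix."""
    rng = random.Random(seed)
    root_s = s ** 0.5
    phi = []
    for _ in range(m):
        row = []
        for _ in range(p):
            u = rng.random()
            if u < 0.5 / s:
                row.append(root_s)
            elif u < 1.0 / s:
                row.append(-root_s)
            else:
                row.append(0.0)
        phi.append(row)
    return phi
```

The compressed design, obtained by multiplying the n × p design matrix against the transpose of this m × p matrix, is n × m, which is what the Gibbs sampler actually sees.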
In the next two subsections, we carry out repeated simulations and benchmark our proposed classifiers: AC/AC+ and HH/HH+. The performance metric we mainly focus on is the median of the miss-classification error rates obtained from the different repetitions of a simulation. For a query point x ∈ R^p, a miss-classification is recorded if y_vote(x), calculated via (2.9), differs from the true class label. We also report the between-repetition standard deviation of the miss-classification error to demonstrate the stability of the numerical results.
Figure 1: Choice of R in sparse cases. Miss-classification error rates with varying number of weak classifiers R, for (n, p, ζ) = (10^2, 10^4, 10) and ρ ∈ {0.0, 0.5, 0.7, 0.9}, for algorithms AC and AC+. We typically observe that the performance of our classifiers improves as R increases, but usually plateaus after a while. This observation holds for all other (n, p, ζ) combinations presented in Tables 1 and 3, both for AC/AC+ and HH/HH+, and hence the additional plots are not presented to avoid repetitiveness.
Sparse Cases
We generate observations from the high-dimensional probit regression model. We consider the following scenarios, and in each scenario we simulate 50 data sets. We keep the sample size fixed at n = 10^2 but vary the number of covariates p = 10^3, 10^4 and the number of non-zero regression coefficients ζ = 5, 10, in order to assess how sparsity impacts performance. We set the first ζ regression coefficients to 1 and the remaining p − ζ to 0. Further, we generate the design matrix X such that corr(x_i, x_j) = ρ^{|i−j|} and vary ρ = 0.0, 0.5, 0.7, 0.9, in order to assess the sampling efficiency under correlated designs. To complete the model-prior specification, we set the prior variance to be the identity matrix.
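The data-generating mechanism above can be sketched in pure Python (illustrative names); an AR(1) recursion across the coordinates yields corr(x_i, x_j) = ρ^{|i−j|}:

```python
import math
import random

def simulate_probit(n=100, p=1000, zeta=10, rho=0.5, seed=0):
    """Sparse probit simulation: AR(1) design, first zeta coefficients
    equal to 1 (the rest 0), and y = 1{x'beta + eps > 0}, eps ~ N(0,1)."""
    rng = random.Random(seed)
    scale = math.sqrt(1.0 - rho * rho)
    X, y = [], []
    for _ in range(n):
        # AR(1) row: each coordinate is rho times the previous plus noise
        row = [rng.gauss(0.0, 1.0)]
        for _ in range(p - 1):
            row.append(rho * row[-1] + scale * rng.gauss(0.0, 1.0))
        lin = sum(row[:zeta])  # x'beta with beta = (1,...,1,0,...,0)
        y.append(1 if lin + rng.gauss(0.0, 1.0) > 0 else 0)
        X.append(row)
    return X, y
```

The dense cases of Sub-section 3.2 correspond to taking zeta = p.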
For MCMC based model implementations, we discard the first 5000 samples as a burn-in and draw inference based on the next 5000 samples. In particular, we report median miss-classification error rates and corresponding 95% confidence intervals obtained from the repeated simulations. We also report effective sample size (ESS) as an empirical measure of sampling efficiency under the two MCMC schemes, in order to investigate the mixing behavior of our samplers. The effective sample size is a measure of the amount of auto-correlation in a Markov chain, and essentially amounts to the number of independent samples in the MCMC path. From an algorithmic robustness perspective, it is desirable that the effective sample sizes remain stable across varying sparsity and co-linearity in the design matrix, and this is the aspect we wish to investigate here.

Figure 3: Choice of R in dense cases. Miss-classification error rates with varying number of weak classifiers R, for (n, p, ζ) = (10^2, 10^4, 10) and ρ ∈ {0.0, 0.5, 0.7, 0.9}, for algorithms AC and AC+. We typically observe that the performance of our classifiers improves as R increases, but usually plateaus after a while. This observation holds for all other (n, p, ζ) combinations presented in Tables 1 and 3, both for AC/AC+ and HH/HH+, and hence the additional plots are not presented to avoid repetitiveness.
We present the miss-classification error rates of the classifiers, averaged over the repetitions with the corresponding standard errors, under various simulation scenarios for (n, p) = (10^2, 10^3) in Table 1. While all versions of our methodology enjoyed similar accuracy, the classifiers equipped with the data-driven choice of the cut-off parameter α seem to slightly improve on the default choice α = 0.5. Next, we present the corresponding effective sample sizes of the classifiers, averaged over the repetitions with the corresponding standard errors, in Table 2. The alternative implementation of the data augmentation Gibbs sampler appears more robust to the presence of co-linearity in the design matrix than the vanilla implementation. In particular, the effective sample size of AC/AC+ drops by 15% as ρ changes from 0 to 0.9, whereas the drop is only about 1% for HH/HH+. Moreover, the alternative implementation enjoys significantly higher effective sample size: the gain is about 10% for the independent design, and a more pronounced 25% at ρ = 0.9. This indicates that HH/HH+ will be particularly preferred when the design matrix is highly correlated.

Table 1: (Sparse cases) median miss-classification error proportions, with between-repetition standard errors in the subscript, for (n, p) = (10^2, 10^3) with varying sparsity ζ. Column groups (one per ζ value): Independent, ρ = 0.5, ρ = 0.7, ρ = 0.9.
For the case (n, p) = (10^2, 10^4), we present the miss-classification error rates of the classifiers, averaged over the repetitions with the corresponding standard errors, in Table 3, and the corresponding effective sample sizes in Table 4. Notably, the effective sample size for AC/AC+ dropped less than in the previous case, i.e., it drops by 10% as ρ changes from 0 to 0.9, whereas there is practically no drop for HH/HH+.
Dense Cases
We stick to the same data generation scenarios, except that now we set all the regression coefficients to 1. These dense cases correspond to a one-dimensional subspace with no sparsity, and are motivated by practical applications where each covariate has a small effect on the outcome. We continue to use the same MCMC configurations as before, and focus on the same performance metric.
We present the miss-classification error rates and effective sample sizes of the classifiers, averaged over the repetitions with the corresponding standard errors, under various simulation scenarios in Table 1. The classifiers equipped with the data-driven choice of the cut-off parameter α still slightly improve on the default choice α = 0.5, and the alternative sampler remains more robust to the presence of co-linearity in the design matrix. In particular, the effective sample size of AC/AC+ drops by 5% as ρ changes from 0 to 0.9, whereas the drop is only about 1% for HH/HH+. Moreover, the alternative implementation of the data augmentation Gibbs sampler enjoys significantly higher effective sample size: although the gain is only about 1% for the independent design, we observe a more pronounced improvement of more than 10% at ρ = 0.9, which again suggests that HH/HH+ may be more efficient when the design matrix is highly correlated.
Leukemia Data
Leukemia data from high-density Affymetrix oligonucleotide arrays were previously analyzed in Golub et al. (1999), and are freely available on the website data.mendeley.com. There are p = 7129 genes and n = 72 samples coming from two classes: 47 in class ALL (acute lymphocytic leukemia) and 25 in class AML (acute myelogenous leukemia). Before classification, we standardize each sample to zero mean and unit variance.
To complete the specification of the compression mechanism, we set the dimension of the linear subspace to compress to at m = 40, the sparsity parameter of the compression matrix at s = 5, and the number of random projections at R = 25. For MCMC based model implementations, we discard the first 5000 samples as a burn-in and draw inference based on the next 5000 samples.
To evaluate the performance of the classifiers, we randomly split the 72 samples into training and test sets. Specifically, we set approximately 100×γ% of the observations as training samples, and the rest as test samples. The various versions of our algorithm are fit to the training data, and their performance is evaluated on the test samples. The above procedure is repeated 50 times for γ = 0.4, 0.5, 0.6, respectively, and the distributions of miss-classification errors are displayed in Figure 5. The classifier AC+ (HH+), equipped with the data-driven choice of the cut-off parameter α, significantly improves the performance compared to the classifier AC (HH) across all the set ups. Further, as γ increases, i.e., as we use more and more training samples, the miss-classification error rates decrease. Moreover, the alternative implementation of the data augmentation Gibbs sampler (HH, HH+) enjoys 3-6 times higher effective sample size compared to AC, AC+.
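The repeated random-split evaluation used here (and in the next two examples) can be sketched as a small harness; `classify` stands in for any of AC/AC+/HH/HH+, and all names are illustrative:

```python
import random

def split_eval(X, y, classify, gamma=0.5, reps=50, seed=0):
    """Median test miss-classification rate over `reps` random splits,
    with roughly 100*gamma% of the samples used for training.
    `classify(X_train, y_train, X_test)` must return predicted labels."""
    rng = random.Random(seed)
    n = len(y)
    errs = []
    for _ in range(reps):
        idx = list(range(n))
        rng.shuffle(idx)
        k = max(1, int(gamma * n))
        tr, te = idx[:k], idx[k:]
        pred = classify([X[i] for i in tr], [y[i] for i in tr],
                        [X[i] for i in te])
        errs.append(sum(p != y[i] for p, i in zip(pred, te)) / len(te))
    errs.sort()
    return errs[len(errs) // 2]
```

Because each split (and each of the R projections within a fit) is independent, the whole loop parallelises trivially across cores.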
Lung Cancer Data
We evaluate our method by classifying between malignant pleural mesothelioma (MPM) and adenocarcinoma (ADCA) of the lung. The lung cancer data, freely available on data.mendeley.com, were analyzed by Gordon et al. (2002). There are 181 tissue samples (31 MPM and 150 ADCA), and each sample is described by 1626 genes.
As in the Leukemia example, we first standardize the data to zero mean and unit variance, and then apply the various classification methods to the standardized data set. We follow the same procedure as in the Leukemia example to randomly split the 181 samples into training and test sets, and utilize the same compression and MCMC specifications. The various classification methods are applied to the training data, and the test errors are calculated using the test data. The procedure is repeated 50 times with γ = 0.4, 0.5, 0.6, respectively, and the distributions of miss-classification errors are displayed in Figure 6. The classifier AC+ (HH+), equipped with the data-driven choice of the cut-off parameter α, quite dramatically improves the performance compared to the classifier AC (HH) across all the set ups. Moreover, the alternative implementation of the data augmentation Gibbs sampler (HH, HH+) enjoys 1.5-2 times higher effective sample size compared to AC, AC+.
Prostate Cancer Data
The last example uses the prostate cancer data studied in Singh et al. (2002), also freely available on data.mendeley.com. The training data set contains 102 patient samples, 52 of which (labeled "tumor") are prostate tumor samples and 50 of which (labeled "Normal") are normal prostate samples. There are around 329 genes.
As in the Leukemia example, we first standardize the data to zero mean and unit variance, and then apply the various classification methods to the standardized data set. We follow the same procedure as in the Leukemia example to randomly split the 102 samples into training and test sets, and utilize the same compression and MCMC specifications. The various classification methods are applied to the training data, and the test errors are calculated using the test data. The procedure is repeated 50 times with γ = 0.4, 0.5, 0.6, respectively, and the distributions of miss-classification errors are displayed in Figure 7. The classifier AC+ (HH+), equipped with the data-driven choice of the cut-off parameter α, tends to improve the performance compared to the classifier AC (HH) across all the set ups. Moreover, the alternative implementation of the data augmentation Gibbs sampler (HH, HH+) enjoys 2.5-3 times higher effective sample size compared to AC, AC+.
Extension: Sparse Binary Logistic Regression
Maintaining parity with Section 2.3, we very briefly consider compressed logistic regression models of the form (5.1), where Logit: t ↦ log{t/(1 − t)}, 0 < t < 1; the projection matrix is m × p with m < min(n, p); and β = (β_1, ..., β_m)^T are the coefficients on the compressed predictors, which a priori follow some distribution π(•). The data augmentation mechanism that we adopt here, known as the Polya-Gamma data augmentation scheme (Polson et al., 2013), has enjoyed the most empirical success in practice.
To introduce the methodology, we first present the definition of the Polya-Gamma family of distributions. The Polya-Gamma distribution PG(b, 0), b > 0 (Polson et al., 2013), is defined as the distribution with Laplace transform t ↦ cosh^{−b}(√(t/2)). The general form PG(b, c), b > 0, of the Polya-Gamma distribution (Polson et al., 2013) has a probability density function obtained by exponential tilting of the PG(b, 0) density. This family of distributions has been carefully constructed to yield a simple Gibbs sampler for the Bayesian logistic regression model. We assume π(β) ≡ N(μ, Σ), and to sample from the posterior distribution using the Polya-Gamma method, we simply iterate two steps (5.2): draw the latent variables z_i | β ∼ PG(1, x_i^T β), i = 1, ..., n, and then draw β | z, y ∼ N(V_z(X^T κ + Σ^{−1}μ), V_z), where V_z = (X^T Ω X + Σ^{−1})^{−1}, κ = (y_1 − 1/2, ..., y_n − 1/2)^T, and Ω is a diagonal matrix with Ω_ii = z_i. Notably, the two basic differences of the sampler in (5.2) from the AC sampler in (2.4)-(2.5) for probit regression are that the full conditional distribution of [β | •] is a scale mixture of Gaussians rather than a location mixture, and that the full conditional distributions of [z_i | •] are Polya-Gamma rather than truncated normal. From here on, we refer to the data augmentation based Gibbs sampler in (5.2) as Algorithm 3: PG. As an alternative, we also consider PG equipped with the adaptive choice of the cut-off α in (2.5), referred to as PG+.
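To make the latent draw concrete, here is an approximate PG(b, c) sampler obtained by truncating the infinite Gamma-convolution representation in Polson et al. (2013). It is only an illustrative sketch; production code uses an exact alternating-series sampler rather than truncation:

```python
import math
import random

def pg_sample(b, c, K=200, rng=random):
    """Approximate draw from PG(b, c) via the truncated sum
        omega ~= (1/(2 pi^2)) * sum_{k=1}^{K} g_k / ((k - 1/2)^2 + c^2/(4 pi^2)),
    with g_k i.i.d. Gamma(b, 1). The exact mean is (b/(2c)) tanh(c/2),
    which tends to b/4 as c -> 0."""
    shift = c * c / (4.0 * math.pi * math.pi)
    total = 0.0
    for k in range(1, K + 1):
        g_k = rng.gammavariate(b, 1.0)
        total += g_k / ((k - 0.5) ** 2 + shift)
    return total / (2.0 * math.pi * math.pi)
```

With b = 1 and c set to the current linear predictor of observation i, this produces the latent z_i of the two-step sweep described above.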
We consider simulated examples to demonstrate the efficacy of the proposed extension. We generate observations from the high-dimensional probit regression model, with the specifications described in Sections 3.1-3.2. Note that we deliberately do not generate data from a high-dimensional logit regression model, in order to assess our methodology under mild model misspecification.
To conduct inference, we consider the Gibbs sampler augmented with the Polya-Gamma scheme in the latent space. We continue to utilize the compression and MCMC specifications of the simulations presented in Sections 3.1-3.2. We present the miss-classification error rates of the classifiers, averaged over the repetitions with the corresponding standard errors, under various simulation scenarios in Table 6. While all the versions of our methodology enjoyed similar accuracy, the classifier equipped with the data-driven choice of the cut-off parameter α seems to slightly improve on the default choice α = 0.5. We also present the effective sample sizes of the classifiers, averaged over the repetitions with the corresponding standard errors, under various simulation scenarios. The effective sample size of PG/PG+ drops drastically as ρ changes from 0 to 0.9. In order to keep the focus of the document on high-dimensional probit regression, and given the non-robust simulation results with respect to co-linearity in this set up, we leave this as an avenue for future enquiry aimed at designing MCMC schemes robust to the presence of co-linearity in the design matrix.
Conclusions
In this article, we presented efficient data augmentation based Gibbs samplers for Bayesian high-dimensional probit and logit models. Focusing on the high-dimensional probit regression model, we demonstrated that the naive implementation of the data augmentation based Gibbs sampler is not robust to the presence of co-linearity in the design matrix, a set up ubiquitous in n < p problems, and considered a simple fix based on joint updates of the parameters in the latent space that circumvents this issue. With a computationally efficient MCMC scheme in place, we introduced an ensemble classifier: we first create R (25 to 50) projected copies of the design matrix, and then run R classification models with the R projected design matrices in parallel. Finally, we combine the output from the R replications via an adaptive voting scheme reminiscent of leave-one-out cross-validation. Notably, each projected design matrix is n × m, compared to the actual design matrix, which is n × p. Since m ≪ p, each projected design matrix induces a significantly smaller storage burden. Moreover, and perhaps more importantly, since our scheme is inherently parallelisable, it is extremely computationally convenient and capable of taking advantage of modern multi-core computing environments.
In principle, ensembles of data augmentation based Gibbs samplers like ours can be developed for high-dimensional multinomial probit or logit models with ordinal as well as nominal categories. Ensuring the stability of the resulting samplers is an interesting avenue for future enquiry. For the sake of brevity of presentation, we do not explore such extensions here. Moreover, a criticism of compressed regression frameworks is their inability to carry out variable selection, and this too provides scope for future exploration.
Figure 5 :
Figure 5: Leukemia data: box plots of miss-classification error rates of AC, AC+, HH, HH+ over 50 random splits of 72 samples, where 100 × γ % of the samples are set as training samples.The three plots from left to right correspond to γ = 0.5, 0.6, 0.7, respectively.
Figure 6 :
Figure 6: Lung cancer data: box plots of miss-classification error rates of AC, AC+, HH, HH+ over 50 random splits of 181 samples, where 100 × γ % of the samples are set as training samples.The three plots from left to right correspond to γ = 0.5, 0.6, 0.7, respectively.
Figure 7 :
Figure 7: Prostate cancer data: box plots of miss-classification error rates of AC, AC+, HH, HH+ over 50 random splits of 102 samples, where 100 × γ % of the samples are set as training samples. The three plots from left to right correspond to γ = 0.5, 0.6, 0.7, respectively.
"year": 2023,
"sha1": "b5bc17b5d7bff5641e858e1c528afd13eebcc6cc",
"oa_license": "CCBY",
"oa_url": "https://jds-online.org/journal/JDS/article/1337/file/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "177b209710fc0e47977a8843e65bd84bf8f6ed9b",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": []
} |
Isolation and Molecular Characterization of the Transformer Gene From Bactrocera cucurbitae (Diptera: Tephritidae)
transformer (tra) is a switch gene of sex determination in many insects, particularly in Dipterans. However, the sex determination pathway in Bactrocera cucurbitae (Coquillett), one of the most destructive pests on earth, remains largely uncharacterized. In this study, we have isolated and characterized one female-specific and two male-specific transcripts of the tra gene (Bcutra) of B. cucurbitae. The genomic structure of Bcutra has been determined, and multiple conserved Transformer (TRA)/TRA-2 binding sites have been found in Bcutra. BcuTRA is highly conserved relative to its homologues in other tephritid fruit flies. Gene expression analysis of Bcutra at different developmental stages demonstrates that the female transcript of Bcutra appears earlier than its male counterparts, indicating that maternal TRA is inherited in eggs and might play a role in the regulation of TRA expression. The conservation of the protein sequence and sex-specific splicing of Bcutra, together with its expression patterns during development, suggests that Bcutra is probably the master gene of sex determination of B. cucurbitae. Isolation of Bcutra will facilitate the development of a genetic sexing strain for its biological control.
Sex determination, which generates female and male dimorphism, is a fundamental characteristic of animals and an essential component of sexual reproduction that is important for the continuity of life. The sex determination of insects has been heavily studied and is one of the best understood systems (Sanchez 2008, Bachtrog et al. 2014, Bopp et al. 2014). Several genetic mechanisms determine the sexual fate in different insects (Sanchez 2008, Gempe et al. 2009, Hediger et al. 2010, Shukla and Palli 2012, Verhulst et al. 2013, Kiuchi et al. 2014). In general, the primary signals that initiate the sexual fate are perceived by early effectors, and this genetic information is conveyed in early embryonic stages to determine sex and sustained for the sexual identity throughout the whole life of an organism.
The genetic mechanism of sex determination in Drosophila melanogaster has been extensively examined. In D. melanogaster, the primary signal has been considered to be the ratio of X chromosomes to autosomes (X:A), which actually represents the interaction of the proteins encoded by X-linked elements with the proteins encoded by autosomes (Erickson and Quintero 2007). Sex lethal (Sxl), which acts as a binary switch gene, can be switched on or off by the primary signals (Nagoshi et al. 1988). When the X:A ratio is 1:1 (XX:AA), the early female-specific promoter of Sxl is activated, producing an early female-specific SXL protein. This early protein enables the generation of the late SXL protein, which can maintain the female-specific splicing of its own pre-mRNA by autoregulation. The late SXL protein also directs the female-specific splicing of its subordinate gene tra, generating a functional TRA protein (Boggs et al. 1987, Nagoshi et al. 1988, Belote et al. 1989, Inoue et al. 1990). TRA and the non-sex-specific Transformer-2 (TRA-2) form a complex to regulate the female-specific splicing of the pre-mRNAs of doublesex (dsx) and fruitless (fru), leading to the generation of a non-functional FRU^F peptide and a functional DSX^F protein that stimulates the downstream genes to adopt the female pathway of development. On the contrary, when the ratio of X:A is 0.5 (XY:AA or XO:AA), the early promoter of Sxl is not activated and no early SXL protein is synthesized. Consequently, the pre-mRNA of Sxl adopts the male-specific splicing mode by default and yields a male-specific transcript with premature in-frame stop codons, resulting in a truncated non-functional SXL peptide. The absence of functional SXL protein leads to a cascade of its downstream genes with the male-specific splicing mode, including the generation of non-functional TRA and functional male-specific DSX^M and FRU^M proteins that promote male sexual development (Burtis and Baker 1989, Hoshijima et al. 1991, Tian and Maniatis 1993).

In contrast to D. melanogaster, the primary signal of sex determination in Ceratitis capitata (Wiedemann), a tephritid fruit fly, is an uncharacterized male determining factor (M factor) on the Y chromosome (Willhoeft and Franz 1996). It has been proposed that the Y-linked M factor regulates the sex-specific expression of the gene tra by suppressing maternal TRA. Sxl, however, is expressed in both sexes regardless of the existence of the M factor (Saccone et al. 1998), suggesting that Sxl probably does not have the switch function it has in D. melanogaster. In XX females, where the M factor is absent, the splicing of the tra homologue of C. capitata (Cctra) is female specific and produces a functional TRA protein, and CcTRA in turn facilitates its own female-specific splicing through an autoregulatory loop (Gabrieli et al. 2010). The CcTRA/CcTRA-2 complex has been postulated to bind to the putative TRA/TRA-2 binding sites that exist only in the male-specific exons and to inhibit the default male-specific splicing mode. Therefore, the female-specific splicing of dsx pre-mRNA is activated, producing a functional female-specific DSX^F protein and promoting female sexual development and differentiation. In XY males, however, the presence of the M factor leads to the male-specific splicing of Cctra, which generates a truncated non-functional TRA peptide. Without functional TRA, the downstream gene dsx is expressed as the functional male-specific DSX^M (Salvemini et al. 2009, Gabrieli et al. 2010).
Although the primary signals of sex determination differ among species, the downstream genes are relatively conserved, such as tra, tra-2, and dsx. tra has also been isolated and characterized from other Dipteran flies, such as Bactrocera dorsalis (Peng et al. 2015, Laohakieat et al. 2016), B. oleae (Lagos et al. 2007), B. tryoni, B. jarvisi, Anastrepha suspensa (Schetelig et al. 2012), A. obliqua (Ruiz et al. 2007), Lucilia cuprina (Concha and Scott 2009), L. sericata, Cochliomyia hominivorax, and C. macellaria (Li et al. 2013). Their molecular organizations and sex-specific splicing patterns are similar to those found in C. capitata. In females, the male-specific exons of those tra genes are spliced out and the resulting transcript is female-specific and can be translated into a functional TRA protein; in males, however, the final transcript includes the male-specific exons with in-frame stop codons, and therefore only truncated non-functional TRA peptides are translated. The putative TRA/TRA-2 binding sites found in the male-specific exons of the tra genes of all the species mentioned above suggest that the regulation of their sex-specific splicing is also similar to the mode in C. capitata, i.e., an autoregulatory mechanism may also occur in those tra genes.
The master role of tra in determining sexual dimorphism makes tra a potential target gene for genetic control of pests. Previous studies have shown that XX embryos that are supposed to develop into females were reversed to XX pseudomales when the expression of tra was transiently knocked down by microinjecting dsRNA of tra into the embryos of many Dipteran flies, such as C. capitata, B. dorsalis (Peng et al. 2015, Laohakieat et al. 2016), B. oleae (Belote et al. 1989), B. tryoni (Raphael et al. 2014), C. macellaria, L. sericata (Li et al. 2013), A. suspensa (Schetelig et al. 2012), and L. cuprina (Concha and Scott 2009). This ability to reverse XX females to pseudomales can be highly advantageous for the application of the sterile insect technique (SIT), in which male-only release is much more efficient than the release of both sterile males and females (Rendon et al. 2004). Moreover, the region of male-specific exons of tra genes, which acts as an intron in females, has been applied in female-specific lethality control strategies in some species, and the technology has shown great potential in genetic control of pests. For example, a female-specific autocidal genetic system has been established in C. capitata through the insertion of a Cctra intron into the gene of a heterologous tetracycline-repressible transactivator (tTAV). In this system, the tTAV transcript is disrupted in males, as the Cctra intron is not spliced out, while the female tTAV transcript is complete due to the removal of the Cctra intron, therefore generating a functional tTAV protein that acts as a transactivator and is toxic to cells when overexpressed (Fu et al. 2007). Similar systems have been established for L. cuprina (Li et al. 2014) and C. hominivorax (Concha et al. 2016).
The melon fly Bactrocera cucurbitae (Diptera: Tephritidae: Dacini) is widely distributed in temperate, subtropical, and tropical regions of the world, causing severe damage in many countries, particularly China and India. It has been reported that B. cucurbitae damages over 81 host plants and is a very destructive pest of cucurbitaceous vegetables, such as bitter gourd, cucumber, and pumpkin (Dhillon et al. 2005). Many management methods have been applied for its control, one of which is SIT. SIT can be improved by male-only release, which can be achieved by the introduction of female-specific lethality obtained from molecular and genetic modification methods (Rendon et al. 2004, Fu et al. 2007, Li et al. 2014, Concha et al. 2016). However, the mechanism of sex determination and differentiation in B. cucurbitae is still obscure; in particular, the transformer gene of B. cucurbitae has not been identified yet, and the role of transformer in sex determination of B. cucurbitae remains unclear.
In this article, we isolate and characterize the transformer gene of B. cucurbitae and examine its expression at different developmental stages. Phylogenetic tree analysis of BcuTRA demonstrates that it is very similar to the TRA proteins from other tephritid insects. Our results will be beneficial to the understanding of sex determination of B. cucurbitae, expand our knowledge of insect sex determination, and establish the theoretical basis for its genetic control in the future.
Rearing of B. cucurbitae
The melon flies used in this study were collected on the City West campus of Hainan University and maintained in the laboratory at 26 °C, 70% RH, and a photoperiod of 14:10 (L:D) h. The larvae were reared on artificial diet (Zhou et al. 2016), and the grown larvae were transferred into small plastic boxes with sand before pupation. After 7 d, the pupae were transferred into insect cultivation cages for eclosion. The adults were fed water and a protein-rich food consisting of 1:2 brewer's yeast powder/sugar (w/w).
Isolation of Bcutra
To isolate Bcutra, total RNA of B. cucurbitae female adults was prepared using TRI Reagent (Sigma-Aldrich) following the manufacturer's instructions. The total RNA was treated with DNase I (TaKaRa, Dalian, China), extracted with phenol/chloroform, precipitated with ethanol, and resuspended in nuclease-free water to be used directly for reverse transcription polymerase chain reaction (RT-PCR). The first strand of cDNA was synthesized with an oligo(dT) primer using the TaKaRa Prime Script RT-PCR Kit (TaKaRa, Dalian, China). Bcutra cDNA was amplified by PCR with the primers F311+ and R2793- (Supp Table 1 [online only]). These primers were designed on the basis of the alignment of cDNA sequences and protein sequences from B. dorsalis, B. oleae, B. jarvisi, B. tryoni, and B. correcta. This procedure yielded a specific amplification product of 1,165 bp in length. To identify the 5′ and 3′ ends of the Bcutra transcript, 5′ and 3′ rapid amplification of cDNA ends (RACE) reactions were performed on the 5′ and 3′ RACE cDNA libraries that were made from B. cucurbitae females using the SMARTer RACE 5′/3′ Kit (Clontech). Two rounds of PCR were performed with specific primers complementary to the RACE adaptors and gene-specific primers. For the 3′ RACE, the primer pairs for the first and second (nested) rounds of PCR were F2715+/UPM and F2765+/NUP, respectively. For the 5′ RACE, the primer pairs for the first and second (nested) rounds of PCR were R1625-/UPM and R338-/NUP, respectively.
To determine the male-specific transcripts of Bcutra, a similar RT-PCR using male total RNA was carried out with the primers F11+ and R2865-.
To determine the genomic structure of Bcutra, genomic DNA was prepared from adults of B. cucurbitae using the Wizard Genomic DNA Purification Kit (Promega), and the Bcutra gene was amplified by PCR with the primers F11+ and R2865-.
All PCR products were purified with the TaKaRa MiniBEST Agarose Gel DNA Extraction Kit (TaKaRa, Dalian, China) and sequenced commercially (Huada Gene, Shenzhen, China).
Sequence Analysis
Sequence similarity and alignment analyses were performed using the BLAST program on the NCBI website. Multiple sequence alignment of TRA proteins from different dipterans was performed using the software DNAMAN (Lynnon Corp.). A neighbor-joining distance tree of TRA proteins was constructed using MEGA6 (Tamura et al. 2013), and its reliability was assessed by the bootstrap method.
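The neighbor-joining construction behind a tree such as the MEGA6 one can be sketched in pure Python. This is a generic, illustrative re-implementation only; the taxon names and distances below are invented, and a real analysis would use dedicated software such as MEGA6.

```python
def neighbor_joining(names, dist):
    """Classic neighbor-joining on a symmetric dict-of-dicts distance matrix.

    Returns an unrooted tree as nested tuples (internal nodes are pairs).
    """
    nodes = list(names)
    while len(nodes) > 2:
        n = len(nodes)
        # net divergence of each node
        r = {i: sum(dist[i][k] for k in nodes if k != i) for i in nodes}
        # pick the pair minimizing the NJ criterion Q(i, j)
        best = None
        for a in range(n):
            for b in range(a + 1, n):
                i, j = nodes[a], nodes[b]
                q = (n - 2) * dist[i][j] - r[i] - r[j]
                if best is None or q < best[0]:
                    best = (q, i, j)
        _, i, j = best
        u = (i, j)                      # new internal node joining i and j
        dist[u] = {}
        for k in nodes:
            if k in (i, j):
                continue
            d = 0.5 * (dist[i][k] + dist[j][k] - dist[i][j])
            dist[u][k] = d
            dist[k][u] = d
        nodes.remove(i)
        nodes.remove(j)
        nodes.append(u)
    return tuple(nodes)

# Toy additive matrix for the tree ((A,B),(C,D)); distances are invented.
taxa = ["A", "B", "C", "D"]
D = {"A": {"B": 2, "C": 5, "D": 5},
     "B": {"A": 2, "C": 5, "D": 5},
     "C": {"A": 5, "B": 5, "D": 2},
     "D": {"A": 5, "B": 5, "C": 2}}
tree = neighbor_joining(taxa, D)  # (('A', 'B'), ('C', 'D'))
```

Bootstrap support, as used in the paper, would be obtained by resampling alignment columns, rebuilding the tree for each resample, and counting how often each grouping recurs.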
Expression of Bcutra Over Development by Semiquantitative RT-PCR
To analyze the temporal expression of Bcutra, total RNAs from different developmental stages (eggs at 0-0.5, 0.5-1, 1-3, 3-6, 6-12, 12-24, and 24-48 h after egg laying; mixed-sex first, second, and third instar larvae; mixed-sex pupae; immediately eclosed females and males) were prepared using TRI Reagent (Sigma-Aldrich) and treated with DNase I (TaKaRa, Dalian, China). The synthesis of the first strand of cDNA was performed using the TaKaRa PrimeScript RT-PCR Kit (TaKaRa, Dalian, China) following the manufacturer's instructions. Semiquantitative RT-PCRs were performed using the primer pairs F11+/R1573- and F311+/R1063-. The RT-PCR product of the reference gene α-tubulin, amplified with the primer pair a-tubF and a-tubR, was used as a control.
All the primers used in this article are listed in Supp Table 1 (online only).
Results
Gene Structure and Splicing Pattern of Bcutra
To obtain the Bcutra transcription units, we carried out RT-PCR and RACE experiments using total RNA from females and males to obtain full-length cDNAs. The results revealed that Bcutra undergoes sex-specific splicing (Fig. 1A), similar to tra from other Bactrocera species (Lagos et al. 2007, Laohakieat et al. 2016) and other dipterans (Verhulst et al. 2010, Geuverink and Beukeboom 2014). B. cucurbitae females contain only one transcript of 1,685 nt (GenBank: KY616908), which comprises a long open reading frame. Males, however, contain two different transcripts (male 1 and male 2) of 2,041 nt (GenBank: KY616909) and 2,424 nt (GenBank: KY616910), both of which are prematurely terminated because multiple in-frame stop codons exist in the male-specific exons. The full-length Bcutra gene (GenBank: KY616911) is 3,038 bp. The female and male transcripts share the five exons designated exons 1A, 1B, 2, 3, and 4, whereas the exons MS1, MS2, and MS3 are male specific (Fig. 1A).
The translation start codon is located on the second exon. The TRA protein in females consists of 420 amino acids with a predicted molecular weight of 48.85 kDa and a pI value of 11.38. No transmembrane structure was found using the prediction tool TMHMM (Krogh et al. 2001). Both male transcripts start translation at the same position as the female transcript but produce only a prematurely truncated, nonfunctional peptide of 66 amino acids, as the first stop codon (TAA) lies in the MS1 exon at position 371 bp (Fig. 1A). Based on similarity analysis with D. melanogaster (Qi et al. 2007), B. dorsalis (Laohakieat et al. 2016), C. capitata, and Anastrepha spp. (Ruiz et al. 2007), six TRA/TRA-2 binding sites, one TRA-2 intronic splicing silencer (ISS) sequence (Qi et al. 2007), and two type B RBP1 binding sites (Heinrichs and Baker 1995, Qi et al. 2007) were found in the Bcutra gene (Fig. 1B). Moreover, BcuTRA contains 10 highly conserved amino acids in the region 284-294, characteristic of the serine-arginine dipeptide (SR) family, which functions in the regulation of protein interactions and specific splice-site recognition (Manley and Tacke 1996, Lagos et al. 2007).
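As a rough cross-check of figures like the 48.85 kDa quoted above, a protein's molecular weight can be approximated by summing average residue masses and adding one water molecule for the chain termini. The mass table below is approximate, and this snippet is only an illustration, not the tool used by the authors.

```python
# Approximate average residue masses in daltons (amino acid minus water).
RESIDUE_MASS = {
    'G': 57.05, 'A': 71.08, 'S': 87.08, 'P': 97.12, 'V': 99.13,
    'T': 101.10, 'C': 103.14, 'L': 113.16, 'I': 113.16, 'N': 114.10,
    'D': 115.09, 'Q': 128.13, 'K': 128.17, 'E': 129.12, 'M': 131.19,
    'H': 137.14, 'F': 147.18, 'R': 156.19, 'Y': 163.18, 'W': 186.21,
}
WATER = 18.02  # one water per chain for the free termini

def peptide_mass(seq):
    """Approximate average molecular weight (Da) of a peptide sequence."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

peptide_mass("G")  # ~75.07 Da, the mass of free glycine
```

Summing such residue masses over a 420-residue sequence is how a value in the tens of kDa, like the one reported for BcuTRA, is obtained.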
Amino Acid Sequence Alignment and Phylogenetic Analysis of Bcutra
In order to explore the phylogenetic relationships of TRA proteins in dipterans, including B. cucurbitae, B. dorsalis, B. oleae, B. correcta, B. tryoni, B. jarvisi, A. suspensa, C. capitata, and D. melanogaster, multiple sequence alignment and phylogenetic analysis were performed. The results showed that the TRA proteins of these dipteran insects are very similar, exhibiting 71% amino acid identity (Fig. 2A). BcuTRA exhibits the highest degree of similarity with TRA from B. oleae, to which it is 74% identical. The next highest similarity is with the TRA protein of B. dorsalis, which is 71% identical. On the other hand, BcuTRA is only 37% identical to the TRA of D. melanogaster. Indeed, Drosophila TRA is considerably shorter than BcuTRA, at only 197 amino acids, and lacks the amino-terminal domain of tephritid TRA.
The phylogenetic tree of 15 dipteran TRA proteins was reconstructed using the neighbor-joining method with 1,000 bootstrap replicates. The results revealed that the phylogenetic relationships among TRA proteins from different dipteran species agree closely with the taxonomic relationships (Fig. 2B). The species of the genus Bactrocera, including B. correcta, B. cucurbitae, B. dorsalis, B. jarvisi, B. tryoni, and B. oleae, form one cluster, while Anastrepha species cluster into the Anastrepha group. The genera Bactrocera, Anastrepha, and Ceratitis are grouped into the Tephritidae branch. As expected, TRA from B. cucurbitae and other Bactrocera species is closer to the TRA proteins of other tephritid insects than to those of Muscidae, Calliphoridae, and Drosophilidae.
Developmental Expression of Bcutra
To determine the expression pattern of Bcutra at various developmental stages of B. cucurbitae, RT-PCR was performed with primers that amplify a region spanning the shared first and third exons of Bcutra, generating products of different sizes from the female and male transcripts (Fig. 3A). As shown in Figure 3B, the female Bcutra transcript appeared from the early embryonic stages through to pupae, as an amplification product of 366 bp was always detected at these stages. Moreover, the female transcript appeared earlier than the male transcripts, as the 722- and 1,105-bp amplification products from the male transcripts were only detected in embryos from 2 h after egg laying (Fig. 3B). In order to determine the expression of Bcutra in males more precisely, the male-specific transcripts were amplified with the primers F311+, which is common to the transcripts of males and females, and R1063-, which is located on a male-specific exon. The results revealed that the male-specific transcripts could be detected in embryos from 1 h after egg laying, although the amplification bands were faint.
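Because the female transcript splices out the male-specific exons, the diagnostic products differ in size (366 bp for the female transcript; 722 and 1,105 bp for the male transcripts). A hypothetical helper for reading such a gel might look like this; the band sizes are taken from the text, but the sizing tolerance is an assumption, not a value from the paper.

```python
FEMALE_BAND = 366           # bp, exon 1-3 product from the female transcript
MALE_BANDS = (722, 1105)    # bp, products retaining male-specific exons

def transcripts_present(observed, tol=15):
    """Map observed gel band sizes (bp) to transcript classes.

    `tol` is an assumed sizing tolerance for gel electrophoresis.
    """
    def seen(size):
        return any(abs(b - size) <= tol for b in observed)
    present = []
    if seen(FEMALE_BAND):
        present.append("female")
    if any(seen(m) for m in MALE_BANDS):
        present.append("male")
    return present

transcripts_present([366])              # ['female']
transcripts_present([720, 1100, 365])   # ['female', 'male'] (mixed-sex embryos)
```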
Discussion
In this study, the transformer gene of the tephritid pest B. cucurbitae was isolated using RT-PCR and RACE and characterized by multiple methods. Our data show that the transformer gene Bcutra of B. cucurbitae undergoes sex-specific alternative splicing, similar to that of other tephritid fruit flies such as B. oleae, C. capitata, and A. suspensa. Bcutra in females produces a female-specific transcript that encodes a functional TRA protein, whereas Bcutra in males generates two male-specific transcripts that are translated into a nonfunctional truncated peptide, because multiple in-frame stop codons exist in the male transcripts.
The structure of Bcutra and the lengths of its transcripts in this work are significantly different from the previous transcriptome assembly by Geib and colleagues, in which one female-specific and two male-specific transcripts were predicted (Sim et al. 2015). The first exon predicted by them includes the entire sequence of the first exon identified in our work except for the 10 bp (ACATATCTAT) at the very 5′ end. Moreover, the 3′ end of Bcutra in our work is 760 bp shorter than the one in the previous study. The polyadenylation signal sequence AATAAA exists at the 3′ end of the Bcutra we isolated, and a stretch of poly(A) is present at the 3′ end of the transcripts from the 3′ RACE experiments, suggesting that our results are reliable. Although the female-specific transcripts from the two studies differ, the two TRA proteins predicted from the transcripts have the same amino acid sequence. In addition, the second exons of the male-specific transcripts differ in size: the second exons of the male 1 and male 2 transcripts in our work are 406 and 946 bp, respectively, whereas in the previous study they are 1,291 and 1,517 bp.
Our study demonstrates that the BcuTRA protein contains a region of serine-arginine dipeptides (Fig. 2A), suggesting that BcuTRA belongs to the SR protein family, which is likely to play an important role in protein-protein interactions and splicing control. Although no RNA-binding protein domain exists in BcuTRA, there are TRA/TRA-2 binding sites, a TRA-2 ISS sequence, and RBP1 binding sites in the male-specific exon region, which belongs to the second intron of the female-specific pre-mRNA. A study on TRA-2 of B. cucurbitae has shown that TRA-2 undergoes no sex-specific alternative splicing and belongs to the SR protein family with an RNA-binding domain. These results indicate that BcuTRA may interact with TRA-2 to maintain the splicing of Bcutra, i.e., to autoregulate its own splicing, similar to the splicing of tra in other tephritid fruit flies. In previous studies of B. dorsalis, C. capitata, A. suspensa, and others, TRA has been found to interact with TRA-2 to form the TRA/TRA-2 complex, which binds the female-specific exon of the downstream gene dsx (Salvemini et al. 2009, Schetelig et al. 2012). The phylogenetic tree analysis of dipteran TRA proteins demonstrates that BcuTRA clusters with the TRA proteins of the genus Bactrocera and is genetically very close to the genera Ceratitis and Anastrepha. As discussed below, the female-specific Bcutra mRNA appears earlier than its male-specific counterparts (Fig. 3), indicating the existence of maternal Bcutra mRNA in embryos of B. cucurbitae. However, the sex determination of XY embryos is not disturbed by the maternal Bcutra mRNA, suggesting that its function is inhibited there. Taken together, we propose the sex determination pathway of B. cucurbitae shown in Figure 4. In B. cucurbitae, maternal Bcutra mRNA in XX embryos leads to the translation of functional BcuTRA, which initiates the autoregulatory loop of female-specific splicing of the zygotic Bcutra pre-mRNA transcript.
The newly translated zygotic BcuTRA and TRA-2 form the BcuTRA/TRA-2 complex to maintain the autoregulation of Bcutra and control the female-specific splicing of the downstream gene dsx, which activates female sexual development. In contrast, in XY embryos a Y-linked M factor probably also exists in B. cucurbitae and prevents the expression, or disrupts the function, of maternal Bcutra mRNA. Therefore, the initiation of the autoregulatory loop is suppressed and no functional BcuTRA is translated, resulting in the male-specific splicing of zygotic tra and dsx, and consequently male development.
The expression patterns of Bcutra at different developmental stages demonstrate that the female-specific transcript appears at very early embryonic stages. The male-specific transcripts, however, appear only from approximately 1 h after egg laying, later than their female counterpart. This pattern is similar to that of B. dorsalis. Such differences are in fact common, as previous studies have shown that the time of appearance of the male-specific transcripts varies among species. For example, the male-specific transcripts are only detected 4-5 h after egg laying in C. capitata (Gabrieli et al. 2010) and are not detected until the first larval stage in Lucilia cuprina (Concha and Scott 2009). Our research suggests that the upstream regulators of Bcutra may act in the sex determination pathway earlier in B. cucurbitae than in C. capitata and L. cuprina.
The isolation and characterization of Bcutra reveal some interesting properties that make it a potential target for genetic control of B. cucurbitae. The conservation of the TRA/TRA-2 binding sites, TRA-2 ISS sequence, RBP1 binding sites, and SR region of tephritid tra, including in C. capitata and B. cucurbitae, suggests that it is likely to be possible to use the strategy of female-specific lethality to genetically control B. cucurbitae. Although RNAi knockdown of Bcutra has not yet been carried out, the high conservation of Bcutra indicates that a very similar result, the reversal of XX genotypic females into XX phenotypic males, will probably occur when Bcutra is knocked down by RNAi techniques. Therefore, genetic manipulation of Bcutra can be applied to improve the efficiency of SIT against this pest, or to control the pest in combination with other genetic targets, such as genes whose mutation or disrupted expression renders the pest lethal or sterile at a specific developmental stage.
(Figure 3 caption) The male-specific transcripts are amplified using RT-PCR with the primers F311+ and R1063-. M: DL1000 DNA marker; E0.5-E8: embryos of mixed sexes at 0-0.5, 0.5-1, 1-3, 3-6, 6-12, 12-24, and 24-48 h after egg laying; L1-L3: the first, second, and third instar larvae of mixed sexes; P: pupae of mixed sexes; #: male adults; $: female adults; a-tub: the internal control gene α-tubulin.
Research on the sex determination of tephritid insects has been carried out extensively; however, most studies focus on the downstream regulated genes, such as tra, tra-2, and dsx, and the upstream regulators of sex determination remain unclear. It will be intriguing to study the primary signal of sex determination and its downstream target genes to better understand the whole sex-determination pathway, thereby providing a substantial theoretical basis for research on, and applications of, sex determination in insects.
A Case Report Highlighting the Significance of COVID-19 Unveiling Megaloblastic Anemia and Worsening Dementia in the Elderly
The COVID-19 pandemic resulted in substantial lifestyle changes with significant implications for nutritional health. Factors such as movement restrictions and disruptions in food supply chains restricted the availability of primary sources of essential micronutrients. To highlight this, we present the case of an elderly woman with an underlying subclinical cobalamin deficiency who developed symptomatic megaloblastic anemia requiring hospital admission under lockdown conditions. This exemplifies how changes in diet during the COVID-19 lockdown hastened the onset of B12 deficiency symptoms. Adverse outcomes can be avoided by identifying people at high risk of poor nutritional status and implementing policy initiatives that enhance their nutritional condition. This case report highlights the importance of B12 deficiency during the COVID-19 lockdown, especially for older people, who are at greater risk of malnutrition during such periods for several reasons.
Introduction
Vitamin B12 (cobalamin) plays a vital role in DNA synthesis, cellular metabolism, and myelin sheath maintenance, and its deficiency can lead to significant clinical implications. Previous reports suggested that the prevalence of vitamin B12 deficiency with classic hematological and neurological manifestations is low. However, B12 deficiency appears to be common among the elderly, affecting up to 26% of the elderly population [1]. Patients with severe vitamin B12 deficiency can present with variable manifestations, including megaloblastic anemia, weight loss, anorexia, paresthesia, and, in the most severe form, irreversible cognitive impairment and memory loss [2,3]. In older adults, B12 deficiency can mimic symptoms of age-related cognitive decline and may go undetected [4]. In particular, enforced lockdown measures during COVID-19 disrupted food systems and dietary practices, potentially inducing micronutrient deficiencies [5]. Factors such as movement restrictions and disruptions in food supply chains led to restricted availability of primary sources of essential micronutrients [6]. In addition, the fear of infection, compulsory stay-at-home orders, stress, and anxiety might have led to significant changes in dietary behaviors towards less consumption of nutrient-rich foods, subsequently increasing the risk of micronutrient deficiencies [5]. Previous reports showed a notably high prevalence of micronutrient deficiencies (79%) in elderly patients hospitalized with COVID-19 [7]. This case report describes how COVID-19 lockdown measures from March to July 2020 triggered the development of symptomatic megaloblastic anemia and worsening dementia in an elderly woman.
Subsequent laboratory investigations revealed the presence of megaloblastic anemia, indicated by a drop in hemoglobin levels to 7 g/dL, compared with 11.4 g/dL six months prior, a reticulocyte count of 0.8%, and a mean corpuscular volume (MCV) of 125 fL (Figure 2). Further assessment of her B12 and folate levels revealed a significant B12 deficiency, with a level of 109 pg/mL (normal range: 200 pg/mL to 900 pg/mL) (Table 1). A thorough food history revealed a limited diet devoid of dairy, eggs, and meat and composed mostly of vegetables, likely as a result of COVID-19 lockdown limitations. This restricted diet, together with a progressive loss of appetite, probably caused the B12 deficiency.
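The laboratory picture described above can be summarized programmatically. The B12 reference range is the one quoted in the text, while the hemoglobin and MCV cut-offs are common illustrative thresholds and assumptions here, not clinical guidance.

```python
def assess_b12_anemia(hb_g_dl, mcv_fl, b12_pg_ml,
                      hb_low=12.0, mcv_high=100.0, b12_low=200.0):
    """Flag a macrocytic-anemia pattern consistent with B12 deficiency.

    Thresholds are illustrative defaults; b12_low follows the reference
    range quoted in the report (200-900 pg/mL).
    """
    findings = []
    if hb_g_dl < hb_low:
        findings.append("anemia")
    if mcv_fl > mcv_high:
        findings.append("macrocytosis")
    if b12_pg_ml < b12_low:
        findings.append("B12 deficiency")
    return findings

# The patient's values (Hb 7 g/dL, MCV 125 fL, B12 109 pg/mL) trip all three flags.
assess_b12_anemia(7.0, 125.0, 109.0)
```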
The patient was managed with intramuscular injections of vitamin B12 and subsequent oral supplementation. This treatment improved her serum B12 level to 400 pg/mL, and her memory and other symptoms improved notably. Upon discharge, the patient received comprehensive dietary education and appropriate nutritional supplements.
Discussion
Vitamin B12, also known as cobalamin, is an essential water-soluble vitamin with significant physiological roles in the human body. It is primarily derived from animal-based food sources. Rich sources include liver, clams, sardines, beef, fortified cereals, and dairy products [8]. The vitamin is synthesized exclusively by certain bacteria and is found in animal products due to the bacteria's symbiotic relationship within the animal's gastrointestinal tract. The absorption process of vitamin B12 is complex and involves multiple steps. When ingested, it binds to a protein called haptocorrin (also known as R protein) present in the saliva and gastric juice. In the acidic environment of the stomach, the R protein is removed, and the free cobalamin binds to intrinsic factor (IF). The distal ileum absorbs the IF-cobalamin complex into the bloodstream through specific receptors. Once absorbed, B12 is transported in the bloodstream bound to transcobalamin, forming holotranscobalamin, which is then stored in the mitochondria [9]. Vitamin B12 facilitates the conversion of homocysteine to methionine, an essential amino acid. Methionine is then further converted into S-adenosylmethionine (SAM), which serves as a methyl donor for numerous methylation reactions, including the methylation of DNA. Vitamin B12 is also a crucial component of the enzyme methylmalonyl-CoA mutase, which converts methylmalonyl-CoA to succinyl-CoA, an important step in the breakdown of certain amino acids and lipids. Furthermore, B12 is essential for maintaining the myelin sheath surrounding neurons [10,11]. In turn, vitamin B12 deficiency can lead to various clinical, particularly neurological, complications [12].
Vitamin B12 deficiency can arise from several factors that broadly fall into two categories: decreased intake and impaired absorption. Impaired absorption of vitamin B12 is the most common cause of deficiency, resulting from pernicious anemia, atrophic gastritis, gastrointestinal resection, celiac disease, or medications [13]. In addition, vitamin B12 deficiency can also develop from a poor intake of animal-based food sources or malnutrition [14]. In particular, elderly people often suffer from malnutrition and inadequate food intake, increasing their risk of vitamin B12 deficiency. In addition, atrophic gastritis and protein-bound malabsorption are common in the elderly, which can lead to vitamin B12 deficiency [10,15]. In our case, the patient had a history of stroke and dementia, potentially conferring an increased risk of vitamin B12 deficiency.
Although vitamin B12 deficiency due to limited nutritional intake is rare, we believe that the COVID-19 lockdown measures further exacerbated the underlying subclinical vitamin B12 insufficiency in our case and induced neurological manifestations. Several factors can explain the role of the COVID-19 lockdown measures in the development of the present case's clinical manifestations. The COVID-19 pandemic and associated lockdown measures have had a significant, disproportionate impact on the elderly population, particularly regarding nutritional inadequacy. The physiological changes associated with aging, such as impaired mastication and swallowing, alterations in taste, cognitive decline, and decreased mobility, inherently predispose this population to nutritional challenges [16]. The restrictions imposed during the pandemic might have amplified these vulnerabilities. Socioeconomic factors during lockdowns further hindered the elderly's ability to maintain adequate nutrition. Food insecurity surged due to reduced access, limited availability, escalated food prices, insufficient social support for food procurement, and the relative poverty frequently encountered by the elderly [17]. Additionally, the fear of COVID-19 infection, isolation from family and friends, loss of independence, and loneliness exacerbated depression and anxiety, consequently impacting dietary behaviors and nutritional intake [18].
Another potential explanation for our findings is the impact of COVID-19 on dietary habits. Although the COVID-19 lockdown had beneficial effects on eating practices such as home cooking, previous reports also suggested that the lockdown was associated with negative eating habits. Specifically, the consumption of snacks, potato chips, soups, and alcohol increased, and the intake of fresh foods, meat, and dairy declined. An elevated preference for comfort foods and alcohol was also noted. These unfavorable alterations in dietary practices were largely attributed to limited food availability and increased food prices [18]. Thus, future lockdowns should be accompanied by short- and long-term policies to maintain good lifestyle habits and minimize long-term health impacts.
Extensive research has demonstrated the lasting impact of SARS-CoV-2 infection on neurological functions, including memory and cognition. A study by Xie et al. highlighted mental health issues in COVID survivors, while another analysis revealed that one million Americans suffered from significant memory and concentration problems during the pandemic. Furthermore, human brain research has unveiled accelerated aging, abnormal structural changes, and prolonged neuroinflammatory reactions as a result of COVID-19. These findings suggest that impaired cognition following SARS-CoV-2 infection may be linked to a dysfunctional hypothalamic-pituitary response and reduced serotonin-induced vagal signaling [19,20].
The present case report carries several healthcare and social policy implications. Prior to the pandemic, strategies such as enhancing public transportation, increasing the accessibility of high-quality, affordable food items in local supermarkets, and facilitating participation in food assistance programs were recommended for addressing food insecurity. These strategies remain pivotal in confronting pandemics such as COVID-19. As described, the COVID-19 pandemic has disproportionately impacted elderly individuals. This underlines the need to address the socioeconomic determinants of food availability and dietary quality. Social distancing measures have inadvertently compounded pandemic-related food poverty by restricting elderly individuals' access to food security services. Efforts to alleviate these economic and physical barriers can take several forms, including supplemental financial assistance for food and bills, waiving delivery fees, and providing information and assistance in applying for food assistance programs. Community meal services benefited the elderly during the COVID-19 lockdown [21]. Lastly, strategies should be developed, potentially through biofortification, vitamin supplementation, or government policies, to enhance access to meat and fish among the elderly. Adverse outcomes can be prevented by identifying individuals at high risk of malnutrition and implementing policy initiatives to improve their nutritional status [5].
Conclusions
This case study brought to light the significance of B12 deficiency during the COVID-19 pandemic and its disproportionate effect on the elderly population, specifically through inadequate nutrition. As mentioned before, there are numerous reasons why the elderly population is more susceptible to malnutrition during COVID-19. Strategies for this high-risk population need to be created and put into action to prevent these outcomes. This case report also emphasizes the significance of addressing the socioeconomic factors that influence food availability and dietary quality among the elderly.
FIGURE 1: CTA (computed tomography angiography) showing the occluded common and internal carotid arteries on the left side (indicated by the black arrow, with the right side showing a patent carotid vessel).
Searches for rare and forbidden kaon decays at the NA62 experiment at CERN
The NA62 experiment at the CERN SPS aims at measuring the branching ratio (BR) of the rare K+ → π+νν̄ decay with a precision of ∼10%. This goal will be achieved after two years of data taking by collecting ∼10^13 K+ decays in the fiducial volume. The K+ → π+νν̄ is a "golden mode" in flavor physics because of its precise theoretical prediction. Thanks to the unprecedented kaon flux, it will also be possible to search for many other forbidden processes, including lepton flavor violating modes, sterile neutrinos, and supersymmetric particles. The expected NA62 performance will allow the exclusion limits for several decay modes to be improved. The experiment will start collecting data in late 2014.
Introduction
The kaon system, in spite of its relative simplicity (relatively few decay channels, low final-state multiplicities), offers several opportunities to study the Standard Model (SM) in depth. On one side, flavor-changing neutral current (FCNC) processes are a unique laboratory for studying flavor dynamics as described by the CKM matrix and possible extensions; on the other hand, processes that violate lepton flavor conservation are particularly sensitive to the effects of new physics. FCNC decays (for instance K_L → μ+μ− and K → πl+l−) are highly suppressed in the SM and normally proceed only through second-order loop diagrams (boxes and penguins). Among the flavor-changing neutral current K and B decays, the K → πνν̄ decays play a key role in the search for new physics through the underlying mechanisms of flavor mixing. These decays are strongly suppressed in the SM (the highest CKM suppression) and are dominated by top-quark loop contributions. The SM branching ratios have been computed to an exceptionally high precision with respect to other loop-induced meson decays: BR(K+ → π+νν̄) = (7.81 ± 0.75 ± 0.29) × 10^−11 and BR(K0 → π0νν̄) = (2.43 ± 0.39 ± 0.06) × 10^−11; the uncertainties are dominated by parametric ones, and the irreducible theoretical uncertainties are at the 1% level [1]. The extreme theoretical cleanliness of these decays also persists in certain new-physics scenarios.
Experimentally, the K+ → π+νν̄ decay has been observed by the BNL E787/E949 experiments, with a branching ratio of (1.73 +1.15/−1.05) × 10^−10 [2], based on 7 candidates. The achieved precision is inferior to that of the SM expectation. Because of the excellent precision of the theoretical prediction, even a relatively small difference with respect to the SM could be a clear signal of new physics. The simultaneous measurement in the charged and neutral kaon channels strongly constrains alternative models of physics beyond the SM [3]. In addition, the precise measurement of K+ → π+νν̄ will provide a measurement of V_td alternative to that from B decays.
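A back-of-the-envelope comparison shows that the E787/E949 central value is compatible with the SM prediction within roughly one standard deviation. The sketch below adds the two SM uncertainties in quadrature and, since the measurement lies above the prediction, uses the lower experimental error; treating the asymmetric error this way is a simplifying assumption.

```python
import math

sm = 7.81e-11                                # SM prediction for BR(K+ -> pi+ nu nubar)
sm_err = math.hypot(0.75e-11, 0.29e-11)      # parametric and theoretical errors in quadrature

meas = 1.73e-10                              # BNL E787/E949 central value
meas_err_lo = 1.05e-10                       # lower experimental error (measurement > SM)

z = (meas - sm) / math.hypot(meas_err_lo, sm_err)
# z is about 0.9: the measurement agrees with the SM within uncertainties
```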
The conservation of lepton flavor (LF) is an accidental symmetry of the SM, in the sense that no fundamental reason prevents its possible violation. The observation of a small LF violation in neutrino mixing leads to very small SM predictions for the BRs of lepton flavor violating (LFV) meson decays. For this reason, any observation of an LFV process is a clear signal of physics beyond the SM. Several extensions of the SM can accommodate LFV, including supersymmetry, extra dimensions, and others. Thanks to the availability of high-intensity kaon beams and the clear experimental signature with respect to background processes, LFV searches in the kaon sector have reached upper limits on the BRs at the level of 10^−10. In table 1 we summarize the present status for the main LFV decay channels in charged kaon decays. Together with FCNC
Table 1: Current status of searches for the main LFV modes in kaon decays (entries include the Geneva-Saclay experiment [7]).
PoS(EPS-HEP 2013)358
Searches for rare and forbidden kaon decays at the NA62 experiment at CERN Gianluca Lamanna
and LFV modes, the kaon sector allows one to search for other forbidden processes and exotic particles, such as dark photons and sterile neutrinos, as well as angular momentum violation and others.
The NA62 experiment at the CERN SPS will start in late 2014 to study ultra-rare kaon decays. It will observe ∼10^13 K+ decays in its fiducial volume to carry out a rich program of searches in K+ decays, including the measurement of the BR of the decay K+ → π+νν̄ and of many other LFV and SM-forbidden decays, as well as the search for new particles, as described in this paper.
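These numbers fit together with simple arithmetic: at the SM branching ratio and an assumed 10% signal acceptance, 10^13 decays yield of order 10^2 signal events, which is why "at least" 10^13 decays in the fiducial volume are needed for ∼100 events.

```python
BR_SM = 7.81e-11       # SM prediction for K+ -> pi+ nu nubar
acceptance = 0.10      # assumed signal acceptance
kaon_decays = 1e13     # K+ decays in the fiducial volume

expected_events = kaon_decays * BR_SM * acceptance   # ~78 events
decays_for_100 = 100 / (BR_SM * acceptance)          # ~1.3e13 decays needed
```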
The NA62 experiment
The layout of the NA62 experiment is shown in fig. 1. A high-intensity, high-energy kaon beam is exploited to provide kaon decays in flight over a long decay region. The unseparated hadron beam is obtained from the primary 400 GeV/c proton beam of the SPS accelerator impinging on a beryllium target. About 6% of the particles in the secondary 75 GeV/c beam are kaons (the rest are mainly pions (∼70%), protons (∼23%), and positrons (∼1% after filtering through a W foil)). The identification of the kaons in the beam is done with a differential hydrogen Cherenkov counter (CEDAR). A silicon pixel beam spectrometer (Gigatracker), placed in the 750 MHz hadron beam, measures the momentum and direction of the particles, improving the resolution on the beam momentum from 1% to 0.2% (RMS). The decay region is housed in a ∼80 m long, ∼2.5 m diameter evacuated tube (10^−6 mbar). The charged particles produced in kaon decays (pions, muons, and electrons) are measured by a straw-tube spectrometer (STRAW), composed of four stations and an analyzing magnet integrated directly inside the evacuated decay region. The straw spectrometer is very thin (< 0.5 X0 per chamber) to minimize interactions with the photons coming from the kaon decays. A liquid krypton electromagnetic calorimeter (LKr) measures photons and electrons in the forward direction, while a set of 12 rings of lead-glass blocks (LAVs), placed along the decay region, identifies large-angle photons. A 1 atm neon RICH is used to distinguish between muons and pions in the 15-35 GeV/c range, increasing the particle identification power of the muon identification system (MUVs) placed just beyond the LKr.
PoS(EPS-HEP 2013)358
Searches for rare and forbidden kaon decays at the NA62 experiment at CERN - Gianluca Lamanna

At nominal intensity the main detectors will be exposed to an event rate of 10 MHz. A multilevel trigger system is designed to reduce this rate to a few kHz. The first level (L0) is implemented in hardware, using primitives produced in FPGAs on the readout boards (TEL62), and reduces the rate from 10 MHz to 1 MHz. An additional reduction (at least a factor of 100) is achieved in the software levels (L1 and L2) running in the online PC farm. While the L0 latency is limited to 1 ms, the latency of the software levels is of the order of a few seconds.
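The rate-reduction chain described above can be summarized with simple arithmetic; the factor-of-10 L0 reduction and the (at least) factor-of-100 software reduction are the figures quoted in the text:

```python
# Back-of-the-envelope sketch of the NA62 multilevel trigger reduction
# described above: L0 (hardware) takes 10 MHz down to 1 MHz, then the
# software levels (L1 + L2) provide at least another factor of 100.
input_rate_hz = 10e6                     # nominal detector event rate
l0_output_hz = input_rate_hz / 10        # L0: 10 MHz -> 1 MHz
software_factor = 100                    # combined L1+L2 reduction (at least)
final_rate_hz = l0_output_hz / software_factor
print(final_rate_hz)                     # 10 kHz with exactly a factor 100;
                                         # "a few kHz" once the factor exceeds it
```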
K+ → π+ν ν
The main goal of the NA62 experiment is the measurement of the K+ → π+ν ν decay rate at the 10% precision level, which would constitute a significant test of the SM. The experiment is expected to collect about 100 signal events in two years of data taking, keeping systematic uncertainties and backgrounds as low as possible. Assuming 10% signal acceptance and the SM decay rate, the necessary kaon flux corresponds to at least 10^13 K+ decays in the fiducial volume. In order to achieve a small systematic uncertainty, a rejection factor for generic kaon decays of the order of 10^12 is required, and the background suppression factors need to be measured directly from the data. The signature of the signal is one track in the final state matched with one K+ track in the beam. The missing mass squared, m^2_miss = (P_K − P_π)^2, with P_K and P_π the four-momenta of the K+ and the charged pion, fully describes the kinematics of the decay. Backgrounds come from all the K+ decay modes and from accidental single tracks matched with a K+-like track. The m^2_miss allows two signal regions to be defined, before and after the π0 peak, as shown in fig. 2.
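The missing-mass variable above is a straightforward four-vector computation. The following sketch evaluates m^2_miss for a collinear kaon-pion pair; the momenta are hypothetical round numbers (a 75 GeV/c kaon and a pion in the 15-35 GeV/c window), not measured NA62 tracks, and PDG masses are assumed:

```python
# Illustrative sketch (not NA62 software): m^2_miss = (P_K - P_pi)^2
# from four-momenta in natural units (GeV).
import math

def four_momentum(p, mass):
    """Return (E, px, py, pz) for a particle of momentum p along z."""
    E = math.sqrt(p * p + mass * mass)
    return (E, 0.0, 0.0, p)

def m2_miss(P_K, P_pi):
    """Minkowski square of the difference of two four-vectors."""
    dE = P_K[0] - P_pi[0]
    dp = [a - b for a, b in zip(P_K[1:], P_pi[1:])]
    return dE * dE - sum(x * x for x in dp)

M_K, M_PI = 0.493677, 0.139570   # PDG masses, GeV/c^2
P_K = four_momentum(75.0, M_K)   # 75 GeV/c beam kaon
P_pi = four_momentum(25.0, M_PI) # hypothetical downstream pion track

print(m2_miss(P_K, P_pi))        # ~0.12 GeV^2 for these illustrative values
```

In the real analysis the signal regions are defined as windows in this variable around the kinematically allowed band, away from the two-body K+ → π+π0 peak.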
However, the background in these regions is still orders of magnitude larger than the signal, due to resolution effects, radiative tails, accidental particles and rare decays, as shown in fig. 3. An additional rejection factor is provided by the veto and particle identification systems. Signal events will be selected with a single track in the straw spectrometer, in time with a kaon identified in the CEDAR and measured in the GTK. The kaon identification allows the suppression of most of the background due to interactions of the beam with the residual gas in the decay region. The pion identification will be done by using the RICH and the calorimeters (LKr and the first two stations of the MUV system). Events with signals in the LAVs and other veto systems compatible with a γ hypothesis will be rejected. Finally, two global requirements will be applied: the kaon decay has to take place in the first 60 m of the decay volume, and the measured momentum of the downstream π+ must be between 15 and 35 GeV/c. In table 2 the numbers of signal and background events estimated from the sensitivity studies are reported, normalized to ∼4.5 × 10^12 kaon decays per year.

Table 2: Signal and background from K+ decays estimated from the sensitivity studies (total background < 10 events/year).
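The quoted signal yield can be cross-checked with back-of-the-envelope arithmetic: expected events ≈ number of kaon decays × branching ratio × acceptance. The SM branching ratio used below (~8.4 × 10^-11) is an approximate literature value, not a number taken from this paper:

```python
# Illustrative yield estimate, consistent in order of magnitude with the
# numbers quoted in the text. br_sm is an assumed approximate SM value.
n_kaon_decays_per_year = 4.5e12   # normalization quoted in the text
br_sm = 8.4e-11                   # approximate SM BR(K+ -> pi+ nu nubar)
acceptance = 0.10                 # ~10% signal acceptance assumed in the text
signal_per_year = n_kaon_decays_per_year * br_sm * acceptance
print(round(signal_per_year, 1))  # a few tens of signal events per year
```

With ∼10^13 decays accumulated over two years, this simple product gives of order 10^2 signal events, in line with the ~100 events quoted above.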
"Exotics" and "Forbiddens"
The high intensity of the kaon flux available to the NA62 experiment, combined with a flexible trigger system, allows a rich program of searches for kaon and π0 decays forbidden in the SM to be carried out with unprecedented precision. In table 3 we summarize the most interesting decay modes, with a preliminary estimate of the acceptance (including a rough estimate of the trigger efficiency) in the NA62 experiment. The K+ → π−µ+µ+ decay is particularly interesting from the theoretical point of view. As shown in fig. 4, lepton flavor violation implies the exchange of a virtual Majorana neutrino [8]. This process is similar to neutrinoless nuclear double beta decay, but in this case the second generation is involved. A dedicated trigger will be designed to collect LFV events, possibly by using online computing on GPUs [?]. The present limit on the BR is set by the NA48/2 experiment [6]. NA62 aims at improving the upper limit at least to the level of ∼10^-12.
In addition to LFV modes, other searches for exotic particles will be carried out by the NA62 experiment. The existence of a low-mass boson is required by several theories to explain some anomalies with respect to the SM (the excess of positrons in cosmic rays [10], the DAMA/LIBRA dark matter signal [11], the (g − 2)µ anomaly [12]). This new particle, called either Dark Photon or U boson, can be sought in π0 decays, for instance by studying the π0 → Uγ, with U → e+e−, decay. More than 20% of the total K+ BR has neutral pions in the final state. In two years of data taking the NA62 detectors will be exposed to ∼2.5 × 10^12 π0 decays. The prospect is to improve the exclusion limit for the dark photon in the mass range between tens of MeV/c^2 and 100 MeV/c^2. Several other searches will
be carried out by exploiting the clean environment offered by the kaon system, for example: the search for a light sgoldstino (in K+ → π+π0S), C violation in electromagnetic interactions (in π0 → γγγ), right-handed sterile neutrinos (in π0 → ν ν and K+ → µν), angular momentum conservation (in K+ → π+γ) and many others.
Conclusions
The NA62 experiment at the CERN SPS will start collecting data in 2014. A huge flux of kaons will be exploited to study ultra-rare kaon decays. The main goal of the experiment will be the measurement of the BR of the K+ → π+ν ν decay, which is precisely predicted in the SM. A discrepancy at the level of ∼10% with respect to the predicted value would be a clear signal of physics beyond the SM. With 10^13 K+ decays in its fiducial volume in two years of data taking, the NA62 experiment will have a single-event sensitivity of 10^-12 (10^-11) for a number of K+ (π0) LFV decays, as well as the potential to improve the upper limits in several searches for effects and particles not foreseen in the SM.
Figure 1: Layout of the NA62 experiment at CERN.
Figure 2: Definition of the signal regions in the m^2_miss distribution. The backgrounds are normalized according to their branching ratios; the signal is multiplied by a factor 10^10.
Figure 3: m^2_miss distribution for signal and backgrounds from the main K+ decay modes. The backgrounds are normalized according to their branching ratios; the signal is multiplied by a factor 10^10.
"year": 2014,
"sha1": "46b1fd29f197a800eb6c1feec1b6ef4f76747631",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/180/358/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "5ba0aceb507be04bd5c8e8592b76a870c24bc60a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Results of Revision Surgery and Causes of Unstable Total Knee Arthroplasty
Background: The aim of this study was to evaluate the causes of unstable total knee arthroplasty and the results of revision surgery. Methods: We retrospectively reviewed 24 knees that underwent revision arthroplasty for unstable total knee arthroplasty. The average follow-up period was 33.8 months. We classified the instability and analyzed the treatment results according to its cause. Stress radiographs, postoperative component position, and joint level were measured. Clinical outcomes were assessed using the Hospital for Special Surgery (HSS) score and range of motion. Results: Causes of instability included coronal instability with posteromedial polyethylene wear and lateral laxity in 13 knees, coronal instability with posteromedial polyethylene wear in 6 knees, coronal and sagittal instability in 3 knees (including post breakage in 1 knee), global instability in 1 knee and flexion instability in 1 knee. Mean preoperative/postoperative varus and valgus angles were 5.8°/3.2° (p = 0.713) and 22.5°/5.6° (p = 0.032). Mean postoperative α, β, γ, and δ angles were 5.34°, 89.65°, 2.74°, and 6.77°. The mean joint level changed from 14.1 mm to 13.6 mm from the fibular head (p = 0.82). The mean HSS score improved from 53.4 to 89.2 (p = 0.04). The average range of motion changed from 123° to 122° (p = 0.82). Conclusions: Revision total knee arthroplasty, with or without a more constrained prosthesis, is a definite solution for an unstable total knee arthroplasty. Tailoring the solution to the cause is very important and seems to help avoid unnecessary selection of an over-constrained implant in revision surgery for total knee instability.
Instability is the third most common cause of failure of a total knee arthroplasty among the many causes of revision arthroplasty listed above. 10) This instability can lead to dislocation after TKA, although dislocation is uncommon; when it occurs, it is usually a difficult problem to address. The most frequently reported causes of instability and dislocation after TKA are malpositioning of the implant, flexion-extension gap mismatch, excessive soft tissue release, extensor mechanism incompetence and inappropriate selection of the primary implant. Delayed rupture of the posterior cruciate ligament (PCL), breakage of the polyethylene insert, 11) breakage of the polyethylene post and neurologic diseases are less common causes. 7,12,13) Modes of instability according to direction can be classified as coronal (varus-valgus), sagittal (anteroposterior [AP]), flexion, recurvatum, and global instability. We analyzed the mode of instability and inferred its cause. 10,14,15) This study was designed to classify the factors leading to TKA instability and to evaluate the clinical and radiologic outcomes of revision arthroplasty for unstable TKA. We hypothesized that unstable TKA can present in various forms and that an evaluation of the causes of instability would be helpful in choosing implants and surgical techniques in revision TKA.
Patient Enrollment
We studied a consecutive series of 63 knees in 61 patients that underwent revision TKA for various problems after TKA at our institution (Sun General Hospital, Daejeon, Korea) between December 2004 and December 2010. The procedures were performed by a single surgeon (ISS), and all cases were performed at the same institution. There was no external funding source for this study. Institutional Review Board approval was obtained for this study. All patients expressed various symptoms that may be associated with instability, including a sense of giving way, start-up pain, recurrent effusion, anterior knee pain, and pes anserinus and hamstring tendinitis. No infection was clinically suspected in any patient. We decided to conduct revision surgery in cases that demonstrated significant laxity in the AP direction and/or the valgus-varus direction and/or posterolateral rotational instability, and we verified the relationship between the instability seen on radiographs and the clinical presentation. No loose component was demonstrated by radiographs in any patient, and we confirmed implant stability during the operation. Of the 24 knees with unstable TKA, 10 (42%) had a posterior stabilized design and 14 (58%) a cruciate-retaining (CR) type.
Surgical Technique and Component Evaluation
A longitudinal midline skin incision was used in all cases, and care was taken to incorporate and not cross the previous skin incision. In cases with multiple existing incisions, we chose the most lateral previous incision to minimize disruption of the blood supply. Exposure was obtained by a medial parapatellar arthrotomy in 22 knees (92%), with an additional tibial tubercle osteotomy in 2 knees (8%). We checked component stability and confirmed the causes of the preoperative instability. We performed exchange to a thicker polyethylene when only posteromedial wear of the polyethylene existed, and verified that stability was restored. Prostheses and cement were removed using various tools, such as a small power saw and thin osteotomes, while preserving as much normal bone as possible. A standard cemented prosthesis was used if both collateral ligaments were felt to be competent, and a constrained prosthesis was used if one or both collateral ligaments were incompetent. A hinged prosthesis was not used in any case. All revision prostheses were of cemented design. Of the 24 knees, 7 knees received a cemented standard posterior cruciate ligament substituting (PS) prosthesis (3 Vanguard CR, Biomet Inc., Warsaw, IN, USA; 1 We used extended stems that could be helpful in regaining an attenuated collateral ligament and preventing a toggling effect of the implant. Exceptions were 7 cases of exchange of the polyethylene insert only and 2 cases with confirmed stability of the collateral ligaments in the operative field. An extended femoral stem alone was used in 2 cases, an extended tibial stem alone in 2 cases, and extended femoral and tibial stems together in 11 cases (Table 1).
Surgeries entailing revision of both femoral and tibial components were conducted in 13 knees (54%), of only the femoral component in 2 knees (8%), and of only the tibial component in 2 knees (8%); polyethylene exchange alone was performed in 7 knees (30%) (Table 2). Collateral ligament repair at the medial femoral epicondyle was performed in one knee. Soft tissue balancing of the knee was assessed and the wound was closed.
Range of motion exercises were started on the first postoperative day, and weight bearing as tolerated was allowed on the second postoperative day. The mean operation time for revision surgery was 95 minutes (range, 45 to 185 minutes).
Classification and Evaluation
We classified the 24 unstable knees into 5 types according to their causes: isolated coronal instability; sagittal instability with the knee flexed to 90°; combined coronal and sagittal instability; flexion instability; and global instability. All 24 knees had complete radiographic follow-up. Radiographs were obtained before and after surgery, including AP radiographs obtained with the patient standing and supine.
A lateral radiograph and a skyline patellar radiograph were obtained to assess limb alignment and component position, using the α, β, γ, and δ angles and the status of the joint line. Joint lines were determined on AP radiographs obtained before and after surgery with the patient supine, by measuring the distance between the tip of the fibular head and the distal margin of the lateral femoral condyle preoperatively, and between the tip of the fibular head and the distal margin of the lateral femoral component postoperatively. The skyline patellar radiographs were examined for patellar tilt, subluxation or dislocation according to the classification of Bindelglass and Vince. 16) We checked valgus and varus stress radiographs before surgery and at the last follow-up. Clinical outcomes were assessed according to the knee rating score of the Hospital for Special Surgery (HSS).
Statistical analyses for the evaluation of radiographs and clinical results were performed with the paired t-test. All statistical analyses were performed with SPSS ver. 14.0 (SPSS Inc., Chicago, IL, USA), and p < 0.05 was considered statistically significant.
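The paired t-test used here compares pre- and post-operative measurements within the same knees. A minimal sketch of the test statistic follows; the paper used SPSS, and the scores below are hypothetical values rather than the study's patient data:

```python
# Minimal paired t-test sketch: t = mean(d) / (sd(d) / sqrt(n)),
# where d is the vector of within-pair differences, df = n - 1.
import math

def paired_t(pre, post):
    d = [a - b for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)                 # t statistic

pre = [53, 50, 58, 49, 57]    # hypothetical pre-op HSS scores
post = [88, 90, 92, 85, 91]   # hypothetical post-op HSS scores
t = paired_t(pre, post)
# |t| is then compared against the t distribution with n - 1 degrees of
# freedom; p < 0.05 is taken as significant, as in the paper.
```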
RESULTS
Instability associated with frank dislocation in the preoperative period was demonstrated in four cases (one case of flexion instability, two cases of coronal with sagittal instability, and one case of global instability), while the other 20 cases did not have frank dislocation. Of the 24 cases, coronal plane (mediolateral) instability with concomitant posteromedial polyethylene wear and lateral ligament attenuation, showing 3° more difference than the opposite side on the preoperative varus stress view, was found in 13 cases (in both sides for one patient) (Fig. 1). Coronal instability with polyethylene wear alone was found in 6 cases. Coronal with sagittal plane (AP) instability was found in 3 cases (Fig. 2). Among these 3 cases, 2 had a medial collateral ligament (MCL) and a PCL rupture (1 case of PS design and 1 case of CR design) and another presented with MCL rupture and post fracture of the polyethylene insert (Fig. 3). Flexion instability with spin-out of the polyethylene insert during squatting was found in one case. One case showed global rotational instability and a varus thrust gait on walking. Isolated sagittal instability was not found in any of the 24 cases (Table 3). Of the 13 cases of coronal instability with posteromedial polyethylene wear and lateral ligament attenuation, 10 underwent revision arthroplasty with long stems in the femoral and/or tibial implants (Fig. 4). Of the 6 cases with posterolateral polyethylene wear, 4 underwent a bearing exchange to a larger size and the remaining 2 underwent a bearing exchange to the same size. The one case of flexion instability with spin-out of the polyethylene insert underwent a bearing exchange to a larger size and attained stability (Table 4). The mechanical axis deviation on standing radiographs changed from a preoperative mean of 3.4 mm (range, 14.0 to 0.0 mm) to a postoperative mean of 1.4 mm (range, 3.8 to 0.0 mm), without statistical significance (p = 0.63).
The preoperative and postoperative varus angles were a mean of 5.8° (range, 13.0° to 1.0°) and 3.2° (range, 7.2° to 1.0°; p = 0.713) on varus stress radiographs. The preoperative and postoperative valgus angles were a mean of 22.5° (range, 32.0° to 11.0°) and 5.6° (range, 8.0° to 2.0°; p = 0.032) on valgus stress radiographs. In the implant position analyses, the mean postoperative α angle was 5.34°, the β angle 89.65°, the γ angle 2.74°, and the δ angle 6.77°. An outlier of more than three degrees was seen in 1 case.
DISCUSSION
Causes of revision arthroplasty after total knee replacement are diverse and unclear. Hossain et al. 17) cited common causes for revision as infection (2.9%), instability (1.7%), and aseptic loosening (1.4%). Clinical instability has been estimated to be present in 1%-2% of patients following a TKA procedure and in 10%-20% after a TKA revision. 14) Instability after total knee replacement is being increasingly reported in the literature. [18][19][20][21][22] Knee dislocation after total knee replacement was first reported in 4 patients in a series of 220 patients by Insall et al. 23) in 1979. TKA instability may or may not be accompanied by dislocation and can be classified according to its causes. 20,22) First, mediolateral or coronal instability may be a frequent reason for revision of TKA; it can be due to incorrect ligament balancing or failure to identify an incompetent collateral ligament. Inadequate medial structural releases, which can provoke delayed MCL rupture or attenuation, frequently lead to delayed coronal instability. We did not encounter such circumstances because we always took care to check for adequate medial release and mediolateral balance, but there were several cases of coronal instability showing posterolateral polyethylene insert wear. Treatment of an unstable TKA with coronal instability does not always require revision surgery with a constrained implant and long stems: in this study, exchange of the polyethylene insert alone provided sufficient stability in the 6 cases of coronal instability with posteromedial polyethylene wear. However, lateral ligament attenuation is very important in the choice of treatment in such cases. In this study, the 13 cases with lateral ligament attenuation needed revision of the implants, and in 10 of those 13 cases a constrained implant with long stems was used.
Thus, an analysis of the cause of instability is most important in the revision of an unstable TKA, and this analysis is essential to prevent re-revision due to recurrence of instability. Extensor mechanism incompetence, inadequate balancing of the PCL, excessive release of posterolateral structures, polyethylene post fracture, hyperextension or a broken polyethylene insert may all contribute to an AP TKA dislocation. Flexion instability means that the flexion gap is too loose. Factors contributing to early flexion instability include poor restitution of the posterior offset, PCL incompetence or component malpositioning. Late forms of flexion instability may be associated with delayed rupture or degeneration of the PCL and rotational instability. In our study, 1 case showed locked posterior dislocation and combined coronal and sagittal instability 7 years after the original procedure.
This knee demonstrated a polyethylene post broken by fatigue fracture, which was assumed to be a late form of flexion instability. We therefore performed revision surgery using a constrained implant with long femoral and tibial stems, and obtained a stable knee with excellent clinical results. In particular, CR prostheses may have a PCL problem, which can cause a loose flexion gap, sagittal instability and polyethylene dislocation, regardless of whether there is delayed PCL rupture or PCL attenuation. Accordingly, such conditions require revision to a PS prosthesis system. Many elderly patients can be expected to have incompetent PCL function. Considering possible PCL problems after a CR prosthesis, we prefer a PS prosthesis, which provides effective joint motion through the post-cam mechanism and is more resistant to dislocation than a CR prosthesis.
Such TKA instability can be divided by timing into early and late instability. Early instability can result from component malalignment, an incorrect mechanical axis, gap imbalance, ligament rupture (PCL or MCL) and extensor mechanism abnormality, while late instability may result from polyethylene wear, polyethylene post wear or fracture, ligament attenuation, extensor mechanism dysfunction and other causes. 23,24) In our series, two cases showed sagittal and coronal instability with PCL and MCL rupture due to trauma, after 3 years and 5 years, respectively. In addition, one case showed sagittal instability with a polyethylene post fracture without a history of trauma.
According to the reports of several authors, most ligament reconstructions cannot solve the problem of instability due to collateral ligament attenuation, which may ultimately progress to knee dislocation or polyethylene dislocation. 11,24,25) The principle of treatment for TKA instability is to convert an unstable knee into a stable one, but exchange to a thicker polyethylene must carefully take the resulting changes in the flexion and extension gaps into account, and there will be few cases in which stability can be ensured with upsized polyethylene alone. According to recent reports, patients undergoing revision of the femoral and tibial components had better outcomes than those undergoing isolated polyethylene exchange. 15) However, our study included 7 cases of posteromedial polyethylene bearing wear leading to coronal instability that were treated with isolated polyethylene bearing exchange, and the final outcomes were excellent, without recurrent instability.
The PCL problem in a CR-type implant can be solved by exchanging to a PS type, but this addresses only sagittal instability; an overall evaluation and solutions for coronal and global instability must also be carefully considered. In any case, the use of a more constrained implant must be considered for TKA instability, and a semi-constrained prosthesis or a hinged implant can be used. The level of constraint should be raised carefully and in stages, only as far as needed to obtain stability.
The most fundamental point of such revision surgery is to obtain equal flexion and extension gaps. For this, an accurate evaluation of the integrity of each ligament must be performed. Although current posterior stabilized knee arthroplasty designs with an advanced post-cam mechanism can compensate for a certain amount of loose flexion gap, this applies considerable stress to the polyethylene post and may ultimately cause fatigue fracture of the post. 26,27) Some authors have asserted that coronal instability can be divided according to the status of the MCL into reconstructable and non-reconstructable MCL: semi-constrained implants are used for a reconstructable MCL, whereas linked or hinged implants are necessary in the case of an absent or non-reconstructable MCL. 22,28,29) A hinged revision implant can be used in cases of an absent or non-reconstructable MCL, an unstable flexion gap, a poorly functioning extensor mechanism or revision of a previous hinge, but it was not used in our series. Increasing component constraint can reduce instability.
Revision TKA usually requires a more constrained prosthesis than primary TKA. However, doing so may increase the forces transmitted to the fixation and implant interfaces, which might lead to premature aseptic loosening. A more constrained prosthesis was not always required in cases of simple polyethylene wear or post fracture with TKA instability, but it was always required when the instability involved two or more planes. 24,25) Our series has the shortcoming that the number of cases is not large enough to fully classify the types of unstable TKA. An additional limitation is the predominance of simple coronal instability due to posteromedial wear of the polyethylene.
In summary, the present study shows that knee instability after primary TKA has various causes and that an analysis of these causes can help guide the choice of implant and surgical technique in revision TKA. A revision TKA with or without a more constrained prosthesis, regardless of implant type, is a definite solution to TKA instability, but a solution tailored to the cause is very effective and may help avoid unnecessary selection of an over-constrained implant in revision surgery for an unstable TKA.
"year": 2014,
"sha1": "1ecbb194b04af3c4b65453104d4f8445f0618452",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4055/cios.2014.6.2.165",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ecbb194b04af3c4b65453104d4f8445f0618452",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Rhinovirus Infection Drives Complex Host Airway Molecular Responses in Children With Cystic Fibrosis
Early-life viral infections are responsible for pulmonary exacerbations that can contribute to disease progression in young children with cystic fibrosis (CF). The most common respiratory viruses detected in the CF airway are human rhinoviruses (RV), and augmented airway inflammation in CF has been attributed to dysregulated airway epithelial responses although evidence has been conflicting. Here, we exposed airway epithelial cells from children with and without CF to RV in vitro. Using RNA-Seq, we profiled the transcriptomic differences of CF and non-CF airway epithelial cells at baseline and in response to RV. There were only modest differences between CF and non-CF cells at baseline. In response to RV, there were 1,442 and 896 differentially expressed genes in CF and non-CF airway epithelial cells, respectively. The core antiviral responses in CF and non-CF airway epithelial cells were mediated through interferon signaling although type 1 and 3 interferon signaling, when measured, were reduced in CF airway epithelial cells following viral challenge consistent with previous reports. The transcriptional responses in CF airway epithelial cells were more complex than in non-CF airway epithelial cells with diverse over-represented biological pathways, such as cytokine signaling and metabolic and biosynthetic pathways. Network analysis highlighted that the differentially expressed genes of CF airway epithelial cells' transcriptional responses were highly interconnected and formed a more complex network than observed in non-CF airway epithelial cells. We corroborate observations in fully differentiated air–liquid interface (ALI) cultures, identifying genes involved in IL-1 signaling and mucin glycosylation that are only dysregulated in the CF airway epithelial response to RV infection.
These data provide novel insights into the CF airway epithelial cells' responses to RV infection and highlight potential pathways that could be targeted to improve antiviral and anti-inflammatory responses in CF.
INTRODUCTION

Lung disease is the major cause of morbidity and mortality in cystic fibrosis (CF) (1).
Progressive lung damage is associated with mucus obstruction, neutrophilic inflammation, and chronic airway infection and is already evident in the first years of life (2)(3)(4)(5)(6). Intermittent pulmonary exacerbations occur in individuals with CF who experience increased respiratory symptoms and reduction in pulmonary function that are responsive to therapy with antibiotics (7). Moreover, the frequency of exacerbations is a predictor of long-term morbidity and irreversible loss of lung function (8,9). The triggers for these pulmonary exacerbations are not fully understood although it is recognized that lower respiratory infections caused by viruses are likely to play a significant role (10)(11)(12)(13)(14).
The most common virus detected in the airway of adults and children with CF is human rhinovirus (RV) (15)(16)(17)(18)(19). The clinical impact of RV includes reduction of lung function/FEV 1 (15,20,21), hospitalization (22), and increased requirement for intravenous antibiotic treatment (11,14). Recent longitudinal data suggest that RV infection persists for a longer period in individuals with CF compared to non-CF controls (14), a finding consistent with in vitro observations that suggest a defective innate response of epithelial cells to RV (23,24). The nature of any intrinsic deficiency still remains unclear although some explanations are now emerging (25).
In this study, we hypothesized that the antiviral responses of primary airway epithelial cells (AEC) from children with CF are dysregulated following RV infection. We utilized transcriptome sequencing (RNA-Seq) to assess the gene expression of CF (Phe508del homozygous) and non-CF primary AEC pre- and post-RV infection. Differential expression analysis was carried out to compare the antiviral responses between CF and non-CF AEC. Functional analyses identified diverse biological pathways and complex networks in response to RV infection in CF AEC that were less apparent in non-CF AEC. We performed additional work to validate some of these unique biological pathways using primary differentiated AEC culture models, and the data corroborate observations made from the RNA-Seq analysis. Overall, this study provides insights into the global transcriptomic response of non-CF and CF AEC to RV infection and has identified potential therapeutic targets that could reduce the harmful contribution of RV to progressive lung disease in individuals with CF.
Patient Recruitment and Establishment of Primary Bronchial Epithelial Cells
The study was approved by the St John of God Human Ethics Committee (SJOG#901) and the Perth Children's Hospital Ethics Committee (#1762), and written informed consent was obtained from parents or guardians. Children without CF were recruited prior to undergoing elective surgery for non-respiratory-related conditions. Children with CF and homozygous for the Phe508del mutation were recruited during annual early surveillance visits (2,3,23). Subject demographic data for the RNA-Seq analysis are provided in Table 1. Samples were obtained by brushing the tracheal mucosa of children with a cytology brush as previously described (23,26). Submerged monolayer primary airway epithelial cell (AEC) cultures from non-CF children and those with CF were then established, expanded in Bronchial Epithelial Basal Medium (BEBM®; Lonza) supplemented with growth additives and 2% (v/v) Ultroser G (Pall Corporation) (23,(26)(27)(28), and used for experimentation. Subject demographic data for the validation experiments are provided in Table 2. Here, primary AECs were differentiated into ciliated, pseudostratified AECs as described previously (29). Briefly, AECs were initially seeded on 0.4-µm polyester membrane culture inserts (Corning, NY, USA), grown to confluence, and air-liquid interface (ALI) cultures established. These were maintained for 28 days, by which time both beating cilia and mucus production were well-established. Prior to ALI validation experiments, inserts were confirmed to have a transepithelial electrical resistance (TEER) measurement >800 Ω/cm².
Human RV Infection and RNA Extraction
To emulate an acute RV infection episode in vitro, we exposed AECs to RV1b (courtesy of P. Wark, University of Newcastle) at an MOI of 12.5 (23,30,31). After 24 h, culture supernatant was collected for cytokine measurement and cell pellets for RNA extraction. RNA was extracted using a PureLink® RNA Mini Kit (Life Technologies) as per the manufacturer's instructions. Total RNA was eluted with 30 µL RNase-free water with the addition
Bioinformatics and Statistical Analysis
Bioinformatics and statistical analyses were performed on five non-CF and seven CF samples. Statistical analysis was conducted in PRISM 8 (v8.1.2; GraphPad Software Inc., California, USA) and included the Mann-Whitney test to compare differences between genotypes and the Wilcoxon test to compare differences between paired samples. All subsequent post-alignment bioinformatic analyses were performed in R (v3.4.1) (36). To remove low-abundance genes, only those with a minimum of 10 counts per sample in at least five samples were included, resulting in a total of 12,757 genes analyzed. The R package RUVseq (v1.10.0) (37) was applied to normalize RNA-Seq read counts between samples and remove unwanted variance. Differential gene expression was determined using DESeq2 (v1.16.1) (38) after calculating a variance-stabilizing transformation (VST) from the fitted dispersion-mean relation to yield count data with constant variance along the range of mean values. Genes with an adjusted p-value ≤ 0.05 and a fold change of at least ±1.5 were considered statistically and biologically significant, respectively. To visualize the variance between samples, a principal component analysis plot was generated using the plotPCA function in DESeq2 and visualized using ggplot2 (v3.1.0) (39). Next, we identified gene ontology (GO) terms from the biological process (BP) category enriched in non-infected baseline non-CF and CF samples using Metascape (http://metascape.org) (40). Visualization of the GO term analysis was performed using GOPlot (v1.0.2) (41); the GOCircle function was used to highlight gene expression changes within each of the selected terms. The GOPlot z-score is calculated as z-score = (up − down) ÷ √count, where up and down are the numbers of up- and down-regulated genes, respectively.
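The two filtering and scoring steps described above can be sketched in a few lines. This is a minimal Python re-expression of the logic only (the published analysis used R with RUVseq/DESeq2/GOPlot); the gene names and count values are illustrative placeholders, not study data.

```python
import math

def filter_low_abundance(counts, min_count=10, min_samples=5):
    """Keep genes with >= min_count reads in at least min_samples samples,
    mirroring the low-abundance filter described in the Methods.
    `counts` maps gene -> list of per-sample read counts (hypothetical layout)."""
    return {g: c for g, c in counts.items()
            if sum(1 for x in c if x >= min_count) >= min_samples}

def go_zscore(up, down):
    """GOPlot z-score: (up - down) / sqrt(count), where count = up + down
    is the number of DEGs annotated to the GO term."""
    count = up + down
    return (up - down) / math.sqrt(count) if count else 0.0

# Toy example: GENE_A has >= 10 counts in five of six samples and is kept;
# GENE_B never reaches 10 counts and is removed.
counts = {"GENE_A": [12, 15, 9, 30, 11, 14], "GENE_B": [0, 2, 1, 0, 3, 1]}
filtered = filter_low_abundance(counts)
```

A GO term with three upregulated and one downregulated DEG would receive a z-score of (3 − 1)/√4 = 1.0, i.e. net upregulation.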
Pathway Analysis and Protein-Protein Interaction Network-Based Enrichment Analysis
Pathway analysis based upon Reactome repositories was performed using Signature Over-Representation Analysis (SIGORA) version 2.0.1, which identifies pathway enrichment according to statistically over-represented Pathway Gene-Pair Signatures (Pathway-GPS) (42). To expose the interactive associations among the DEGs at the protein level, genes from both the non-CF and CF responses were mapped onto protein-protein interactions (PPI) via NetworkAnalyst (http://www.networkanalyst.ca/) (43,44), based upon the IMEx interactome, a comprehensive, high-quality protein-protein interaction database curated from InnateDB (45), to characterize the relationships and interactions of input genes. The network was built by limiting it to the original seed proteins only and selecting zero-order interactions.
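Restricting a network to "seed proteins only" with zero-order interactions simply means keeping interactome edges whose two endpoints are both input DEGs. A minimal Python sketch of that filter, assuming an edge-list representation (the study itself used NetworkAnalyst; gene names and edges here are illustrative):

```python
def zero_order_network(seeds, interactions):
    """Zero-order PPI subnetwork: keep only edges whose two endpoints are
    both in the seed (input DEG) list, discarding first-neighbor expansion."""
    seed_set = set(seeds)
    return [(a, b) for a, b in interactions
            if a in seed_set and b in seed_set]

seeds = ["STAT1", "IRF7", "ISG15"]          # hypothetical DEG list
interactions = [("STAT1", "IRF7"),           # kept: both are seeds
                ("STAT1", "JAK2"),           # dropped: JAK2 is not a seed
                ("ISG15", "IRF7")]           # kept: both are seeds
subnetwork = zero_order_network(seeds, interactions)
```

Zero-order networks are conservative: they avoid hub inflation from highly connected non-seed proteins at the cost of possibly fragmenting the graph.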
Corroboration of RNA-Seq Observations in Fully Differentiated Cultures
Experiments were then performed to assess whether unique pathways identified from the initial RNA-Seq analysis were evident in fully differentiated 3-D cultures (Table S8).
Patient Demographics
The demographic information of the children participating in this study is shown in Table 1 (RNA sequencing) and Table S1 (entire sample set, including additional samples used for ELISA). Non-CF controls were children who underwent elective surgery for non-respiratory-related conditions and did not have existing lung disease. RNA samples of primary AECs obtained from these children (n = 32; 16 non-CF and 16 CF) were originally collected, both pre- and post-infection with RV in vitro.
RNA sequencing was performed on all samples as summarized in the workflow diagram (Figure 1A). A sample elimination process was carried out to exclude unqualified samples (detailed in Figure S3). Samples run on a different sequencer that did not pass rigorous quality control for RNA sequencing (mapping quality score >30; n = 3), and those with a sequencing depth of less than one million reads (n = 9), were excluded from analysis. Finally, RNA sequencing samples from a total of seven CF and five non-CF children were included in the differential expression analysis, applying a fold-change cutoff of ≥1.5. Only one child with CF had detectable microorganisms in bronchoalveolar lavage fluid (BALf) at the time of AEC sampling. The PRAGMA CT score, presented as percentage of disease, was also determined as a quantitative measure of disease progression at the time of sampling in children with CF.
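The two exclusion criteria above (mapping quality score >30, sequencing depth of at least one million reads) amount to a simple per-sample filter. A hedged Python sketch, with a hypothetical dict layout for per-sample QC metrics and made-up values:

```python
def passes_qc(sample, min_mapq=30, min_reads=1_000_000):
    """Apply the two exclusion criteria described in the text:
    mapping quality score above 30 and at least one million reads."""
    return sample["mapq"] > min_mapq and sample["reads"] >= min_reads

# Illustrative QC summaries (not study data).
samples = [
    {"id": "CF_01",  "mapq": 35, "reads": 4_200_000},  # passes both checks
    {"id": "CF_02",  "mapq": 28, "reads": 5_000_000},  # fails mapping quality
    {"id": "NCF_01", "mapq": 36, "reads":   600_000},  # fails read depth
]
kept = [s["id"] for s in samples if passes_qc(s)]
```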
Distinct Transcriptional Changes of AEC in Response to RV Infection
The normalized read-count matrix was used to build a non-supervised principal component analysis to visualize the major contributors to transcriptional variation within this data set (Figure 1B). The first principal component (PC1, 69% of the variance) completely separated RV-infected and non-infected AECs, and separation of uninfected or infected CF and non-CF AECs was observed on PC2 (8% of the variance), indicating that patient genotype is the second largest source of variation within the data set.
Frontiers in Immunology | www.frontiersin.org
Modest Transcriptional Differences Between Uninfected CF and Non-CF AEC
To determine whether the transcriptional profiles of AECs from children with CF intrinsically differ from those of non-CF controls, non-infected baseline CF and non-CF AECs were analyzed for differential gene expression. We observed a total of 162 DEGs with an absolute fold change ≥1.5 between non-infected baseline CF and non-CF AECs. Among these, 92 genes were significantly downregulated and 70 genes significantly upregulated in CF AECs compared to non-CF AECs. To identify the biological processes in which the 162 DEGs were involved, we performed gene ontology (GO) term enrichment analysis (41). The predominant enriched GO terms in CF AECs are depicted by a circle plot (Figure 2A). The analysis identified the cytokine-mediated signaling pathway and the type I interferon signaling pathway as the top enriched GO terms with decreased z-scores, and extracellular matrix as the GO term with an increased z-score. The full list of top upregulated and downregulated genes is summarized in Table S2. The top DEG from the comparison of non-infected baseline CF and non-CF AECs was HLA-DQB1 (an HLA class II GWAS gene). We also identified the top 20 genes with the highest fold change between non-infected CF and non-CF AECs (Figure 2B). These genes were found to be involved in biological processes including the type I interferon signaling pathway (AIM2, BST2, IFI27), keratin (KRT14), DNA methylation (H19), cell cycle (BEX1), extracellular matrix (COL1A2, COL5A1, COL6A1, COL6A2), cell-cell interaction (LGALS7), signal transduction (FST, LRCH2, LRRN1), calcium ion binding (PCDH20), potassium channels (KCNJ5), transferase activity (NEURL3), and phosphatase activity (PTPRZ1).
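The DEG selection applied throughout the Results (adjusted p ≤ 0.05 and absolute fold change ≥ 1.5) can be expressed as a two-threshold filter. A minimal Python sketch of that rule only, using signed fold changes (negative = downregulated, matching the "−1.4-fold" convention used later in the paper); gene names and values are placeholders:

```python
def significant_degs(results, padj_cutoff=0.05, fc_cutoff=1.5):
    """Split results into up- and down-regulated DEG lists using the
    paper's thresholds. `results` rows are hypothetical
    (gene, signed_fold_change, adjusted_p) tuples."""
    up = [g for g, fc, p in results if p <= padj_cutoff and fc >= fc_cutoff]
    down = [g for g, fc, p in results if p <= padj_cutoff and fc <= -fc_cutoff]
    return up, down

# Illustrative rows: GENE3 fails both the fold-change and p-value cutoffs.
results = [("GENE1", 2.1, 0.01), ("GENE2", -1.8, 0.03), ("GENE3", 1.1, 0.9)]
up, down = significant_degs(results)
```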
CF AEC Have More Transcriptional Changes in Response to RV Infection Than Non-CF AEC
We next analyzed the RNA-Seq data to assess the transcriptomic responses of CF and non-CF AECs after infection with RV. Comparative analysis of the response profiles indicated that AECs from both CF and non-CF children differentially modulated the expression of several genes related to the innate antiviral immune response upon RV infection. A Venn diagram (Figure 3A) was used to compare genes that were uniquely and commonly modulated between the CF response (RV-infected CF AECs vs. uninfected CF AECs) and the non-CF response (RV-infected non-CF AECs vs. uninfected non-CF AECs). A total of 896 DEGs (652 upregulated, 244 downregulated) were observed in the non-CF response to RV and 1,442 DEGs (884 upregulated, 558 downregulated) in the CF response (Figures 3A,B). Candidate genes were ranked according to their extent of differential expression relative to uninfected samples. Although there was considerable overlap between the groups (778 common DEGs, Figure 3C), there were significantly more unique DEGs (Figures 3D,E) specific to the CF response (664) than to the non-CF response (118). The majority of overlapping DEGs were involved in the core immune response to RV infection, including interferon signaling, interferon regulation, cytokine signaling, cell death, and metabolism. The unique DEGs for the CF and non-CF AEC responses to RV infection are summarized in Tables S3, S4, respectively. The top unique DEG for the non-CF response was CX3CL1, an important chemoattractant for other immune cells such as dendritic cells. Other top unique genes for the non-CF response were associated with cellular components (FAXDC2, ARMCX4, RAB17, TMEM17), DNA repair (BRCA2, RMI2), and cellular metabolism (CBR3, B4GALNT3, HS3ST3B1, GIPR).
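The Venn-style partition above is plain set algebra on the two DEG lists. A Python sketch of that comparison, using a handful of genes that the text assigns to each compartment purely for illustration (the real lists contain hundreds of genes):

```python
def compare_responses(cf_degs, ncf_degs):
    """Partition two DEG lists into common, CF-unique, and non-CF-unique
    sets, mirroring the Venn-diagram comparison in Figure 3A."""
    cf, ncf = set(cf_degs), set(ncf_degs)
    return {"common": cf & ncf,        # shared core response
            "cf_unique": cf - ncf,     # CF-specific DEGs
            "ncf_unique": ncf - cf}    # non-CF-specific DEGs

# Illustrative subsets: IFNB1/ISG15 are common core-response genes,
# IL1R2/NOD2 are CF-unique, CX3CL1/BRCA2 are non-CF-unique in the text.
cf_degs = ["IL1R2", "NOD2", "IFNB1", "ISG15"]
ncf_degs = ["CX3CL1", "BRCA2", "IFNB1", "ISG15"]
parts = compare_responses(cf_degs, ncf_degs)
```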
Nevertheless, 46% (664 of 1,442) of DEGs in the CF AEC response to RV infection were unique, with IL1R2, encoding the IL-1 signaling decoy receptor, being the top unique DEG (4.8-fold change). Other unique genes for the CF AEC response were associated with growth factors (PTN), immune response (NOD2, CCRL2, HMOX1, SLC7A2, SERPINB4), cellular metabolism (MDGA1, ANGPT1), cytoskeletal regulation (LRCH2), signal transduction (MAPK8IP2, STK32A), and transcription regulation (SPDEF, ZNF488).
RV Infection Drives Common Epithelium-Induced Innate Antiviral Response in CF and Non-CF AEC
Genes that were commonly modulated in CF and non-CF AECs (Table S5) were found to be key drivers of the core epithelium-induced innate antiviral response to RV infection. Specifically, RV infection triggered significant upregulation of type I and III interferons (IFNB1, IFNL1, IFNL2, IFNL3) in both CF and non-CF AECs (Figure 3C). However, the fold changes (log2 FC) of IFNB1 (5.8-fold), IFNL1 (5.8-fold), IFNL2 (5.1-fold), and IFNL3 (6.1-fold) in response to RV infection were lower in the CF AEC response than in the non-CF AEC response (IFNB1: 6.9-fold, IFNL1: 7.2-fold, IFNL2: 7-fold, IFNL3: 7.5-fold). Interferon signaling also triggered the induction of a variety of interferon-stimulated genes (ISGs), including Mx1, viperin (RSAD2), and the IFITM, IFIT, and OAS families in both CF and non-CF AECs (Figure 3C).
We extended our analysis to identify the biological pathways corresponding to all DEGs in CF and non-CF AECs in response to RV infection. The full lists of enriched biological pathways for the CF and non-CF AEC antiviral responses are provided in Tables S6, S7, respectively. SIGORA pathway analysis was then performed using gene-pair signatures, which only account for statistically significant gene pairs unique to the over-represented pathways. This analysis identified 52 and 31 biological pathways responsible for the CF and non-CF AEC host responses to RV infection, respectively. Comparing the two, we identified 26 common significantly enriched biological pathways (Figure 4), which fall mainly into five functions: (1) cytokine signaling in the immune system, (2) presentation to the adaptive immune system, (3) innate immune system, (4) metabolism or biosynthesis, and (5) signal transduction. Consistently, the core antiviral response was demonstrated by type I and III interferons and other antiviral factors as reported earlier, with interferon-α/β signaling, interferon-γ signaling, and interferon signaling being the top
CF AEC Transcriptome Reveals More Biological Pathways and a More Complex Network in Response to RV Infection Than for Non-CF AEC
In addition to the common over-represented pathways induced by RV infection, we observed an additional 26 enriched pathways specific to the CF response (Figure 4A). Beyond the five functions mentioned above, these unique over-represented pathways also fall under two further functions: extracellular matrix organization and vesicle-mediated transport or transport of small molecules. Additional pathways categorized under cytokine signaling in the immune system, such as the interleukin 1, 2, 7, 10, and 15 signaling pathways, were uniquely enriched in the CF response. Genes associated with interleukin 1 family signaling-driven proinflammatory activity included IL36G; the receptor antagonists IL36RN and IL1RN; the decoy receptor IL1R2; the receptor IL18R1; the protein phosphatase PTPN12; the pellino proteins PELI1 and PELI3; the IRAK kinases IRAK2 and IRAK3; and the key immune and inflammatory response regulator S100A12. Other cytokines with essential immunomodulatory functions, including IL-7, IL-10, IL-15, and IL-2 family signaling, were significantly over-represented pathways unique to the CF response to RV infection. Furthermore, we observed significant upregulation of the neutrophil chemotactic factors CXCL1 and CXCL2 in the CF AEC response to RV infection. Downregulation of genes encoding E3 ubiquitin ligases, such as TRIM45 (a regulator of TNFα-induced, NF-κB-mediated transcriptional activity) and RNF128 (an inhibitor of cytokine gene transcription), was also only observed in the CF response. A transcriptional change in the HSPA5 gene was also observed in the CF response as part of major histocompatibility complex (MHC) class I-mediated adaptive immune regulation. Several metabolic/biosynthetic pathways of notable interest to CF airway disease, including nucleobase catabolism, inositol phosphate metabolism, synthesis of IP3 and IP4 in the cytosol, and tryptophan catabolism, were all altered in the CF response to RV infection (Figure 4A).
We observed transcriptional changes of ectonucleotidases in the nucleobase catabolism pathway, particularly the ecto-nucleoside triphosphate diphosphohydrolases (ENTPDases) ENTPD3 (downregulated) and ENTPD6 (upregulated). The inositol phosphate metabolism pathway was also altered in CF AECs, namely downregulation of genes encoding the phosphohydrolase NUDT11, the phospholipases PLCH2 and PLCD4, the kinase ITPKB, and the phosphatase INPP4B. We also observed a group of upregulated genes, including KYNU, KMO, IDO1, AADAT, and CCBL1, associated with the key biosynthetic process of tryptophan catabolism. Biological pathways regulating the metabolism of proteins, notably mucin metabolism (O-linked glycosylation of mucins and sialic acid metabolism), were also over-represented in the CF response. Additionally, RV infection of CF AECs triggered transcriptional changes in transport of small molecules (including cellular hexose transport, metal ion SLC transporters, transport of amino acids, and SLC-mediated transmembrane transport). We also noted transcriptional changes for genes involved in extracellular matrix organization, such as the integrin subunits α5 and α6 (ITGA5, ITGA6) and the cell adhesion molecule ICAM1.
To better understand the potential functional interactions of DEGs, we also visualized expression and investigated the underlying molecular interactions between genes by generating zero-order PPI subnetworks (Figures S2A,B). The main CF and non-CF PPI subnetworks consisted of functionally enriched pathways that play imperative roles in the host antiviral response to RV infection. The non-CF AEC response subnetwork comprised 254 nodes and 565 edges (Figure S2A). We observed 172 genes with a degree greater than one, of which 27 nodes had ≥10 connections with other nodes. Key hub genes regulating the antiviral response included STAT1, STAT2, IRF2, IRF1, ISG15, DDX58, IRF7, RIPK1, IKBKE, and CASP8. Conversely, a more complex CF AEC response subnetwork comprised 493 nodes and 1,156 edges (Figure S2B). We observed 320 genes with a degree greater than one, of which 66 nodes had ≥10 connections with other nodes. The key hub genes regulating the CF AEC response subnetwork included IRF1, ISG15, STAT1, STAT3, HSPA1B, CASP8, TBK1, IKBKE, and TRAF2.
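Hub identification here reduces to counting node degrees in the PPI edge list and thresholding (≥10 connections in the paper). A minimal Python sketch of that degree count, with a tiny toy network and a lowered threshold so the example stays small; genes and edges are illustrative, not the study's subnetworks:

```python
from collections import Counter

def hub_genes(edges, min_degree=10):
    """Count node degrees in an undirected PPI edge list and return the
    nodes meeting the hub threshold (>= min_degree connections)."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return {node for node, d in degree.items() if d >= min_degree}

# Toy network: STAT1 touches three edges, so with min_degree=3 it is
# the only hub; the paper's analysis used min_degree=10 on real networks.
edges = [("STAT1", "IRF1"), ("STAT1", "ISG15"),
         ("STAT1", "IRF7"), ("ISG15", "IRF7")]
hubs = hub_genes(edges, min_degree=3)
```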
The hub genes of both the CF and non-CF subnetworks are key regulators related to the innate immune system and cytokine signaling.
Aberrant Cytokine Production of CF AECs to RV Infection
In order to validate the transcriptional changes of the enriched cytokine signaling pathways, we measured key innate and inflammatory cytokine levels at 24 h post-RV infection (Figure 5). Although IFNB1 was significantly induced upon RV infection in both cohorts, this was not reflected at the protein level, with significantly lower levels (on average 10.8-fold) of IFN-β (type I interferon) released by CF AECs (668.3 ± 576.2 pg/ml; p < 0.05) compared to non-CF AECs (7,265 ± 6,558 pg/ml). As shown in Figure 3C, all type III IFN genes (IFNL1, IFNL2, IFNL3) were upregulated post-RV infection. Levels of the type III interferons IFN-λ1, IFN-λ2, and IFN-λ3 were also significantly elevated in both CF and non-CF AECs infected with rhinovirus. However, levels of IFN-λ1 (296.4 ± 293.3 pg/ml) and IFN-λ2 (334.6 ± 642.8 pg/ml) produced by CF AECs in response to RV infection were significantly (3.5- to 5-fold) lower than those of non-CF AECs (1,059 ± 1,170 pg/ml and 1,665 ± 1,932 pg/ml, respectively; p < 0.05). IFN-λ3 produced by CF AECs (285.3 ± 287.3 pg/ml) following RV infection was lower, although not significantly, than that produced by non-CF AECs (928.6 ± 997.9 pg/ml). Similar levels of the antiviral chemokines CCL5 (RANTES) and IP-10 and of proinflammatory cytokines, including IL-6, were detected in non-infected CF and non-CF AECs, and similar increases in these proteins occurred in response to RV infection. However, IL-8 and IL-1β production was significantly more elevated in non-CF AECs than in CF AECs in response to RV infection. FIGURE 5 | Cytokine production in AEC supernatants of non-CF individuals and children with CF following RV infection. Cytokine release was measured in cell culture supernatants using commercial ELISA kits and an in-house time-resolved fluorometry detection system. Type I and III interferons (IFN-β, IFN-λ1, and IFN-λ2) were significantly higher in non-CF AECs post-RV infection compared to CF AECs.
Inflammatory cytokines IL-6, IL-8, and IL-1β were significantly increased in both CF and non-CF RV-infected samples, with significantly higher IL-8 and IL-1β levels produced by non-CF RV-infected samples than by CF RV-infected samples. RANTES (CCL5) and IP-10 were significantly elevated in CF and non-CF RV-infected samples, with no significant difference between genotypes. Note: n = 9-11 for non-CF and 6-12 for CF. Data are represented as median ± IQR; symbols show statistical significance in RV-infected samples relative to paired non-infected control samples, *p < 0.05, determined using the Wilcoxon test. Statistical significance for comparisons between CF and non-CF non-infected samples was determined using an unpaired t-test or Mann-Whitney test depending on Gaussian distribution, #p < 0.05.
Corroboration of Unique Gene Expression Patterns in Response to RV Infection in CF ALI Cultures
To validate the results generated from the RNA-Seq in a model that better represents the airway, we assessed the expression of some unique DEGs identified in submerged CF cultures post-RV infection by challenging ALI cultures with the same RV and again assessing gene expression at 24 h (Figure 6). Expression of the top unique DEG for the CF response, IL1R2, was validated with a consistent increase in CF ALI cultures post-RV infection (9.4-fold over uninfected, p < 0.05; Figure 6). Upregulation of IL1R2 appeared bimodal in non-CF ALI cultures and was not significant (p = 0.30). Furthermore, expression of genes involved in glycosylation of mucins and sialic acid metabolism, namely the sialyltransferases ST8SIA4 and ST6GALNAC2, the mannosidase MAN1A1, and the acetylglucosaminyltransferase B3GNT8, was also validated as unique to CF (Figure 6). A significantly higher level of ST8SIA4 expression (2.2-fold, p < 0.05) was observed in RV-infected CF ALI cultures, while ST6GALNAC2 was significantly downregulated (−1.4-fold, p < 0.05). The mannosidase MAN1A1 was dramatically upregulated by 16.3-fold (p < 0.05) in CF ALI cultures at 24 h post-RV infection. B3GNT8, an acetylglucosaminyltransferase that adds N-acetylglucosamine (GlcNAc) to N-glycans, was also increased by 1.9-fold (p < 0.05) in CF ALI cultures in response to RV infection. The expression changes of these genes were all consistent with the RNA-Seq data from submerged cultures. None of these genes changed expression in non-CF ALI cultures upon infection with RV: ST8SIA4 (3.6-fold, p = 0.14), ST6GALNAC2 (−1.3-fold, p = 0.09), MAN1A1 (5.4-fold, p = 0.08), B3GNT8 (1-fold, p = 0.28).
DISCUSSION
To improve knowledge of the underlying epithelial transcriptional responses during infection with rhinovirus, a major respiratory pathogen, we performed RNA sequencing on primary AEC from children with CF and non-CF controls in vitro at baseline and post-RV infection. There are five important findings from this study: (i) There were only modest baseline transcriptional differences between non-infected CF and non-CF AECs prior to exposure to RV, (ii) there was conservation in certain core antiviral responses (e.g., IFN signaling) of CF and non-CF AECs at the transcriptomic level but not the protein level, (iii) CF AECs elicited a larger and more complex transcriptional response compared to non-CF AECs with multiple unique biological pathways represented, (iv) key among these biological pathways are cytokine signaling and biosynthetic pathways (e.g., O-linked glycosylation of mucins) as they are highly relevant to CF lung pathology, (v) we corroborated observations made from the RNA-Seq analysis in fully differentiated cultures and identified genes involved in IL-1 signaling and mucin glycosylation that are only dysregulated in the CF airway epithelial response to RV infection. Collectively, these results identify potential biological pathways and processes that could be contributing to the adverse outcomes typically seen in people with CF during virus infection.
There were only modest baseline transcriptional differences between non-CF and CF AECs. This most likely reflects the very early and mild lung disease in the CF cohort. Minimal baseline differences also provide confidence that the differences in antiviral transcriptional changes we observed were due to infection. Nevertheless, the top differentially expressed gene at baseline was HLA-DQB1, previously identified in a GWAS study with a high association signal to CF lung disease severity [reviewed in (50)]. Interestingly, the major enriched GO terms for the differentially expressed genes in non-infected baseline samples were the cytokine-mediated signaling pathway and the type I interferon signaling pathway. Among these, AIM2 (an inflammasome component associated with induction of pyroptosis, activation of pro-inflammatory cytokines, and viral suppression) (51,52), IFI27 (also known as ISG12a, which contributes to IFN-dependent perturbation of normal mitochondrial function and enhanced cellular apoptosis) (53), and the IFN-dependent antiviral factor BST2 were all significantly downregulated in CF AECs. In response to RV infection, several common responses were found, including interferon signaling. However, the induction of type I and III interferon genes was lower in CF AECs. This was mirrored by reduced type I (IFN-β) and type III interferon (IFN-λ1 and IFN-λ2) protein in supernatant. The reduced type I and III interferon production of CF AECs in response to RV infection could be associated with negative regulation of interferon signaling by unique key genes such as STAT3 (54,55); however, this requires further characterization. Conversely, the IL-1 family signaling pathway was unique to the CF AEC response to RV infection, yet IL-1β protein was significantly lower in CF supernatant than in non-CF supernatant.
This unusual observation could be mediated, in part, by negative regulators of IL-1 signaling expressed in CF AECs, including IL1R2 and IL1RN, the pellino protein genes PELI1 and PELI3, together with the interleukin-1 receptor-associated kinases IRAK2 and IRAK3. We then assessed the expression of IL1R2 pre- and post-RV infection in a differentiated culture model and made observations similar to those obtained using submerged cultures.
The IL-1 signaling pathway has been suggested as a link between hypoxic cell death and sterile neutrophilic inflammation in CF (56). Both IL-1α and IL-1β were detectable in bronchoalveolar lavage fluid (BALf) of young children with CF in the absence of bacterial infection, highlighting the potential for sterile inflammation of the CF airway (57). Since S100A12 (a key regulator of the inflammatory process) is also part of the IL-1 family signaling pathway in the CF response to RV infection, we postulate that CF AECs could be shifting from pro-inflammatory IL-1 signaling under sterile inflammation to a hyperinflammatory condition characterized by NF-κB signaling cascades during RV infection. Other evidence suggests that the altered inflammatory response, with abundant cytokine signaling pathways (interleukin 1, 2, 7, 10, and 15 signaling), in CF AECs post-RV infection could be explained by downregulation of the RNF128 gene, which functions as an inhibitor of cytokine gene transcription and can interact with TBK1 (a key hub of the CF AEC response in our study) kinase activity to enhance antiviral immunity. We also observed elevated IL-8 production in both CF and non-CF AECs post-RV infection, with higher amounts produced by non-CF AECs than by CF AECs. Our IL-8 results agree with a previous study that also utilized primary AEC cultures in a similar infection setting (58) but contrast with another that observed elevated inflammatory mediator release by CF AECs (23). Overall, the over-represented cytokine signaling pathways suggest a unique and prominent role in regulating inflammation in CF AECs when infected with RV. However, given conflicting observations in this area, elucidating the complexity of the inflammatory response and associated cell death in CF AECs warrants further investigation.
We also identified over-represented metabolic pathways in CF AECs in response to RV infection that are specifically involved in the regulation of immunity, including inositol phosphate metabolism and synthesis of IP3 and IP4 in the cytosol, suggesting an altered CF airway microenvironment after RV infection. The induction of inositol phosphate has previously been related to endoplasmic reticulum expansion and Ca²⁺ storage, resulting in Ca²⁺-dependent transcriptional activity of inflammatory mediators (59), which could contribute to the hyperinflammatory responses seen in CF AECs during viral infection (23). Upregulation of extracellular ectonucleotidases in the inositol phosphate metabolism pathway has been found to cause depletion of ATP concentrations, reduction of air-surface liquid (ASL) volume, ASL collapse, and failure of mucociliary clearance, which may trigger CF lung disease exacerbations, as shown previously in a model of respiratory syncytial virus infection (60). Another metabolic pathway, tryptophan catabolism, was also over-represented in CF AECs following RV infection. Tryptophan metabolism has previously been found to be dysregulated in CF AECs (61) and has been implicated in Pseudomonas aeruginosa infection, oxidative stress, and Th17 hyperinflammation (62,63). Alteration of tryptophan metabolism results in accumulation of kynurenine and anthranilate, which could subsequently disrupt the homeostatic balance of the host's innate immune system and reduce the antimicrobial activity of the airway epithelium.
Other biosynthetic pathways identified in association with RV infection in CF include sialic acid metabolism and O-linked glycosylation of mucins. Sialic acids are a family of negatively charged monosaccharides that are commonly expressed as the terminal residues of glycans on the glycoconjugates of the epithelial cell surface lining the airways and are also major components of secreted mucins in the airway. Previous studies have identified increased fucosylation and decreased sialylation in cultured AECs, while the opposite was reported in CF sputum (64)(65)(66)(67). As a key contributor to the rheological properties of mucus, aberrant sialic acid metabolism may worsen the pathological conditions of CF. O-linked glycosylation is a post-translational modification process that occurs within the endoplasmic reticulum (ER) and Golgi complex. Enzymes in the ER and Golgi complex regulate glycosylation of N-glycans and O-glycans by successively adding to and then remodeling mucin oligosaccharides prior to transport to cell membranes for tethering or secretion. Here, alterations of genes encoding glycosyltransferases, such as N-acetylgalactosaminyltransferases, N-acetylglucosaminyltransferases, and galactosyltransferases, were identified in our RNA-Seq analysis. We corroborated a number of these as unique to the AEC response to RV in children with CF. Changes in these glycosyltransferases could potentially alter the O-glycans on cell surfaces and thus affect interactions with airway pathogens and irritant exposures. Emerging evidence suggests that alteration of mucin glycosylation is a response to infection and inflammation and might induce extended conformational changes that prevent damage from proteolytic enzymes (68).
Although the impact that CFTR mutations have on mucin biomolecules is unknown, our results suggest that RV infection could be a mechanism contributing to changes in mucin glycosylation that are exclusive to CF and might influence mucosal barrier function. A previous investigation demonstrated that a surplus of unfolded proteins resulting from blocked glycosylation leads to prolonged ER stress and activation of the unfolded protein response (UPR), causing cell death (69). Previous in vitro work using an immortalized cell line discovered a pronounced reprogramming of host cell metabolism toward an anabolic state, including upregulation of glucose uptake, glycogenolysis, nucleotide synthesis, and lipogenesis (70). Considering that most of the metabolic changes found in this study occurred post-RV infection, future studies integrating the transcriptomic signatures with analyses of the metabolites produced by CF AECs in response to RV infection will provide significant insight into the exact metabolic changes that occur during infection.
Increased total iron and zinc have previously been associated with airway inflammation in CF (71). These results suggest that RV infection in the CF airway is associated with the presence of redox-active biometals. A previous study (72) has suggested that the dysregulation of iron homeostasis accompanies respiratory virus infection, which, in turn, facilitates Pseudomonas biofilm growth. Understanding the mechanistic link between virus infection, alteration of the cellular microenvironment, and instigation of secondary infection might aid in the development of new treatments.
We acknowledge some limitations in the experimental design. First, we only analyzed transcriptomics at the 24-h time point, primarily due to the limited number and expansion of primary cells established from each patient. However, early optimization of our infection model did assess the transcriptional changes earlier (data not shown), and the greatest transcriptional change identified occurred at the 24-h time point. Although methodologies now exist to assist with primary AEC expansion in vitro (29), its effects at the transcriptomic level remain unknown and, thus, the use of unaltered primary airway cells remains a significant strength of this study. Future investigations could possibly include additional time points to better appreciate the transcriptional signature changes over the full course of RV infection as well as the long-term consequence of viral infection on CF AECs. Second, this study utilized a laboratory strain of rhinovirus (RV1b), which might exert differential effects on CF AEC compared to clinically derived isolates known to cause exacerbations in this cohort. With different RV serotypes causing infection in CF airways (10), future studies may identify whether innate immune responses may be serotype-specific. Similarly, comparison studies to other viruses (respiratory syncytial virus, influenza) would also assist in our understanding of the contribution of early-life viruses to CF disease progression. Finally, the simplified monolayer cell culture model of basal CF AECs may be regarded as a limitation, but basal cells are the primary target of RV (73). While monolayer cultures may oversimplify the multicellular interactions of epithelial (ciliated, goblet, basal, secretory cells) and immune cells (dendritic cells, neutrophils), it is an important, repeatable model with low methodological variation, and we were able to validate genes in differentiated AEC. 
Overall, we are confident that these limitations are minor and that our results provide new insight into new therapeutic targets for treating acute viral infections in CF that can be validated in future transcriptomic studies assessing differentiated AEC models.
In conclusion, this study shows that, at the transcriptomic level, CF AECs induce a complex and unique set of responses when infected with RV in vitro that have implications for lung disease progression in CF. Despite type I, II, and III interferon signaling being involved in the core CF antiviral response, IFN protein levels were lower in CF AEC when compared to non-CF AEC. Metabolic and biosynthetic pathways were unexpectedly integrated with the core CF antiviral response, and multiple key regulatory molecules of the antiviral response were dysregulated in CF AEC, revealing new potential to modulate CF AEC innate immunity to RV infection. Future work will explore whether these regulatory molecules are potential targets for therapy unique to RV and may be leveraged to reduce the impact viral infections have on lung disease progression in CF.
DATA AVAILABILITY STATEMENT
Raw datasets have been uploaded to GEO, with accession number GSE138167.
ETHICS STATEMENT
The study was approved by the St. John of God Human Ethics Committee (SJOG#901) and Perth Children's Hospital Ethics Committee (#1762), and written informed consent was obtained from parents or guardians.
The Association between Outdoor Artificial Light at Night and Breast Cancer Risk in Black and White Women in the Southern Community Cohort Study
Qian Xiao,1 Gretchen L. Gierach,2 Cici Bauer,3 William J. Blot,4 Peter James,5,6 and Rena R. Jones7

1 Department of Epidemiology, Human Genetics and Environmental Health, School of Public Health, University of Texas Health Science Center at Houston, Houston, Texas, USA; 2 Integrative Tumor Epidemiology Branch, Division of Cancer Epidemiology and Genetics, National Cancer Institute, Rockville, Maryland, USA; 3 Department of Biostatistics and Data Science, University of Texas Health Science Center at Houston, Houston, Texas, USA; 4 Division of Epidemiology, Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA; 5 Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, Massachusetts, USA; 6 Department of Environmental Health, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA; 7 Occupational and Environmental Epidemiology Branch, Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, Maryland, USA
Introduction
Black women in the United States are more likely to develop breast cancer at a younger age and to be diagnosed with more aggressive subtypes and more advanced stage disease, both contributing to higher rates of breast cancer mortality among Black women. 1 Light at night (LAN) has been proposed as a breast cancer risk factor because it inhibits nighttime production of melatonin, a hormone that may modulate biological pathways involved in breast cancer carcinogenesis. 2,3 Several epidemiologic studies have linked higher outdoor LAN estimated from satellite imagery to elevated incidence of breast cancer, including in cohorts predominantly comprised of White women with relatively high socioeconomic status (SES). 4,5,6 However, it remains unclear whether LAN is associated with breast cancer risk among Black women and women of lower SES.
Methods
We examined the relationship between LAN and incident breast cancer in the Southern Community Cohort Study (SCCS). 7,8 The vast majority of participants (86%) were recruited from community health centers in the southeastern United States that primarily served uninsured and underinsured populations, and ~2/3 were Black. Our analytic cohort included 30,518 Black and 12,982 White women who were cancer free and reported residential addresses at baseline. LAN exposures were estimated by linking geocoded baseline addresses (2002-2009) with satellite images in 2004 obtained by the U.S. Defense Meteorological Satellite Program's Operational Linescan System, and we used the high-dynamic-range data to avoid saturation in high-LAN areas. 9 Incident breast cancer cases were identified via linkage to state cancer registries, and vital status was ascertained from the Social Security Administration, both through 31 December 2017. Data on estrogen receptor (ER) status and cancer stage were obtained from cancer registries and supplemented by pathology reports and medical records. Race was self-reported at baseline. Institutional review boards at Vanderbilt University (Nashville, TN) and Meharry Medical College (Nashville, TN) approved the study, and participants provided informed consent at the time of enrollment. We used Cox proportional hazards models to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) comparing higher quintiles of LAN (Q2-Q5) with the lowest quintile, as well as for each 10-unit increase in LAN. Models were adjusted for multiple covariates as listed in table footnotes.
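The quintile comparison underlying the HR estimates can be sketched as follows. This is a simplified stand-in, not the study's Cox model: it only shows how a continuous exposure is cut into quintiles (Q1-Q5) before fitting; the toy exposure values are hypothetical.

```python
# Sketch of exposure-quintile assignment (hypothetical data, not the
# SCCS analysis itself, which fit Cox proportional hazards models).

def quintile_cutpoints(values):
    """20th/40th/60th/80th percentile cut points (nearest-rank method)."""
    s = sorted(values)
    n = len(s)
    return [s[int(n * p) - 1] for p in (0.2, 0.4, 0.6, 0.8)]

def assign_quintile(value, cuts):
    """Return 1..5 depending on where value falls among the cut points."""
    for i, c in enumerate(cuts, start=1):
        if value <= c:
            return i
    return 5

cuts = quintile_cutpoints(list(range(1, 101)))  # toy exposure distribution
print(cuts)                       # [20, 40, 60, 80]
print(assign_quintile(5, cuts))   # 1 (lowest-exposure referent group)
print(assign_quintile(95, cuts))  # 5 (highest-exposure group)
```

In the actual analysis, the resulting quintile indicator (with Q1 as referent) would enter a covariate-adjusted Cox model, e.g., via a survival-analysis library.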
Results
Among all women in the cohort, we found a statistically significant increased risk of breast cancer overall in association with increasing levels of LAN [HR Q5 vs. Q1 = 1.27 (95% CI: 1.00, 1.60), p-trend = 0.05] and for ER+ breast cancer specifically [HR Q5 vs. Q1 = 1.37 (95% CI: 1.02, 1.84), p-trend = 0.01] (Table 1). For Black women, the highest quintile was associated, with borderline statistical significance, with a 28% increase in overall breast cancer risk [HR Q5 vs. Q1 = 1.28 (95% CI: 0.98, 1.68), p-trend = 0.05] and a 33% increase in ER+ breast cancer risk [HR Q5 vs. Q1 = 1.33 (95% CI: 0.94, 1.88), p-trend = 0.02]. The patterns of association appeared similar in White women, but the effect estimates were relatively less precise owing to smaller sample sizes, and the p-trend values were not statistically significant. For ER− breast cancer in Black women, breast cancer incidence appeared higher for women in Q2-Q5 of LAN compared with Q1 but did not show a clear exposure-response relationship. Results from the analysis stratified by tumor stage were mixed (Table 2): in Black women, the relationship between LAN and increased breast cancer risk was observed for localized breast cancer only, whereas in White women, the relationship was observed for regional/distant stages.
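A reported HR and its 95% CI are internally linked on the log scale, which gives a quick consistency check on estimates like those above. The sketch below recovers the implied standard error of log(HR) from the overall estimate [HR = 1.27, 95% CI (1.00, 1.60)] taken from the text; the back-calculation method is standard, not specific to this study.

```python
# Back-of-envelope consistency check on a reported hazard ratio:
# the 95% CI bounds are exp(log HR ± 1.96·SE), so the SE of log(HR)
# can be recovered from the reported bounds.
import math

def se_from_ci(lo, hi, z=1.96):
    """Standard error of log(HR) implied by a reported 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

hr, lo, hi = 1.27, 1.00, 1.60          # values reported in the text
se = se_from_ci(lo, hi)
log_hr = (math.log(lo) + math.log(hi)) / 2  # midpoint on the log scale
print(round(se, 3))        # implied SE of log(HR)
print(math.exp(log_hr))    # close to the reported 1.27 (CI rounding aside)
```

This kind of check is useful when pooling published estimates, since meta-analyses weight studies by the inverse of this implied variance.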
Discussion
Our findings corroborate the previously reported positive association between LAN and breast cancer risk and extend prior work by characterizing this relationship among both Black and White women in a large cohort recruited from disadvantaged communities. Several previous cohort investigations, including the California Teachers Study, 4 the Nurses' Health Study II, 5 and the National Institutes of Health-AARP Diet and Health Study, 6 reported a modest increase in breast cancer risk associated with higher outdoor LAN levels (10-14%, comparing the highest to the lowest quintile). In our SCCS analysis, the effect sizes appeared larger than those in previous cohorts, 4,5,6 although the distribution of LAN was similar and the confidence intervals overlap. We speculate that the large proportion of low-SES and Black women in the SCCS may have partially contributed to the larger effect sizes. Compared with more advantaged populations, low-SES individuals are more likely to have sleep disturbances and shorter sleep duration due to poor housing conditions, high stress, and irregular and unpredictable daily schedules, 10 and therefore they may be more likely to engage in nonsleep activities at night that lead to higher exposures to ambient LAN. The strong correlation between LAN and urbanization may also suggest its correlation with cancer screening behaviors and, subsequently, stage of disease at diagnosis. However, we did not see consistent evidence of a stronger relationship between LAN and stage of disease. We cannot exclude the possibility of residual confounding in our analyses due to factors such as lifestyle, work schedules, and access to health care. Moreover, outdoor LAN estimated from satellite imagery may not accurately reflect LAN exposures at the individual level. Future studies incorporating personal-level measures of light exposure may provide additional support for the association between LAN and breast cancer risk and help disentangle observed differences between groups.

The authors declare they have no actual or potential competing financial interests.
Acknowledgments
This work was supported by the Intramural Research Program of the National Cancer Institute (G.L.G. and R.R.J.) as well as extramural funding (R00 CA201542 from the National Cancer Institute, P.J.; 80NSSC21K0510 from the National Aeronautics and Space Administration Health and Air Quality Applied Science Team, Q.X. and C.B.).
Advancement of fluorescent aminopeptidase probes for rapid cancer detection–current uses and neurosurgical applications
Surgical resection is considered for most brain tumors to obtain tissue diagnosis and to eradicate or debulk the tumor. Glioma, the most common primary malignant brain tumor, generally has a poor prognosis despite multidisciplinary treatment with radical resection and chemoradiotherapy. Surgical resection of glioma is often complicated by the obscure border between the tumor and the adjacent brain tissue and by the tumor's infiltration into eloquent brain. 5-aminolevulinic acid is frequently used for tumor visualization, as it exhibits high fluorescence in high-grade glioma. Here, we provide an overview of the fluorescent probes currently used for brain tumors, as well as those under development for other cancers, including HMRG-based probes, 2MeSiR-based probes, and other aminopeptidase probes. We describe our recently developed HMRG-based probes for brain tumors, such as PR-HMRG, in combination with existing diagnostic approaches. These probes are remarkably effective for cancer cell recognition. Thus, they can potentially be integrated into surgical treatment for intraoperative detection of cancers.
Introduction
Surgical resection is the primary treatment for brain tumors, complemented by radiotherapy or chemotherapy for malignant types (1-3). Total resection is often challenging, especially for gliomas, due to their infiltrative nature and the difficulty of distinguishing them from surrounding tissues (4-7). Fluorescence imaging, rapidly adopted in neurosurgery, addresses these challenges. It offers low-cost, high-resolution visualization of tumors, clearly differentiating them from adjacent brain tissue and aiding in the identification of ill-defined boundaries. This method is particularly crucial in reducing residual tumor and the associated risk of regrowth or relapse (8-11).
Panel diagnostics using next-generation sequencing have advanced the identification of oncogenes in solid cancers, such as lung and breast cancers, paving the way for precision medicine (12, 13). While these omics analyses provide comprehensive insights, they lack spatiotemporal data after cell homogenization. In contrast, fluorescence imaging in surgery offers a non-invasive, real-time, and high-resolution method for observing and quantitatively analyzing biomolecules within tissues (8, 9).
Fluorescent probes can be broadly categorized based on their features (14). "Always on" probes continuously exhibit fluorescence, whereas "activatable" probes become fluorescent only upon interaction with a specific target. Today, "always on" probes such as indocyanine green (15) and fluorescein sodium are used in the neurosurgical field (9, 16). However, these probes do not always accumulate in tumor tissues and tend to emit a high background signal precisely because they are "always on" (9). On the other hand, 5-aminolevulinic acid (5-ALA) serves as an "activatable" probe, distinguishing tumor from non-tumor tissue based on variations in metabolic activity (8).
This review highlights advancements in fluorescent aminopeptidase probes, especially Hydroxymethyl Rhodamine Green (HMRG), 2-Methyl silicon rhodamine (2MeSiR), and 2-O-Methyl silicon rhodamine (2OMeSiR). These probes are expected to enhance surgical precision and rapid cancer detection. Their interaction with tumor enzymes allows accurate differentiation between tumor and surrounding tissues. We will explore their benefits and applications in fluorescence-guided surgery.

scaffold, enabling the detection of acidic pH changes in vitro with HER2-positive cells. Significantly, these probes possessed the added advantage of reversibility, making them suitable for in vivo applications in lung cancer detection (18). To address the slower reaction rate of endopeptidase-detectable probes that hydrolyze non-terminal peptides, we employed intramolecular spirocyclization (19). This concept facilitated rapid responsiveness and precise molecular design, targeting the amino-terminal or carboxyl-terminal ends (19-21). Furthermore, Kuriki et al. established HMRG-based probe libraries that potentially targeted different enzymes, achieved by substituting the acetyl group of Ac-HMRG with various amino acids (22) (Figure 1).
γ-Glutamyl (gGlu)-HMRG probe
Urano et al. applied gGlu-HMRG to human ovarian cancer cell lines (SHIN-3) and normal human umbilical vein endothelial cells, observing high fluorescence intensity and γ-glutamyltransferase (GGT) activity exclusively in SHIN-3 cells (23). gGlu-HMRG, reacting with GGT expressed on tumor cells, produces a highly fluorescent reaction product (24, 25). In experiments with mice bearing SHIN-3 tumors, intraperitoneal injection of gGlu-HMRG led to distinct high fluorescence in tumor areas, visible to the naked eye, while normal mice without tumors showed no such fluorescence and had low background GGT activity. This fluorescence was also confirmed in vitro in many of the human ovarian cancer cell lines tested. Spraying gGlu-HMRG on the peritoneal surface of mice injected with six of these cell types resulted in strong fluorescence in four types. In further studies, SHIN-3 cells transfected with red fluorescent protein (RFP) and injected into mice revealed completely overlapping fluorescence with gGlu-HMRG 10 min post-injection. Notably, the detection of SHIN3-RFP cells using gGlu-HMRG showed 100% sensitivity and specificity (23).
Onoyama et al. developed peptidase probes for esophageal squamous cell cancer (30). Screening was performed using a series of HMRG-based aminopeptidase-activatable fluorescence probes, targeting enzymes such as γ-glutamyltranspeptidase, DPP-IV, fibroblast activation protein (AcGP-HMRG), and cathepsin H (Arg-HMRG), against fresh biopsy samples. They discovered that glycine-prolyl-HMRG (GP-HMRG), targeting DPP-IV, exhibited a rapid, substantial, and specific increase in fluorescence in tumor cells. After assessing DPP-IV expression levels and enzymatic activity, they synthesized HMRG probes with various N-terminal amino acids, such as glutamic acid, lysine, tyrosine, leucine, and proline, to determine their affinity. The EP-HMRG probe, showing the highest affinity and lowest Michaelis constant (Km) for DPP-IV, was selected. Validation of EP-HMRG involved measuring the increase in fluorescence intensity in tumor and normal samples over time, revealing a significant increase in tumor samples.
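The selection of the lowest-Km probe follows directly from Michaelis-Menten kinetics, v = Vmax·[S]/(Km + [S]): at a given probe concentration, the enzyme-probe pair with the lower Km (higher affinity) reaches a larger fraction of Vmax and therefore fluoresces faster. The sketch below uses illustrative numbers, not the measured kinetic constants from the study.

```python
# Michaelis-Menten comparison of two hypothetical probe-enzyme pairs.
# Values are illustrative only.

def mm_velocity(vmax, km, s):
    """Michaelis-Menten initial velocity v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

s = 10.0                                        # probe concentration, arbitrary units
low_km = mm_velocity(vmax=1.0, km=5.0, s=s)     # higher-affinity pair
high_km = mm_velocity(vmax=1.0, km=50.0, s=s)   # lower-affinity pair
print(round(low_km, 3), round(high_km, 3))      # low-Km pair turns over faster
```

At identical Vmax, the low-Km pair runs at 2/3 of Vmax here while the high-Km pair reaches only 1/6, which is why Km was the selection criterion among probes of similar brightness.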
Proline-arginine (PR)-HMRG probe
Recently, we published our research on the development of fluorescent probes designed for glioblastoma, relying on enzymatic activity (31). We initially screened 320 fluorescent probes using homogenized tumor lysates from patients, selecting the top 10% of promising probes based on their ability to differentiate between glioblastoma and surrounding tissues. We further narrowed down these candidates in a secondary screening with fresh surgical specimens, identifying the top three probes that demonstrated the highest differential fluorescence intensities for glioblastoma detection. These results were comprehensively analyzed, and a tertiary screening involved computational, mathematical, and pathological analysis. The proline-arginine-HMRG (PR-HMRG) probe showed the highest reactivity, with 79.4% accuracy for detecting glioblastoma. We also attempted to identify the enzyme cleaving PR-HMRG using a Diced Electrophoresis Gel (DEG) assay, followed by liquid chromatography/tandem mass spectrometry (LC/MS) (32). Through LC/MS, we identified four potential enzymes, with calpain 1 (CAPN1) confirmed as the responsible one by enzyme inhibition experiments and CAPN1 RNA expression analysis. In U87 glioblastoma cells, CAPN1 knockdown reduced PR-HMRG fluorescence (31). In a U87 orthotopic xenograft model, PR-HMRG displayed higher fluorescence in tumor areas, consistent with CAPN1 expression. Human surgical specimens also showed elevated CAPN1 expression by both immunohistochemistry and western blotting, indicating the potential of this probe for intraoperative glioblastoma detection in the future (31). The PR-HMRG probe showed early fluorescence onset within 5 min of application (31) (Figure 2).
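The primary screening step described above (rank candidates by how well they separate tumor from surrounding tissue, keep the top 10%) can be sketched as follows. The probe names and fluorescence readings are hypothetical placeholders, not the actual 320-probe library data.

```python
# Sketch of the primary screening logic: rank probes by tumor-to-normal
# fluorescence ratio, keep the top fraction. Data are hypothetical.

def top_fraction(probes, fraction=0.10):
    """Return probe names with the highest tumor/normal ratio (top `fraction`)."""
    ranked = sorted(probes, key=lambda p: p["tumor"] / p["normal"], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return [p["name"] for p in ranked[:k]]

library = [  # hypothetical fluorescence readings, arbitrary units
    {"name": "probe_A", "tumor": 90.0, "normal": 10.0},
    {"name": "probe_B", "tumor": 60.0, "normal": 20.0},
    {"name": "probe_C", "tumor": 12.0, "normal": 11.0},
    # ... the real screen covered 320 probes
]

print(top_fraction(library, fraction=0.34))  # → ['probe_A']
```

The actual study added secondary and tertiary screens (fresh specimens, then computational and pathological analysis); this sketch covers only the initial ratio-based cut.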
Takahashi et al. selected the promising fluorescent probe GP-HMRG for pancreatic cancer from our probe library (33). A dipeptidyl peptidase (DPP-IV-like enzyme) was identified as the target enzyme.
Yamamoto et al. synthesized an avidin-conjugated fluorescent probe, Avidin-Leu-HMRG (35). Avidin is a protein with a high affinity for lectins on cancer cells. In a mouse model of peritoneal ovarian metastasis, this probe demonstrated high fluorescence intensity at tumor locations, attributable to the activity of lysosomal leucine aminopeptidase.
HMRG-based fluorescent probes may also be useful for diseases other than cancer. Yamashita et al. evaluated the fluorescence intensity of pancreatic juice and intestinal juice discharged after pancreatectomy using glutamyl-phenylalanine-HMRG (gPhe-HMRG) (36). They showed that it is possible to measure protease (chymotrypsin) activity in drained pancreatic fluid samples.
2MeSiR and 2OMeSiR probes
Challenges with green HMRG probes include interference from tissue autofluorescence and attenuation related to blood absorption (37). To circumvent these limitations, researchers have identified alternative scaffolds that emit at longer wavelengths. Kushida et al. demonstrated that 2MeSiR600, a red fluorescent scaffold, could be used to design activatable probes targeting proteases, although it exhibited high background fluorescence due to its relatively high fluorescence quantum yield (38). Addressing this, Ogasawara et al. modified 2MeSiR600 to reduce background signals and synthesized 2OMeSiR600 probes for aminopeptidase activity detection, controlled by photoinduced electron transfer (39). QA-2MeSiR and QA-2OMeSiR are probes developed for detecting tumors in lung cancer. Kawashima et al. screened these probes, selecting those with the highest fluorescence intensity for lung cancer (42). They found QA-2OMeSiR to have a lower background than QA-2MeSiR, targeting enzymes like DPP-IV and PSA (42).
Other fluorescent aminopeptidase probes
Leucine aminopeptidase (LAP) is an enzyme that cleaves leucine residues from the N-terminus of peptides. LAP blood concentration has been confirmed to increase with bile stagnation, and LAP is present in various cancer cells (43). Gong Q et al. developed a fluorescent probe incorporating L-leucine into the cresyl violet skeleton as a recognition moiety, evaluated using confocal fluorescence imaging (44). They analyzed changes in LAP concentration in human liver cancer-derived HepG2 and human lung cancer-derived A549 cells under cisplatin treatment. A higher increase in LAP concentration was found in HepG2 cells. Inhibition of LAP expression with siRNA further reduced cell viability. This result indicated that LAP confers strong resistance to cisplatin. LAP is known to be involved in detoxifying cisplatin in hepatoma cells and contributes to inherent drug resistance (44). He X et al. developed a specific and sensitive near-infrared fluorescent probe (HCAC) for in vivo imaging of LAP activity in liver disease models. HCAC revealed acetaminophen-induced liver injury and upregulation of LAP in tumor mouse models (45).
The pyroglutamate aminopeptidase-1 (PGP-1) enzyme plays an important role in inflammation, which involves immune cells, blood vessels, and molecular mediators (46). Cao et al. designed a red-emitting ratiometric fluorescence sensor (DP-1) that specifically detects PGP-1. Using DP-1 imaging in human liver cancer-derived HepG2 cells and the mouse macrophage-like cell line RAW264, they showed that PGP-1 expression was associated with inflammation. Furthermore, imaging of mouse tumor models has shown that PGP-1 is closely associated with certain inflammatory and tumor diseases (46).
Prolyl aminopeptidases (PAP) are often present in infectious bacteria and are a potential biomarker and therapeutic target for pathogen infection (47). Liu X. et al. developed a near-infrared fluorescent "turn-on" probe (NIR-PAP) for detecting and imaging PAP activity in vivo. They showed that this probe exhibited high specificity and reactivity toward PAP under physiological pH and temperature conditions in vitro (47).
APN is expressed in ovarian carcinoma cells and is an important biomarker for cancers such as osteosarcoma and hematopoietic tumors (48-50). NIR fluorescent probes have been developed for detecting APN activity. He X et al. developed an NIR fluorescent probe for detecting APN (51). Using confocal microscopy, they showed that hepatoma cells had higher APN content than normal cells. Additionally, APN was imaged in cells and in mice in vivo. CD13/aminopeptidase N is an ectoenzyme with multiple functions, including tumor growth, migration, angiogenesis, and metastasis. Li H et al. developed the first two-photon NIR fluorescent probe for in vitro and in vivo tracking of APN (52). Hydrolysis of the amino group of the N-terminal alanyl moiety restored the intramolecular charge transfer effect, resulting in strong fluorescence. In addition, the probe DCM-APN distinguished normal cells (LD2 cells) from cancer cells (human liver cancer-derived HepG-2 and malignant melanoma B16/BL6 cells).
Discussion
The standard treatment for most brain tumors is surgical resection under a microscope, often accompanied by adjuvant radiotherapy and/or chemotherapy for malignant types (1-3). Maximal resection is attempted for prolonged tumor control and improved patient survival in most cases, except for certain tumors like malignant lymphoma and germinoma, which are sensitive to radiotherapy or chemotherapy (53, 54). The utilization of fluorescent probes in surgical procedures offers a significant advantage by enabling surgeons to accurately differentiate tumor from normal or surrounding tissue in real time (55-57). This enhanced visualization, provided by the fluorescence of these probes, leads to increased resection rates, a critical factor in surgical success. Importantly, achieving a higher extent of resection, especially gross total resection (GTR), has been independently associated with improved progression-free survival (PFS) and overall survival (OS) in patients with high-grade and supratentorial low-grade gliomas. Therefore, by facilitating more precise and extensive tumor resection, fluorescent probes have the potential to further improve PFS and OS outcomes in brain tumor patients.
The development of aminopeptidase probes, particularly HMRG-based and 2MeSiR-based probes, presents promising advances in cancer detection and monitoring as biomarkers. These probes offer unique advantages, such as rapid activation and reduced background signals. HMRG probes react quickly, yielding results within minutes, a benefit already confirmed in esophageal cancer and brain tumor studies (30, 31). Probes that target enzymes like gGlu, DPP-IV, CAPN1, LAP, PGP-1, and APN hold significant promise for detecting a variety of diseases, encompassing both cancer and infections (58, 59). At present, three fluorescent agents that have been studied and utilized widely in human neurosurgery are fluorescein sodium, ICG, and 5-ALA (8, 9).
ICG is a water-soluble molecule that is excited at a wavelength of approximately 780 nm and emits fluorescence within the 700-850 nm range, making it detectable only with a filtered scope. ICG fluorescence is observed a few seconds after intravenous administration of 0.2-0.5 mg/kg, reaching its peak at around 10 min (15, 60, 61). ICG is widely used to confirm blood flow and patency in vascular surgery for aneurysms, AVMs, and anastomoses. It can also be beneficial in assessing circulatory status when tumors compress or infiltrate the cerebral circulation (62, 63). The second-window ICG technique exploits tumors' vascular permeability and poor clearance: delivering substantial quantities of ICG allows neurosurgeons to locate tumors during surgery. However, it takes 19-30 h to visualize and does not accumulate in a tumor-specific manner (64, 65). Fluorescein demonstrates fluorescence peaking at around 530 nm when excited at approximately 480 nm, with a high detection rate for glioma in a multicenter prospective phase II trial (66). At lower concentrations, observation through a 560 nm filter is typically required to detect its fluorescence (67). Notably, at higher doses, specifically 20 mg/kg, fluorescein's fluorescence becomes visible to the naked eye (68). Confocal endoscopy and endomicroscopy, which have been employed with fluorescein, are notable for their application in brain tumor imaging (69, 70). ICG and fluorescein do not selectively activate fluorescence in malignant glioma cells. Instead, they tend to concentrate in areas where the blood-brain barrier is disrupted, a common characteristic of tumor sites (67, 71).
5-ALA can be used to visually distinguish tumor tissue from normal tissue (72, 73). 5-ALA is transformed into protoporphyrin IX (PpIX), which is a photosensitizer and precursor in heme synthesis. PpIX is excited at 405 nm (violet) and emits at 633 nm, enabling broad-spectrum activation (74). PpIX accumulation results from increased 5-ALA levels, elevated 5-ALA synthase activity, or a malfunctioning ferrochelatase (FECH) enzyme, which normally facilitates its conversion into heme (75). Glioblastoma exhibits reduced FECH expression compared to normal brain tissue, contributing to PpIX accumulation (76). Instances of ventricular wall fluorescence, indicating false positives, are observed even in cases where magnetic resonance imaging (MRI) or macroscopic observation shows no evidence of tumor involvement (77). Stummer et al. noted that 5-ALA was effective in increasing tumor resection rates to 65% and enhancing six-month progression-free survival to 41%, as opposed to lower rates without it. However, its fluorescence is stronger in high-grade gliomas and weaker in low-grade ones. The compound becomes fluorescent six hours after intake but loses potency over time as it is metabolized. Other disadvantages include the potential for false positives in cases of radionecrosis or inflammation, and false negatives in low-density areas (78-82).
Recent advancements have led to the development of both flexible and rigid endoscopic systems that utilize 5-ALA fluorescence, thereby enhancing surgeons' capabilities in the diagnosis and resection of brain tumors. The flexible endoscope system is particularly adept at observing 5-ALA fluorescence, aiding in the accurate identification of tumor margins (83). Conversely, the rigid endoscope system, which has been commercially available and widely reported, demonstrates effectiveness in 5-ALA fluorescence-guided surgery, significantly contributing to surgical outcomes (84, 85). However, despite these significant advancements, the diagnostic utility of these endoscopic systems as adjuncts to microsurgery remains somewhat limited. The integration of confocal endomicroscopy with 5-ALA is proposed as a promising approach to overcome these limitations. This integration potentially allows for a more detailed and nuanced observation of brain tumors at the microstructural level, which could be particularly beneficial in cases of suspected low-grade glioma (86, 87).
Economically, PR-HMRG, as a fluorescence-guided surgery technique, may provide a cost-effective alternative compared to the acquisition of other supportive equipment such as navigation systems, intraoperative MRI, or intraoperative ultrasound sonography (88, 89). This makes the initial cost relatively low, especially when integrated into existing systems designed for 5-ALA, avoiding the substantial initial investments associated with other advanced diagnostic imaging methods (88, 89). These microscopes are already fitted with the necessary light source and fluorescence display monitors. Utilizing the existing setup with a filter exchange avoids the significant costs associated with major equipment modifications. Integrating HMRG and 2MeSiR probes into neurosurgical microscopes equipped with 5-ALA systems involves switching the microscope's internal filters to match the specific excitation and emission profiles of these probes. HMRG requires blue light excitation at 488 nm for its green fluorescence emission at 524 nm, while 2MeSiR needs an excitation filter at 593 nm to enable its 613 nm red fluorescence emission (31, 39).
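The filter-matching point can be made concrete with a toy lookup. This is an illustrative sketch, not vendor software: the wavelengths are the ones cited in the text for HMRG and 2MeSiR, while the 10 nm tolerance is an invented value for the example.

```python
# Illustrative check that a microscope's installed excitation/emission
# filter pair matches a probe's profile. Wavelengths are those cited in
# the text; the tolerance is a made-up example value.

PROBES = {
    "HMRG":   {"excitation_nm": 488, "emission_nm": 524},  # blue -> green
    "2MeSiR": {"excitation_nm": 593, "emission_nm": 613},  # red-shifted pair
}

def filters_match(probe: str, ex_filter_nm: int, em_filter_nm: int,
                  tolerance_nm: int = 10) -> bool:
    """Return True if the installed filter pair lies within tolerance
    of the probe's excitation/emission maxima."""
    p = PROBES[probe]
    return (abs(p["excitation_nm"] - ex_filter_nm) <= tolerance_nm and
            abs(p["emission_nm"] - em_filter_nm) <= tolerance_nm)

# A 5-ALA/PpIX filter set (405 nm excitation) does not suit HMRG:
print(filters_match("HMRG", 488, 524))  # True
print(filters_match("HMRG", 405, 635))  # False
```

The point of the sketch is only that a filter exchange, not a new microscope, is what switching probes requires.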
Recent advancements in aminopeptidase probes, particularly those based on HMRG and 2MeSiR, are showing significant promise in improving cancer detection and monitoring as disease biomarkers (90, 91). These probes offer distinctive advantages, such as rapid activation and reduced background signals. Probes targeting enzymes like GGT, DPP-IV, CAPN1, LAP, and APN demonstrate potential in detecting various cancers and infections. Ongoing research aimed at enhancing their accuracy and minimizing false results is crucial. Systematic reviews and meta-analyses will likely play a key role in evaluating these newer probes as they transition from preclinical to clinical applications. In summary, fluorescent aminopeptidase probes represent a promising advancement in tumor visualization and image-guided surgery.
FIGURE 2 Fluorescence-guided resection using 5-ALA and PR-HMRG in glioma surgery. (A) Corticotomy site visualized under white light. (B) Visualization of the tumor core and margins using 5-ALA-induced fluorescence at an emission wavelength of approximately 630 nm. (C) Post-resection view showing no residual fluorescence, suggesting complete removal of the tumor. (D) Schematic representation of PR-HMRG probe activation, depicting the transition from a colorless state to fluorescence upon enzymatic reaction with the tumor.
Development of Atherosclerosis in Genetically Hyperlipidemic Rabbits during Chronic Fish-oil Ingestion
The evidence for a reduction in cardiovascular mortality from fish oil is based on epidemiologic observations. To test whether fish-oil supplementation influences the development of atherosclerosis, we treated Watanabe heritable hyperlipidemic rabbits (WHHL), an inbred strain that spontaneously develops atherosclerosis, with 2.5 ml of MaxEPA fish-oil concentrate daily and compared them to a control group fed unsupplemented rabbit chow. Serial cholesterol and triglyceride levels were monitored, as were plasma lipid hydroperoxides. The animals were given fish oil from the time of weaning until 1 year of age, when they were sacrificed and their aortas were compared for the extent of atherosclerosis. No significant differences in the cholesterol or triglyceride levels were noted between the two groups. Fatty acid hydroperoxide levels were also similar and were noted to increase from weaning (1.0±0.7 μM) to the time of sacrifice (1.8±1.5 μM, p<0.01). Fish oil had no influence on the extent of aortic atherosclerosis (25%±14% surface area for controls vs. 28%±19% for treated, p=NS), plaque thickness, or plaque volume after 1 year. We conclude that fish oil does not reduce the levels of serum cholesterol, lipid hydroperoxides, or aortic atherosclerosis in WHHL rabbits. The hypothesis that fish oil protects against atherosclerosis was not supported by this study.
In the past several years, there has been a heightened interest in the possible cardioprotective effects of dietary fish oil. Initiated by the observation of a low incidence in cardiac disease-related mortality in the Greenland Eskimo, 1 a population whose diet is high in fish consumption, this interest has been strengthened by studies suggesting that fish-oil ingestion may reduce atherosclerosis, possibly via a mechanism that relates to its influence on eicosanoid metabolism. 2-4 Other studies have suggested that circulating lipid hydroperoxides represent a chronic oxidant stress that is capable of promoting atherogenesis directly or indirectly via the stimulation of intracellular hydroperoxides. 5 There is evidence, as well, that omega-3 fatty acids can decrease the rate of generation of hydroperoxides by cyclooxygenase 6 or decrease the amount of oxidants produced locally by leukocytes via a reduction in eicosanoid biosynthesis. 7 If the omega-3 fatty acids found in fish oil affect eicosanoid synthesis or lower peroxide levels, then increasing the consumption of fish oil in a population that is prone to atherosclerosis (such as the U.S. adult population) might prevent or retard the development of atherosclerotic heart disease, independent of serum cholesterol levels or genetic predisposition.
The protective effect of chronic fish-oil ingestion on naturally occurring atherosclerosis has been based on supposition, however. No necropsy studies have been published on the Greenland Eskimos, and the presumed low incidence of atherosclerosis attributable to fish oil has been extrapolated from observations comparing populations with high and low fish consumption. 8 To examine directly whether chronic fish-oil ingestion can alter the atherosclerotic process, we evaluated the effect of chronic fish-oil supplementation to the diet of Watanabe heritable hyperlipidemic rabbits (WHHL), a strain that develops atherosclerosis spontaneously due to a genetic defect in functional low density lipoprotein (LDL) receptor production. 9 Serum cholesterol and triglycerides and lipid hydroperoxide levels were determined periodically during the study. To assess the effects of fish oil on the development of atherosclerosis, we measured the amount of atherosclerotic plaque on the aortas of the rabbits, which were sacrificed after 1 year.

breeders and were then mated with WHHL homozygous males. The subsequent offspring were either homozygous or heterozygous for the trait, a distinction made by obtaining a serum cholesterol level at the time of weaning (6 to 8 weeks of age). The cholesterol levels for the homozygous rabbits ranged between 669 and 1464 mg/dl and for the heterozygous rabbits between 72 and 306 mg/dl, thus allowing easy recognition of the homozygous group.
All rabbits were pasteurella-free and maintained at 25°C in a temperature-controlled, laminar-flow room. No deaths from infection or other cause occurred in any of the study animals. The rabbits were fed a diet of standard rabbit chow (Purina #5123) and limited to 120 g per day. The rabbits were randomly assigned to control or treatment groups. The treated group had 13 male and two virgin female rabbits, and the control group, 11 males and two virgin females. The offspring from each litter were as evenly divided as possible to try to keep all potential genetic influences similar between both groups of study animals.
The treated rabbits were given 2.5 ml MaxEPA fish-oil concentrate (generously supplied by the R.P. Scherer Corporation, Troy, MI) 6 days a week from the time of weaning until 1 year of age. This was accomplished by training the rabbits to remain stationary while the oil was slowly given via syringe into their posterior pharynx. This technique allowed the administration of a precise amount of fish oil to each rabbit without stress and prevented the possibility of oxidation of the fish oil if left standing in the rabbit food. Each rabbit consequently received 90 to 120 mg/kg/day of eicosapentaenoic acid (EPA), depending on the size of the rabbit. In addition, MaxEPA contains 0.6% wt/wt cholesterol and one IU of vitamin E/ml, which resulted in the rabbits receiving approximately 15 mg/day cholesterol and 2.5 IU vitamin E/day. (This amount of vitamin E was previously shown not to affect hydroperoxide levels in the rabbit. 10 )
Biochemical Determinations
Serum lipid levels were obtained by puncturing the middle ear artery of each rabbit at the time of weaning and at 4, 8, and 12 months of age. Total cholesterol and triglyceride levels were measured by standard enzymatic methods 11,12 ; the coefficient of variation for cholesterol was 3.2% and for triglyceride, 2.7%.
Lipid hydroperoxides were sampled at the time of blood sampling for lipid levels. Blood (5 ml) was collected into Vacutainer tubes (Becton Dickinson Labware) containing sodium citrate, and the plasma was separated by centrifugation for 15 minutes at 650 g at 4°C. Plasma was either kept briefly on ice for immediate assay or stored frozen at -15°C. The protein was partially removed from each plasma sample before assay by adding an equal volume of ethanol at 45°C and incubating it at 45°C for 20 minutes. The mixture was then chilled to -15°C for 20 minutes and centrifuged for 20 minutes at 650 g at -5°C. The supernatant was collected and assayed immediately for fatty acid hydroperoxide. 13 Briefly, prostaglandin H synthase was injected into a reaction chamber containing 100 μM of arachidonic acid, 0.1 M Tris-HCl (pH 8.5), 1 mM phenol, 2.5 mM sodium cyanide, and 50 μl ethanol. Either standard hydroperoxide or the plasma sample was added to the assay in the ethanol immediately before the enzyme. The concentration of oxygen was monitored polarographically with an oxygen electrode and was recorded continuously; the reaction lag times were recorded and related to the pmol present in a 3-ml assay mixture. Samples were assayed in triplicate. The levels of nonesterified fatty acid hydroperoxide present in plasma samples were evaluated by reference to standard curves with 15-hydroperoxyarachidonate.
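The standard-curve step of this assay can be sketched in code: lag times for known 15-hydroperoxyarachidonate standards define a line, which is then inverted to read off a sample's concentration from its observed lag. Only the procedure (fit, then invert) follows the text; the standard values below are invented for illustration.

```python
# Minimal sketch of quantitation against a standard curve: reaction lag
# times for known hydroperoxide standards define a line, and a sample's
# lag time is converted back to a concentration. All numbers are
# hypothetical illustration values, not data from the study.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept (y = m*x + b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Hypothetical standards: concentration (uM) vs. measured lag time (s)
std_conc = [0.0, 0.5, 1.0, 2.0]
std_lag = [2.0, 7.0, 12.0, 22.0]

m, b = linear_fit(std_conc, std_lag)

def conc_from_lag(lag_s):
    """Invert the standard curve: concentration for an observed lag."""
    return (lag_s - b) / m

print(round(conc_from_lag(12.0), 3))  # 1.0 uM on this invented curve
```

In practice each sample was assayed in triplicate, so the lag time fed into the inversion would be a mean of three readings.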
To ensure that the fish oil administered to the treated rabbits translated into an elevation in n-3 fatty acid circulating in the plasma, n-3 fatty acid levels were determined by random sampling of the rabbits by using gas chromatography. To 100 μl of plasma, 100 nmol butylated hydroxytoluene and 1 μmol methyl 11,14-eicosadienoate were added. Samples were extracted with 2 ml chloroform-methanol (2:1 vol/vol) and were transesterified in 1.5 ml 6% H2SO4 in methanol at 70°C for 4 hours. The methyl esters were extracted with hexane, and 500 nmol methyl tricosanoate was added as a further control to indicate the efficiency of the extraction of the internal standard. The solvent was evaporated under a nitrogen stream, and the residue was resuspended in 2 ml carbon disulfide for analysis. The gas chromatographic analysis was performed on a Packard Model 430 with a dropping needle injector and a 30 m Supelcowax 10 capillary column. Elution was isothermal at 230°C with the injector and detector at 260°C. Fatty acid methyl esters were identified by comparison of the retention times to those of standards. Quantities were determined by comparison of the peak area to that of the added methyl 11,14-eicosadienoate internal standard.
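The internal-standard arithmetic described above reduces to a peak-area ratio scaled by the known amount of standard. A minimal sketch follows; the peak areas and total fatty acid pool are invented for illustration, and only the ratio method itself comes from the text.

```python
# Sketch of internal-standard quantitation as described for the GC
# analysis: fatty acid amounts are obtained by comparing each peak area
# to that of the added methyl 11,14-eicosadienoate standard
# (1 umol = 1000 nmol per 100 ul plasma). Peak areas are illustrative.

def quantify_by_internal_standard(area_analyte, area_internal_std,
                                  nmol_internal_std=1000.0):
    """nmol of analyte = (peak-area ratio) x nmol of internal standard."""
    return area_analyte / area_internal_std * nmol_internal_std

def molar_percent(analyte_nmol, total_nmol):
    """Express one fatty acid as molar percent of all plasma fatty acids."""
    return 100.0 * analyte_nmol / total_nmol

epa = quantify_by_internal_standard(area_analyte=420.0,
                                    area_internal_std=1000.0)
print(epa)                          # 420.0 nmol
print(molar_percent(epa, 12000.0))  # 3.5 (cf. the 3.4 molar % reported)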
Morphologic Determination of Extent of Atherosclerosis
The rabbits were sacrificed at 12 months of age by an intravenous administration of anesthesia. An inflow cannula was inserted into the left carotid artery, and the proximal end was attached to a system designed to perfuse the arterial tree under controlled pressure. 14 In brief, a head of pressure is maintained by a perfusion pump and monitored by a second catheter inserted into a femoral artery. The rate of flow from the pump can be regulated to maintain the desired pressure level. After fixation with 2.5% glutaraldehyde and Sorensen's buffer under a pressure of 100 mm Hg for 30 minutes, the entire aorta is excised, opened axially, pinned to a millimeter grid, and photographed. The preparation is then stained with Sudan IV and photographed again. For determination of the area covered by plaques, both the unstained and stained preparations were evaluated. Two standard regions of the aorta were utilized; the proximal descending thoracic aorta from the left subclavian artery to the celiac artery and the abdominal aorta from the celiac artery to the distal bifurcation. In each instance, photographic color transparencies were projected onto a digitizing plate coupled to a desk top computer. The outline of each lesion was traced, and the total surface area was determined. The quantitation system was programmed to provide the percent of total surface area covered by plaques. No statistically significant differences were observed between the groups at the various times for any of the determinations.
To take into account possible differences in lesion thickness and cross-sectional area, complete transverse samples for histologic study were taken at 0.5, 2.5, 5, and 7.5 cm distal to the left subclavian artery. Paraffin-embedded sections of 7 μm were stained with hematoxylin and eosin and with the Gomori trichrome-aldehyde fuchsin stain for connective tissue. The sections were projected onto the digitizing plate by means of a microprojector, and the cross-sectional plaque areas were determined as percent of total artery wall area. Measurement of plaque thickness was also made at the point of maximal thickness at each level. An index of total lesion volume was then assigned to each specimen by establishing the product of the average of the maximal thickness at the thoracic levels and the percent of the surface area covered by plaques. The same was done for the abdominal segment utilizing the thickness at the 7.5-cm level.
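The volume index defined above is simply the product of the average maximal plaque thickness and the percent surface involvement. A minimal sketch, with invented measurements (the real values appear in the Results):

```python
# Sketch of the lesion volume index defined in the text: the average of
# the maximal plaque thickness at the sampled levels (mm) multiplied by
# the percent of surface area covered by plaques. The thickness and
# percent values below are invented for illustration.

def volume_index(max_thickness_mm, percent_surface_area):
    avg_thickness = sum(max_thickness_mm) / len(max_thickness_mm)
    return avg_thickness * percent_surface_area

# Four thoracic levels (0.5, 2.5, 5, 7.5 cm distal to the subclavian):
print(round(volume_index([0.10, 0.15, 0.20, 0.15], 33.0), 2))  # 4.95
```

Because percent involvement is dimensionless, the index carries units of mm, which matches how the values are reported in the Results.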
Statistical Analysis
The means and standard deviations for each of the measured variables were computed. Differences between the two groups were assessed with Student's t test for unpaired data. The relationships between variables were tested with the Pearson product-moment correlation coefficient. Differences were considered significant if the p value was less than 0.05.
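For reference, the two statistics named above can be computed from scratch. This is a generic sketch (pooled-variance t statistic and Pearson r), not the authors' code, and it returns the statistic only; the p value would still be looked up against the t distribution with nx + ny − 2 degrees of freedom.

```python
# Unpaired (pooled-variance) Student's t statistic and the Pearson
# product-moment correlation coefficient, implemented from their
# textbook definitions with the standard library only.
from math import sqrt

def unpaired_t(x, y):
    """Student's t statistic for two independent samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)          # pooled variance
    return (mx - my) / sqrt(sp2 * (1 / nx + 1 / ny))

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

print(unpaired_t([1, 2, 3], [1, 2, 3]))  # 0.0 (identical groups)
print(pearson_r([1, 2, 3], [2, 4, 6]))   # 1.0 (perfect linear relation)
```

In practice a statistics library would be used; the hand-rolled version just makes the formulas behind the group comparisons explicit.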
Serum Lipids
The total cholesterol and triglyceride levels of the two groups obtained at the time of weaning and at 4, 8, and 12 months of age were nearly identical at each age (Table 1). The significant downward trend of these lipids in the WHHL rabbits with age has been previously described. 15 Although the precise mechanism remains speculative, it could not be attributed to the fish oil.
Fatty Acid Hydroperoxides
The level of hydroperoxides was determined for the arterial plasma of New Zealand White rabbits (n=10) maintained on the standard diet and was 0.48±0.06 μM, similar to that reported for healthy human volunteers (0.5 μM). 10 The mean value for fatty acid hydroperoxides in the WHHL rabbits on the standard diet (1.5±1.3 μM) was greater than that for the New Zealand White rabbits of similar age on the same diet (p<0.005) (see Table 1). The hydroperoxide levels for individual WHHL rabbits varied from month to month, ranging from 0 to greater than 2 μM during the course of the study. The average amount of lipid hydroperoxide observed after 4 months in the group of WHHL rabbits fed fish oil was similar to that in rabbits fed the control diet. The values of hydroperoxide for individual WHHL rabbits receiving fish oil also fluctuated monthly over a wide range. The mean value of peroxide for all WHHL rabbits increased with age from 1.0±0.7 μM to 1.8±1.5 μM (p<0.01).
To exclude the possibility that the WHHL rabbits failed to absorb the fish oil, the fatty acid contents of the total plasma from six WHHL rabbits were randomly determined before and after the period of dietary supplementation with fish oil. Before the animals received fish oil, no EPA was detected in their plasma. After two months of fish-oil supplementation, the level of EPA in the plasma reached 23.5±8 mg/100 ml, representing 3.4±1.6 molar percent of all plasma fatty acids. Similar concentrations of EPA have been reported in the plasma of Greenland Eskimos. 16
Pathologic Analysis
In both groups of animals, lesions were most severe in the proximal aorta, particularly in the aortic arch. No grossly discernible differences could be detected in the overall distribution of the lesions between control animals and animals treated with fish oil. The histologic studies revealed intimal lesions consisting of accumulations of spherical foam cells and focal regions containing matrix fibers, both collagen and elastin. The lesions resembled those previously described for rabbits maintained on cholesterol-rich diets. 17 There were no differences in the light microscopic appearance of the lesions in the two groups.
On the basis of extent of disease as determined from the percent luminal aortic surface area involved, no significant differences were evident (see Table 2 and Figure 1). For the WHHL rabbits, 25% of the surface area was involved (range, 9.6% to 64.3%), while the experimental group fed fish oil had 28% surface involvement (range, 11.0% to 80.6%). In the thoracic aortic region where lesions were most evident, 33% of the surface showed plaques (range, 4.9% to 82%) in the control group, while in the fish-oil fed group, 33% of the surface was also involved (range, 12.1% to 93.0%). The less involved distal abdominal segment showed 17% involvement in the control animals (range, 5.5% to 36.5%) and 21% involvement in the group fed fish oil (range, 3.8% to 63.3%). The average thickness of the lesions was similar for the two groups and was 0.14 mm (range, 0.03 to 0.27 mm) for the WHHL rabbits on normal chow and 0.18 mm (range, 0.03 to 0.43 mm) for the animals fed fish oil. The volume index for the descending aorta was 6.1 mm for the animals fed fish oil and 4.4 mm for the control group. In the thoracic aorta where the lesions were most prominent, the volume index was 7.2 mm for the WHHL rabbits without treatment and 9.5 mm for those fed fish oil. None of the measurements of aortic atherosclerosis were significantly different between the two groups.
Importance of Sample Size
From the analysis of the luminal surface involvement of plaque in the aortas done during the study, it was apparent that there was wide variability in the amount of atherosclerosis in both groups of animals (see Figure 1). This variability has been reported, 18 but not addressed, in the published literature on the WHHL rabbit, although it is characteristic of the expression of atherosclerosis induced in other animals 19 and of atherosclerosis in humans with familial homozygous hyperlipidemia. 20 After two and 10 rabbits had been sacrificed, we noted a statistically significant effect of the fish oil (see Table 3) and reported a significant decrease in plasma hydroperoxide levels. 10 However, because of the degree of variability in the aortic lesions in these rabbits, it was decided at that time to increase the number of rabbits for study to better test the hypothesis. When this was done, the perceived protective effects of the fish oil were lost.
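The sample-size point can be made quantitative with a standard normal-approximation power calculation. The sketch below is illustrative, not from the paper: the SD of 16% surface area is chosen to be in the range of the values reported above, and the z constants are the usual ones for two-sided α = 0.05 and 80% power.

```python
# Normal-approximation minimum detectable difference between two group
# means. With the large between-animal SD seen in these rabbits, a
# small study can only detect very large treatment effects, which is
# why early "significant" results with few animals are fragile.
from math import sqrt

def min_detectable_diff(sd, n_per_group, z_alpha=1.96, z_beta=0.84):
    """Smallest true mean difference detectable at two-sided
    alpha = 0.05 with power = 0.80 (normal approximation)."""
    return (z_alpha + z_beta) * sd * sqrt(2.0 / n_per_group)

# SD of ~16% surface area (illustrative, within the reported range):
for n in (5, 13):
    print(n, round(min_detectable_diff(16.0, n), 1))
# 5 animals/group: only a ~28-point difference is reliably detectable;
# 13 animals/group: the threshold drops to ~18 points.
```

Read the other way around, this is why a nominally significant result in the first 10 animals can vanish as the study grows: with so few animals, sampling noise alone can span the detectable-effect threshold.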
Discussion
The evidence for a possibly beneficial influence of diets high in sea food on atherosclerosis in humans is primarily based on epidemiologic observations on the diets of populations and the incidence of cardiovascular events. 21 The mechanism of this beneficial effect, like the mechanism of atherogenesis, remains speculative. The influence of omega-3 fatty acid on platelet function has been well studied 22,23 and was not a focus of this project. Populations whose diets are high in fish oil and subjects on a Western diet supplemented with omega-3 fatty acids exhibit a prolongation in bleeding time and reduction in platelet aggregation. Recent studies indicate variable effects on neutrophil and macrophage function as well. 7 The effect of fish oil on serum cholesterol levels is less consistent. Although omega-3 fatty acids may lower serum cholesterol, primarily by lowering very low density lipoprotein (VLDL) cholesterol, this occurs primarily in patients who have type II-B and type IV hyperlipidemias. 24,25 Fish oil does not appear to be an effective cholesterol-lowering agent in the majority of people with elevated cholesterol and relatively normal triglycerides. 26 Neither was it effective in lowering the cholesterol in the WHHL rabbit. Triglyceride levels can be reduced by dietary omega-3 fatty acids, apparently by suppressing triglyceride synthesis in the liver and reducing apolipoprotein B synthesis. 25 We observed a modest but nonsignificant influence of fish oil on triglyceride levels in the WHHL rabbit, although VLDL lipoprotein metabolism in the WHHL rabbit differs from that in the human. 27 The influence of omega-3 fatty acids on lipid hydroperoxides is less well studied. Lipid hydroperoxides can damage vascular tissue and may be causal agents in the development of atherosclerosis. 28 It has been reported that patients with coronary heart disease have higher amounts of lipid hydroperoxides than do normal controls, 5 consistent with the concept that circulating hydroperoxides represent a chronic oxidant stress capable of either promoting atherogenesis directly or stimulating increases in intracellular hydroperoxides to reach atherogenic levels. The mean levels of lipid hydroperoxides in the WHHL rabbits estimated by the enzymatic assay were higher than those in New Zealand White rabbits, consistent with the possible participation of hydroperoxides in this atherogenic animal model. It was observed that the mean level of lipid hydroperoxide increased with the age of the rabbit, also consistent with the observation that the amount of atherosclerosis in these rabbits increases with age. 16 We were unable, however, to detect an influence of the ingestion of omega-3 fatty acids in reducing hydroperoxide levels. Although it remains possible that atherogenesis could be diminished by agents that reduce peroxide levels, it does not appear that omega-3 fatty acids are among them. The values of lipid hydroperoxide in an individual rabbit fluctuated considerably over the course of the study, suggesting that these levels are not in a steady state but rather in constant flux, perhaps in concert with the development of vascular lesions.
We found no influence of fish oil on the amount of atherosclerotic plaque observed in the aortas of the WHHL rabbit, whether on surface involvement, distribution, plaque thickness, or plaque volume. The microscopic appearance of these plaques was similar to that observed in rabbits in which atherosclerosis has been induced by high-fat diets. 17 Other studies looking at the influence of fish oil in animals have reported both a reduction 2,3 and an increase in atherosclerosis from fish-oil supplemented diets. 29,30 Discrepancies in the results of these studies might have several explanations. Although a dramatic model to study naturally occurring atherosclerosis, the WHHL rabbit may not be comparable to other models of atherosclerosis with respect to the influence of n-3 fatty acids on the disease process. A similar discrepancy has been reported on the effects of calcium blockers on atherogenesis in WHHL versus cholesterol-fed rabbits. 31,32 Species differences might also account for differences in the biologic effects of fish oil. Indeed, it has recently been reported that fish oil enhances monocyte adhesion and fatty streak formation in the rat. 30 Another important reason for the conflicting results of other studies, however, may relate to the number of animals studied. The observation of marked biologic variability in the amount of atherosclerosis in the rabbits that we studied is consistent with the disease process in all animal species. Thus, although the data from studies with small numbers of animals may turn out to be statistically significant, they may not necessarily be clinically meaningful. We, too, would have found a significant influence of fish oil on reducing atherosclerosis had we limited our study to 10 rabbits. The importance of sample size when reviewing and comparing other studies is underscored by our results.
We believe, however, that several aspects of this study relate to the human experience. The serum levels of omega-3 fatty acids attained in the rabbits receiving fish oil were similar to those observed in the Greenland Eskimo. 16 In addition, the main source of the dietary fatty acids in the study rabbits was marine animals, also similar to the Eskimo population. Finally, by waiting for 12 months to assess the influence of the fish oil, we were able to evaluate the long-term effect of omega-3 fatty acids.
Our results suggest that the notion that fish oil protects against the development of atherosclerosis remains speculative. Fish oil might influence cardiovascular mortality by its antithrombotic effects on platelets, however, which would tend to reduce the risk of acute myocardial infarction, as recently supported by the results of the Physicians Health Study. 33 We did not test for the presence of acute myocardial infarction but rather for chronic atherosclerosis. Similarly, it is a paucity of acute myocardial infarction, rather than chronic atherosclerosis, that was observed in the Eskimo population. In this respect, diets high in omega-3 fatty acids may create an antithrombotic effect similar to that of the ingestion of small amounts of aspirin. However, the uniform prescription of fish-oil capsules to adults with the expectation that it will either reduce their cholesterol or their development of atherosclerosis is not supported by this study.
Topological rings and their groups of units
If $R$ is a topological ring, then it is well known that $R^{\ast}$, the group of units of $R$, with the subspace topology is not necessarily a topological group. This fact first leads us to a natural definition: By an \emph{absolute topological ring} we mean a topological ring such that its group of units with the subspace topology is a topological group. We prove that every commutative ring with the $I$-adic topology is an absolute topological ring. Next we show that for a given topological ring $R$, the group $R^{\ast}$ with the subspace topology $\mathscr{T}$ is a topological group (or equivalently, $R$ is an absolute topological ring) if and only if $\mathscr{T}=\mathscr{T}_{f}$, where the topology $\mathscr{T}_{f}$ over $R^{\ast}$ is induced by the map $R^{\ast}\rightarrow R\times R$ which is given by $a\mapsto(a,a^{-1})$. If $G$ is a topological group then every monomial function $G^{n}\rightarrow G$ is continuous, and likewise if $R$ is a topological ring then every polynomial function $R^{n}\rightarrow R$ is continuous. In particular, the Boolean ring of every topological ring with the subspace topology is a topological ring. We prove that for the $I$-adic topology over a ring $R$ we have $\pi_{0}(R)=R/(\bigcap\limits_{n\geqslant1}I^{n})=t(R)$, where $\pi_{0}(R)$ is the space of connected components of $R$ and $t(R)$ is the space of irreducible closed subsets of $R$. We show that if the identity element of a topological group is dense, then its topology is trivial. As a consequence, a normal subgroup of a topological group is dense if and only if the topology of the quotient group is trivial. Finally, we realized that the main result of Koh \cite{kwangil}, as well as its corrected version \cite[Chap II, \S12, Theorem 12.1]{Ursul}, is not true; we correct this result in the right way.
Introduction
In this article, we obtain new results on topological groups and commutative topological rings. The group of units of a given topological ring with the subspace topology is not necessarily a topological group. This leads us to the notion of an absolute topological ring (see Definition 2.1). We prove that every commutative ring with the I-adic topology is an absolute topological ring (see Theorem 2.2). Next, in Theorem 2.6, we show that the group of units R* of a given topological ring R with the topology T_f induced by the map f : R* → R × R which is given by a → (a, a⁻¹) is a topological group. This theorem gives us a characterization result (see Corollary 2.7) which asserts that R* with the subspace topology T is a topological group if and only if T = T_f. If G is a topological group, then it is shown that every monomial function $G^{n}\rightarrow G$ given by $(x_{1},\dots,x_{n})\mapsto ax_{1}^{d_{1}}\cdots x_{n}^{d_{n}}$ is continuous, where $a\in G$ and each $d_{k}\in\mathbb{Z}$. Similarly, if R is a topological ring then we show that every polynomial function Rⁿ → R is continuous. This observation has several consequences (in particular, it unifies various known results as special cases).
In this article, we also give special consideration to the I-adic topology. Especially by using the theory of topological groups and rings, we obtain the following theorem, which is one of the main results of this article. First recall that for a given topological space X, by $\pi_{0}(X)$ we mean the space of connected components of X and by $t(X)$ we mean the space of irreducible closed subsets of X.

Theorem 1.1. Let I be an ideal of a commutative ring R. Consider the I-adic topology over R; then we have the following equalities of topological spaces: $\pi_{0}(R)=R/(\bigcap_{n\geqslant1}I^{n})=t(R)$.
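One ingredient of Theorem 1.1 can be read off directly from the definition of the I-adic topology: the closure of the zero element is exactly the intersection of the powers of $I$, since the basic open neighborhoods of a point $a$ are the cosets $a+I^{n}$. This is only one step of the argument, not the whole proof:

```latex
\begin{align*}
a \in \overline{\{0\}}
  &\iff (a+I^{n}) \cap \{0\} \neq \varnothing \quad \text{for all } n\geqslant 1\\
  &\iff -a \in I^{n} \quad \text{for all } n\geqslant 1\\
  &\iff a \in \bigcap_{n\geqslant 1} I^{n}.
\end{align*}
```

Since translation by any element is a homeomorphism, the closure of each point $a$ is the coset $a+\bigcap_{n\geqslant1}I^{n}$, which is where the quotient $R/(\bigcap_{n\geqslant1}I^{n})$ in the theorem comes from.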
In Theorem 2.22, we show that if the identity element of a topological group is dense, then its topology is trivial. As a consequence, a normal subgroup of a topological group is dense if and only if the topology of the quotient group is trivial (see Theorem 2.25). As an application, an ideal of a topological ring is dense if and only if the topology of the quotient ring is trivial.
While trying to understand the proof of the main result of Koh [3], we realized that this result, as well as its corrected version [6, Chap II, §12, Theorem 12.1], is not true. After some effort, we corrected this result in the right way (see Theorem 2.31). In Theorem 2.33 we also improve one of the main results of Ganesan [1, Theorem I], which asserts that a given nonzero ring is a finite nonfield ring if and only if its set of zero-divisors is finite and nonzero.
In this article, all of the rings are assumed to be commutative. But some of the results (including Theorems 2.6, 2.14 and 2.31) can be generalized to noncommutative rings.
Main Results
If R is a topological ring, then its group of units R* = {a ∈ R : ∃b ∈ R, ab = 1} with the subspace topology is not necessarily a topological group. In fact, the group operation of R* is the restriction of the multiplication map of R and hence it is continuous. But the inverse map R* → R* which is given by a → a⁻¹ is not necessarily continuous. For instance, the adele ring of a global field is a topological ring, but its group of units with the subspace topology is not a topological group (this is well known and can be found in algebraic number theory books that focus on adele rings). This observation leads us to the following notion.
Definition 2.1. By an absolute topological ring (or, topological ring with continuous inverses) we mean a topological ring such that its group of units with the subspace topology is a topological group.
In the following result we will observe that every ring can be made into an absolute topological ring in a canonical and nontrivial way. First recall that if I is an ideal of a ring R, then there exists a unique topology over R such that the collection of a + I^n with a ∈ R and n ≥ 1 a natural number forms a base for its open subsets. This topology is called the I-adic topology.

Theorem 2.2. Let I be an ideal of a ring R. Then R with the I-adic topology is an absolute topological ring.
Proof. The additive operation f : R × R → R, which is given by (a, b) → a + b, is continuous, since (a + I^n) + (b + I^n) ⊆ (a + b) + I^n; the multiplication of R is continuous as well, since (a + I^n)(b + I^n) ⊆ ab + I^n. It remains to show that the group of units R* with the subspace topology (induced by the I-adic topology) is a topological group. Indeed, the inverse map h : R* → R* which is given by a → a^{-1} is continuous, because if v ∈ (u + I^n) ∩ R* then v^{-1} − u^{-1} = (u − v)u^{-1}v^{-1} ∈ I^n.

Remark 2.3. In a correspondence with Pierre Deligne, he informed us that another nice case arises in functional analysis: every C*-algebra, and more generally every Banach algebra, is an absolute topological ring. Also note that, using the above definition, a topological field is an absolute topological ring which is also a field. For example, the field of real numbers with the Euclidean topology is a topological field.
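The continuity claims of Theorem 2.2 reduce to coset containments, which can be checked numerically in a finite quotient. A minimal sketch of ours, assuming nothing beyond the statement of the theorem; the modulus 81, the ideal (3), and the sample points are our own choices:

```python
# Finite sanity check (ours) of Theorem 2.2 in R = Z/81 with I = (3):
# the coset containments behind the continuity of +, * and of inversion on units.
import math

M = 81

def ideal_power(n):
    g = (3 ** n) % M                     # I^n = (3^n) in Z/81; the zero ideal for n >= 4
    return {(g * r) % M for r in range(M)}

def coset(a, n):
    return {(a + i) % M for i in ideal_power(n)}

for n in range(1, 5):
    for a in (2, 7, 30):
        for b in (5, 11, 44):
            # (a + I^n) + (b + I^n) lies inside (a + b) + I^n  ->  '+' is continuous
            assert {(x + y) % M for x in coset(a, n) for y in coset(b, n)} <= coset(a + b, n)
            # (a + I^n)(b + I^n) lies inside ab + I^n          ->  '*' is continuous
            assert {(x * y) % M for x in coset(a, n) for y in coset(b, n)} <= coset(a * b, n)

# Inversion on units: if v = u mod I^n and both are units, then v^-1 = u^-1 mod I^n,
# because v^-1 - u^-1 = (u - v) * u^-1 * v^-1 lies in I^n.
units = [u for u in range(M) if math.gcd(u, M) == 1]
for n in range(1, 5):
    for u in (2, 7, 44):
        inv_u = pow(u, -1, M)            # modular inverse (Python >= 3.8)
        for v in coset(u, n):
            if math.gcd(v, M) == 1:
                assert pow(v, -1, M) in coset(inv_u, n)
```

The three assertions mirror, in order, the continuity of addition, of multiplication, and of the inverse map h in the proof.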
Recall that if (X, T) is a topological space, S a set and f : S → X a map, then clearly the set T_f = {f^{-1}(U) : U ∈ T} is a topology over S and f is made into a continuous map. We call T_f the topology induced by f.

Remark 2.4. Let f : X → Y be a continuous map of topological spaces. If Im(f) ⊆ Z ⊆ Y, then f induces a continuous map g : X → Z which is given by g(x) = f(x).

Lemma 2.5. Let (R_k) be a family of topological rings. Then the direct product ring R = ∏_k R_k with the product topology is a topological ring.
Proof. It is a well known and easy exercise.
For a given topological ring R, in order to make R* a topological group we first extend its topology as follows. Consider the map f : R* → R × R given by a → (a, a^{-1}). Clearly the topology over R* induced by f is finer than the subspace topology, because the inclusion R* → R is the composition of f with the (continuous) first projection R × R → R.

Theorem 2.6. Let R be a topological ring and consider the map f : R* → R × R which is given by a → (a, a^{-1}). Then R* with the topology induced by f is a topological group.
Proof. The inverse map g : R* → R* which is given by a → a^{-1} is continuous, because f ∘ g = s ∘ f, where s : R × R → R × R is the coordinate-swap map (a, b) → (b, a), which is continuous. Next we show that the group operation h : R* × R* → R* which is given by (a, b) → ab is continuous. By Lemma 2.5, the product ring S := R × R with the product topology is a topological ring. Hence, its multiplication g′ : S × S → S, which is given by ((a, b), (c, d)) → (ac, bd), is continuous. Thus the map ϕ := g′ ∘ (f × f) : R* × R* → S is continuous and we have Z := Im(ϕ) = Im(f). Then by Remark 2.4, ϕ induces a continuous map ψ : R* × R* → Z which is given by (a, b) → (ab, a^{-1}b^{-1}). Next we show that f induces a homeomorphism θ : R* → Z onto its image, which is given by a → f(a), where the topology of Z is the subspace topology. Clearly the map θ is bijective. By Remark 2.4, it is continuous. The map θ is also an open map, because θ(f^{-1}(W)) = Z ∩ W for every open W ⊆ R × R, since f is injective. Hence, θ is a homeomorphism. Thus its inverse θ^{-1}, and so h = θ^{-1} ∘ ψ, are continuous.
Corollary 2.7. Let R be a topological ring and consider the map f : R* → R × R which is given by a → (a, a^{-1}). Then R* with the subspace topology T is a topological group if and only if T = T_f.
Proof. If R* with the subspace topology T is a topological group, then the inverse map g : R* → R* given by a → a^{-1} is continuous with respect to T, and hence so is the map f = (ι, ι ∘ g) : R* → R × R, where ι : R* → R is the inclusion. This shows that T_f ⊆ T. We also have T ⊆ T_f. Hence, T = T_f. The reverse implication follows from Theorem 2.6.
Remark 2.8.
Remember that if f, g : X → R are continuous functions with X a topological space and R a topological ring, then the pointwise addition f + g : X → R given by x → f(x) + g(x) and the pointwise multiplication f · g : X → R given by x → f(x)g(x) are continuous. Indeed, the map h : X → R × R which is given by x → (f(x), g(x)) is continuous. Thus f + g = α ∘ h and f · g = β ∘ h are continuous, where α and β are the addition and multiplication of R, respectively. If f, g : X → G are continuous functions with G a topological group, then exactly as above it can be seen that the pointwise multiplication f · g : X → G is continuous. The set of all continuous functions X → R is usually denoted by C(X, R). This set with the above operations is a ring. It is worth mentioning that the following two special cases of the ring C(X, R) are of particular interest in mathematics (especially in commutative algebra and mathematical analysis): C(X) := C(X, ℝ) and H^0(A) := C(Spec(A), ℤ), where A is a commutative ring and ℤ is equipped with the discrete topology. For the second case see e.g. [4, Theorem 5.2].
The above remark leads us to the following result.

Lemma 2.9. (i) If G is a topological group, then every monomial function G^n → G given by (x_1, . . . , x_n) → a x_1^{d_1} · · · x_n^{d_n} is continuous, where a ∈ G and each d_k ∈ ℤ. (ii) If R is a topological ring, then every polynomial function R^n → R given by (r_1, . . . , r_n) → f(r_1, . . . , r_n) with f ∈ R[x_1, . . . , x_n] is continuous.

Proof. (i): For each k, the projection map π_k : G^n → G given by (x_1, . . . , x_n) → x_k is continuous, because G^n is equipped with the product topology. The inverse map G → G is also continuous. Hence, by Remark 2.8, the map G^n → G given by (x_1, . . . , x_n) → a x_1^{d_1} · · · x_n^{d_n} is continuous, being a pointwise product of continuous maps. (ii): Similarly to the above case, it can be seen that the monomial function R^n → R given by (r_1, . . . , r_n) → a r_1^{d_1} · · · r_n^{d_n} is continuous, where a ∈ R and each d_k ≥ 0. By Remark 2.8, the pointwise addition g + h : R^n → R of every two continuous functions g, h : R^n → R is continuous. Thus the map R^n → R given by (r_1, . . . , r_n) → f(r_1, . . . , r_n) is continuous.
Recall that for any ring R, by B(R) = {e ∈ R : e = e^2} we mean the set of all idempotents of R, which is a commutative ring whose addition is e ⊕ e′ := e + e′ − 2ee′ and whose multiplication is e · e′ := ee′. We call B(R) the Boolean ring of R. For more information on this ring we refer the interested reader to [5]. We know that every subring of a topological ring with the subspace topology is a topological ring. But note that B(R) is not necessarily a subring of R. In spite of this, the property of being a topological ring is still preserved by Booleanization:

Corollary 2.12. If R is a topological ring, then the Boolean ring B(R) with the subspace topology is a topological ring.
Proof. The multiplication of B(R) is the restriction of the multiplication of R and hence it is continuous. Consider the polynomial f (x, y) = x + y − 2xy in R[x, y]. By Lemma 2.9(ii), the map f * : R × R → R given by (a, b) → a + b − 2ab is continuous. The addition of B(R) is the restriction of f * and so it is continuous.
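Corollary 2.12 and the definition of B(R) can be exercised on a concrete finite ring. A toy check of ours (the ring Z/12 is an arbitrary choice, not from the article):

```python
# Toy check (ours) of the Boolean ring B(R) for R = Z/12: the idempotents with
# addition e (+) f = e + f - 2ef and the usual multiplication form a ring.
M = 12
B = [e for e in range(M) if (e * e) % M == e]
assert B == [0, 1, 4, 9]                 # the idempotents of Z/12

def bsum(e, f):
    """The Boolean-ring addition e + f - 2ef, reduced mod 12."""
    return (e + f - 2 * e * f) % M

for e in B:
    assert bsum(e, e) == 0               # each element is its own additive inverse
    assert bsum(e, 0) == e               # 0 is the additive identity
    for f in B:
        assert bsum(e, f) in B           # closure of the addition
        assert (e * f) % M in B          # closure of the multiplication
        assert bsum(e, f) == bsum(f, e)  # commutativity
        for g in B:
            assert bsum(bsum(e, f), g) == bsum(e, bsum(f, g))   # associativity
```

Note that B(R) = {0, 1, 4, 9} is indeed not a subring of Z/12 (for instance 4 + 9 = 13 = 1 is idempotent, but 1 + 1 = 2 is not), which is why the modified addition is needed.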
In the following results (Theorems 2.13, 2.14 and 2.16), the structure of the connected components of topological groups and rings is investigated.
Theorem 2.13. Let G be a topological group. If N is the connected component of the identity element e ∈ G, then N is a normal subgroup of G and the topological group G/N is the space of connected components of G.
Proof. For each x ∈ G the map G → G given by a → xa is a homeomorphism and hence xN is a connected component of G. If x ∈ N then the connected component x^{-1}N contains the identity element and so x^{-1}N = N; this shows that x^{-1} ∈ N and so xN = N. Hence, N is a subgroup of G. If g ∈ G then the connected component g^{-1}Ng contains the identity element and so g^{-1}Ng = N. Thus N is a normal subgroup of G. Finally, let C be a connected component of G. We know that G/N is a partition of G. Thus C ∩ xN ≠ ∅ for some x ∈ G. It follows that C = xN.
A similar result holds for topological rings (which also can be found in [7, Theorem 4.5]): Theorem 2.14. Let R be a topological ring. If C ⊆ R is the connected component of the zero element, then C is an ideal of R and the topological ring R/C is the space of connected components of R.
Proof. We know that the additive group of R is a topological group. Thus by Theorem 2.13, C is an additive subgroup of R. If a ∈ R then the map R → R given by r → ar is continuous and hence aC is a connected subset of R. But aC contains the zero element and so aC ⊆ C. Similarly, Ca ⊆ C. Hence, C is a two-sided ideal of R. In Theorem 2.13, we observed that the connected components of R are precisely of the form r + C with r ∈ R.

The above result, in particular, tells us that if the ideal I is generated by a set of idempotents, or more generally if it is a pure ideal (i.e., the canonical ring map R → R/I is a flat ring map), then I is a connected component of R with respect to the I-adic topology.
If I is a proper ideal of a ring R, then by Theorem 2.16, R is not connected with respect to the I-adic topology.
Recall from [2, Chap II, §2, p. 78] that if X is a topological space, then by t(X) we mean the set of all irreducible and closed subsets of X. It can easily be seen that the set t(X) is a topological space whose closed subsets are precisely of the form t(E), where E is a closed subset of X. The canonical map X → t(X), which sends x to the closure of {x}, is continuous. If f : X → Y is a continuous map of topological spaces, then the map t(f) : t(X) → t(Y), which sends Z to the closure of f(Z), is continuous. In fact, t(−) is a covariant functor from the category of topological spaces to itself. In this regard, we have the following result.
Theorem 2.20. Let I be an ideal of a ring R. Consider the I-adic topology over R; then the topological space t(R) and the quotient space R/(⋂_{n≥1} I^n) are the same.
Proof. If Z ∈ t(R) then Z is an irreducible and closed subset of R. Since Z is nonempty, we may choose some x ∈ Z, and so the closure of {x} is contained in Z. We know that in a topological space, every irreducible subset is connected. So Z is contained in the connected component of x. Then using Theorem 2.16 and Corollary 2.19, we have Z ⊆ x + ⋂_{n≥1} I^n. On the other hand, the closure of {x} equals x + ⋂_{n≥1} I^n, and so Z = x + ⋂_{n≥1} I^n.

The converse of the above result holds trivially.
Corollary 2.23. If G is a simple topological group, then its identity element is a closed point or its topology is trivial.
Proof. If e ∈ G is the identity element, then the closure of {e} is a closed normal subgroup of G. Since G is simple, either this closure equals {e}, in which case e is a closed point, or it equals G; in the latter case e is dense in G, and so by Theorem 2.22 the topology of G is trivial.

Proof. It follows from Theorem 2.25.
Remark 2.27. By Theorem 3.2(i), every maximal ideal of a topological ring is either closed or dense. Similarly, by Theorem 3.1(i), every maximal subgroup of a topological group is either closed or dense. Also recall that a proper normal subgroup of a group is called maximal normal if it is a maximal element in the set of proper normal subgroups of that group. Again by Theorem 3.1(i), every maximal normal subgroup of a topological group is either closed or dense. Note that, in contrast to the maximal ideals in ring theory, maximal (even maximal normal) subgroups do not necessarily exist in a given infinite group.
By a compact space we mean a quasi-compact and Hausdorff topological space.
Remark 2.28. Remember that by a perfect map we mean a continuous map f : X → Y between topological spaces such that it is a closed map and for each y ∈ Y the fiber f −1 (y) is quasi-compact. For example, every continuous map from a quasicompact space into a Hausdorff space is a perfect map. It is well known and easy to check that the inverse image of every quasi-compact subset under a perfect map is quasi-compact.
We need the following well known and fundamental result in the next theorem.

Thus by (ii), f is a closed map. Each fiber of f is of the form xH, which is homeomorphic to H and hence quasi-compact.
Note that the converse of Theorem 2.29(iii) holds trivially: if the canonical map G → G/H is a perfect map for some subgroup H, then H is quasi-compact.

The main result of Koh [3] is not true. Indeed, in a given topological ring R, the canonical bijective continuous map from the quotient space R/Ann(x) onto the subspace Rx, given by r + Ann(x) → rx with x ∈ R, is not necessarily a homeomorphism, even if Rx (or, more strongly, every principal ideal of R) is a closed subset of R. An example can be found in [6, Chap II, §12, Remark 12.1]. In the following result, we correct Koh's result in the right way.
Theorem 2.31. Let R be a topological ring which is Hausdorff and such that the map f : R → R given by r → rx is a closed map for some 0 ≠ x ∈ Z(R). If Z(R) is a compact subset, then R is compact.
Proof. The induced map g : R/Ker(f) → Rx given by r + Ker(f) → rx is bijective and continuous. It is also a closed map, because f is a closed map. Hence, g is a homeomorphism from the quotient space R/Ker(f) onto the subspace Rx. Since x ≠ 0, we have Rx ⊆ Z(R). Clearly Rx is a closed subset of R, since f is a closed map. Thus Rx is quasi-compact, because every closed subset of a quasi-compact space is quasi-compact. Hence, the quotient space R/Ker(f) is quasi-compact. Since R is Hausdorff, the zero ideal is a closed point. Thus the fiber f^{-1}(0) = Ker(f) is also a closed subset of R. Also Ker(f) = Ann(x) ⊆ Z(R), since x ≠ 0. Hence, Ker(f) is quasi-compact. We know that the additive group of every topological ring is a topological group. Thus by Theorem 2.29(iii), R is quasi-compact.
Note that in the above result, Ker(f) is a closed subset of R if and only if R is Hausdorff. Indeed, if Ker(f) is closed then its image under the closed map f is a closed subset which equals the zero ideal, and so R is Hausdorff (for the reverse implication see the above proof). Hence, a corrected version of Koh's result [6, Chap II, §12, Theorem 12.1] is not true without the "Hausdorffness" assumption. Also note that in Theorem 2.31, f is a closed map if and only if the induced map R/Ker(f) → R is a closed map. Indeed, by Theorem 2.29(iii), the canonical map R → R/Ker(f) is a closed map.
Remark 2.32. Recall from basic group theory that if I is an ideal of a ring R such that I and R/I are finite sets, then R is a finite ring with |R| = |I| · |R/I|.
The following result improves [1, Theorem I].
Theorem 2.33. Let R be a nonzero ring. Then R is a finite nonfield ring if and only if Z(R) is a finite nonzero set.
Proof. The implication "⇒" is clear: since R is finite, Z(R) is finite, and if Z(R) = {0} then R would be a finite integral domain and hence a field, a contradiction. Conversely, suppose Z(R) is a finite nonzero set. Consider the discrete topology over R. Then by Theorem 2.31, R is compact and so it is finite. Also R is not a field, because Z(R) ≠ 0. Motivated by the proof of [1, Theorem I], we provide a second proof of the reverse implication without using Theorem 2.31. Assume Z(R) is a finite nonzero set. So we may choose some 0 ≠ x ∈ Z(R). Then clearly I := Ann_R(x) ⊆ Z(R). Hence, I is a finite set. The map R/I → Z(R) given by r + I → rx is injective. Thus R/I is also a finite set. Then by Remark 2.32, R is a finite ring. Moreover, |R| = |I| · |R/I| ≤ n^2 where n := |Z(R)|.
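Theorem 2.33 and the bound |R| ≤ n^2 can be tested numerically on the family R = Z/m. The test family is our choice; the theorem itself is far more general:

```python
# Numerical illustration (ours) of Theorem 2.33 on the rings R = Z/m:
# R is a finite non-field iff its set of zerodivisors Z(R) is not just {0},
# and in that case |R| <= |Z(R)|^2.

def zerodivisors(m):
    """Z(R) for R = Z/m; by the article's convention 0 counts as a zerodivisor."""
    return {a for a in range(m)
            if any((a * b) % m == 0 for b in range(1, m))}

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))

for m in range(2, 60):
    Z = zerodivisors(m)
    is_field = is_prime(m)               # Z/m is a field iff m is prime
    assert (not is_field) == (Z != {0})  # finite non-field <=> Z(R) != {0}
    if Z != {0}:
        assert m <= len(Z) ** 2          # the bound |R| <= |Z(R)|^2
```

For example, for m = p^2 the zerodivisors are the p multiples of p, so the bound |R| ≤ |Z(R)|^2 is attained with equality, showing it is sharp on this family.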
Note that in the above result, the assumption Z(R) ≠ 0 is vital. For example, the ring of integers ℤ has finitely many zerodivisors (the zero element is the only zerodivisor), but it is an infinite ring.

Proof. Assume f is continuous. We know that J is an open subset of R′ and so f^{-1}(J) is an open subset of R. But 0 ∈ f^{-1}(J). So there exists some a ∈ R and a natural number n ≥ 1 such that 0 ∈ a + I^n ⊆ f^{-1}(J). It follows that a ∈ I^n and so f(I^n) ⊆ J. To see the converse, it will be enough to show that f^{-1}(b + J^d) is an open subset of R, where b ∈ R′ and d ≥ 1. Take r ∈ f^{-1}(b + J^d). By hypothesis, f(I^{nd}) ⊆ J^d and so r ∈ r + I^{nd} ⊆ f^{-1}(b + J^d). Hence, f^{-1}(b + J^d) is an open set.
As an immediate consequence of the above result, if I and J are ideals of a ring R, then the J-adic topology is contained in the I-adic topology (in other words, the I-adic topology is finer than the J-adic topology) if and only if I^n ⊆ J for some n ≥ 1. In particular, if p and q are prime ideals of R, then the p-adic and q-adic topologies are the same if and only if p = q.
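The corollary's criterion is easy to probe for principal ideals of Z. A small sketch of ours; the search bound max_n is only a heuristic for the negative cases, though the non-containments below are also clear mathematically:

```python
# Small probe (ours) of the corollary for principal ideals of Z: the I-adic
# topology refines the J-adic topology iff I^n is contained in J for some n >= 1.

def power_in(i, j, max_n=20):
    """Does some power i^n (n <= max_n) lie in the ideal (j) of Z?
    The bound max_n is a heuristic for the negative cases."""
    return any((i ** n) % j == 0 for n in range(1, max_n + 1))

assert power_in(2, 8)        # 2^3 lies in (8): the (2)-adic topology refines the (8)-adic
assert power_in(8, 2)        # 8 lies in (2): and conversely, so the two topologies coincide
assert not power_in(2, 3)    # no power of 2 lies in (3) ...
assert not power_in(3, 2)    # ... and vice versa: the 2- and 3-adic topologies are incomparable
```

The last two checks illustrate the prime-ideal case of the corollary: the (2)-adic and (3)-adic topologies on Z differ, as p = q fails.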
Remember that a subset E of a topological space X is called locally closed if for each point x ∈ E there is an open neighborhood U ⊆ X of x such that U ∩ E is a closed subset of U (clearly this notion generalizes that of a closed subset). We can generalize it a little further: a subset E of a topological space X is called weak closed if there exists some open U ⊆ X such that U ∩ E is a nonempty closed subset of U. This notion enables us to reformulate a well known technical result more simply.

Corollary 2.36. Every finite weak closed subset of a topological group which is closed under the group operation is a closed subgroup.
Proof. It is well known and easy to check that every finite nonempty subset of a group which is closed under the group operation is a subgroup. Then by the above theorem, it is also a closed subset.
Appendix
In this section, we give alternative proofs of the following well known results (which can be found in [6] or [7]).

The implication "⇒" is clear. Conversely, if a, b ∈ G are distinct elements then (a, b^{-1}) ∈ (G × G) \ g^{-1}(e), where g : G × G → G is the group operation of G. Thus there are open subsets U and V in G such that (a, b^{-1}) ∈ U × V ⊆ (G × G) \ g^{-1}(e). It follows that b ∈ f^{-1}(V), where f : G → G is the inverse map. Clearly f^{-1}(V) is an open subset of G, and we have U ∩ f^{-1}(V) = ∅, because if x ∈ U ∩ f^{-1}(V) then (x, x^{-1}) ∈ U × V, but g(x, x^{-1}) = e, which is a contradiction. (iii): The implication "⇒" is obvious, since G is nonempty. Conversely, let a ∈ G be an isolated point. It suffices to show that each point b ∈ G is an isolated point.
The median eyes of trilobites
Arthropods typically possess two types of eyes: compound eyes, and the ocellar, so-called 'median eyes'. Only trilobites, an important group of arthropods during the Palaeozoic, seem not to possess median eyes. While compound eyes are the focus of many investigations, median eyes are not as well considered. Here we give an overview of the occurrence of median eyes in the arthropod realm and their phylogenetic relationship to other ocellar eye-systems among invertebrates. We discuss median eyes as represented in the fossil record, e.g. in arthropods of the Cambrian fauna, and document median eyes in trilobites for the first time. We make clear that ocellar systems, homologous to median eyes and possibly their predecessors, are the primordial visual system, and that compound eyes evolved later. Furthermore, the original number of median eyes is two, as retained in chelicerates. Four, probably the consequence of a gene duplication, can be found for example in basal crustaceans; three is a derived number, resulting from fusion of the central median eyes, and characterises Mandibulata. Median eyes are present in larval trilobites, but lie below a probably thin, translucent cuticle, as described here, which explains why they have hitherto escaped detection. This article thus reviews the complex representation and evolution of median eyes among arthropods and fills the gap of the missing median eyes in trilobites. The number of median eyes represented in an arthropod is now an important tool for finding its position in the phylogenetic tree.
and Collembola, and are innervated by the same inner part of the protocerebrum as are the median eyes. Functionally they often change during ontogeny into endocrine organs, for example the head glands of many myriapods (Gabe organ), or the X- and Y-organs of crustaceans. As a result their relation to median eyes remains unclear 9,10. There is a very detailed discussion of the complex and highly diverse situation of the median eye and frontal organ systems of recent arthropods in the work of Elofsson [11][12][13] and Paulus 10; for a detailed discussion of the structure and function of dorsal organs see Supplement 1.
[Figure caption fragment: (Størmer, 1934), Eurypterida, Silurian, Estonia 47, plate 1, arrow indicates (q) two median eyes. (r) Opabinia regalis Walcott, 1912, Middle Cambrian, Burgess Shale, Canada 99. (s) Three median eyes (blue) and two lateral eyes (green) of (r). cc crystalline cone (part of the dioptric apparatus), L lens, pc screening pigment cells, r rhabdom (light-perceiving structure), rc receptor cells, re retina.]

If we follow Gehring's conception 14 that, as in the Cambrian lobopod Microdictyon sp., in the panarthropod ancestor each (proto-)segment was equipped not just with one pair of limbs, but also with a pair of compound eyes 15, as well as paired internal organs, one may expect that after cephalization, when three protosegments fuse to a head, the cephalon of an advanced arthropod finally should possess at least six eyes, while the 'limbs' change by diversified Hox genes into three pairs of differentiated mouth appendages. Most indicative here is the situation in the primitive branchiopod crustacean Triops sp. (Notostraca), which has survived virtually unchanged since the Triassic. In total it possesses 6 eyes (plus 4 frontal organs): 2 compound eyes and 4 median eyes (plus 2 dorsal frontal organs and 2 ventral frontal organs 16,17).

[The function of Microdictyon's originally organic, and only secondarily phosphatic, plates has been much discussed and is still controversial 18. These segmental sclerotic plates, here interpreted as compound eyes, have also been variously discussed as limb attachment points 19,20 or as protective devices 21. Meanwhile it seems more or less accepted, however, that the segmental, lateral sclerites of Microdictyon sp. are homologous to those of other lobopodians, such as Onychodictyon sp. or Hallucigenia sp., where the sclerotic plates and spines surely were protective. They are comparable to very similar net-like sclerotic plates of some Lower Cambrian palaeoscolecid worms, such as Cricocosmia sp. or Tabelliscolex sp., living in the ground. Here these plates have also been discussed as defensive organs 22. Another possible function is that the plates stabilized the shape of the hydroskeleton of these small worm-like organisms mechanically, like a Pfeffer cell, against the influence of quickly changing osmotic conditions, when hyperosmotic outer conditions could easily have made them flaccid.]
Simple eyes, especially median eyes among arthropods living today
Phylogenetic retrograde view on median eyes. Median eyes are small cup-eyes (ocelli), floored by a more or less complex retina (Fig. 2e), often covered by a lens. In phylogenetically advanced arthropods such as insects, for example, they lie, eponymously, between the compound eyes. Median eyes are plesiomorphic for panarthropods and not homologous to the larval stemmata of holometabolous insects, which actually are fused compound eyes 24. The number of median eyes among euarthropods varies, reflecting evolutionary changes. In crustaceans this eye is the larval tripartite, so-called Nauplius eye, which in adults is preserved only in copepods and in most ostracodes. In the latter they even form the main eyes (Fig. 2m-o). Three median eyes are the most common type 10,25,26, and it is only the Nauplius eyes of most Phyllopoda that consist of four median eyes 27. Insects generally show three median eyes (Fig. 2c) 28. It is only in Collembola that six ocelli are present 9,10, which are, however, visual organs of different types. Corresponding with crustaceans, all Hexapoda show three median eyes, which presumably arose by fusion of two median ocelli of the original four 9,10. Thus, these three ocelli may be understood as a synapomorphy of Crustacea and Hexapoda, i.e., an autapomorphy of Tetraconata/Pancrustacea (sensu Dohle 2001 29) 30. In total it seems that the first visual systems that equipped panarthropods were ocelli (see lobopodians); then compound eyes appeared. Ocelli, retained as median eyes, continued to co-exist with compound eyes during the course of evolution. The function of median eyes is diverse and not completely understood yet.
Apart from dragonflies, it seems that all median eyes of insects underfocus (the focal plane lies behind the light-perceiving layer of the retina), and although they are equipped in some cases with a reflecting tapetum and iris, a field of view of 150° and sometimes as many as 10,000 photoreceptors, such underfocusing forms a blurred image. There seems to be a more or less general consensus that in flying insects the dorsal ocelli are horizon detectors supporting flight equilibrium 31, p. 127. For marine organisms, however, this interpretation probably makes no sense. A good example which gives a conception of how these organs may have functioned in Palaeozoic arthropods is given by the well-investigated xiphosuran Limulus sp. In this xiphosuran the situation is complex. It has one pair of median eyes [32][33][34], and a fused pair of so-called endoparietal eyes underneath, which are considered to be rudimentary median eyes 35,36. In larval stages there is a third pair of ocellar eyes close to the brain, merging later with the frontal organ. As a result there probably were 4 median eyes originally, and they all are innervated by the same center within the central body of the brain 37. There are also two ventral photoreceptors in early instars, which, however, later change to olfactory organs [38][39][40].
The receptors of the median eyes are sensitive to visible, but also to ultraviolet radiation. The sensitivity of the lateral and median eyes is controlled by a clock in the anterior part of the brain, while signals from the median eyes enhance the degree of dark adaptation in the lateral eyes according to the amount of UV radiation reflected by the moon at night 41. Because UV radiation attenuates sharply with water depth, it well may be that this function is used to control the residence of horseshoe crabs within a specific range of depth in the sea 42. Furthermore, horseshoe crabs use their compound eyes to locate mates 43 during the night; thus the median ocelli may enhance the spawning process (for an overview see Battelle 44). The function as a kind of 'setter' for the lateral eyes during the night can be imagined as a useful support for the vision of ancient marine animals in the same way.
In Pancrustacea (Crustacea and Hexapoda) the cephalon is built from three segments, as reflected by the tripartite brain, which consists of the proto-, deutero- and tritocerebrum. Regardless of whether the head arises by fusion of three thoracic segments or evolved from a duplicated single-segment head not homologous with any thoracic segments 45, there remains the problem that frontal organs and median eyes are innervated by neuropils separate from those of the compound eyes. All centers lie within the protocerebrum and are not positioned serially according to the tagmata. Evolution, however, generated a great plasticity in forming brains, and it probably is of great functional advantage if the spatial distance between all these visual centers is as short as possible. Lev, Chipman and colleagues give a comprehensive review of the current discussion on cephalization in arthropods 45,46.
Myriapods do not possess median eyes at all, which, in the context of their generally reduced eye system, may be seen as an adaptation to their habitat (darkness, litter).
Within the Chelicerata, the fossil eurypterids clearly show just 2 median eyes 47,48, as do all extant chelicerates 49,50. The conservative Pycnogonida are equipped with four median eyes 35, and the fact that the four ocelli in pycnogonid larvae are innervated by a single, but bifurcated, nerve 51 may indicate the evolutionary pathway; it is likely that the chelicerates retained the original number of 2. In some spiders (Salticidae), the dorsal median eyes become the main eyes, with a complex optic and retinal system [52][53][54], while the compound eyes decay and single ommatidia fuse and build their own camera eyes ('side eyes') in varying numbers 10.
Onychophorans are ecdysozoans (invertebrates moulting a chitinous exoskeleton) and are generally considered to be closely related to arthropods and tardigrades, together forming the taxon Panarthropoda. Like their Lower Cambrian relatives the lobopodians, onychophorans possess one pair of small (0.2-0.3 mm) camera eyes, clearly with a distinct lens 23,30,53,[55][56][57]. The ocellar eyes lie at the dorsal base of the 'antennae' (the latter are probably not homologous with the antennae of arthropods or the chelicerae of the Chelicerata, but may find an equivalent homologue in the frontal filaments of some crustaceans, such as Remipedia, Cirripedia or Branchiopoda 58, p. 454). These eyes consist of pigment cups 30,55,56. The ocelli are each filled with a gelatinous lens, and the entire structure is covered by a translucent epidermis. In onychophorans cephalization has not proceeded as far as it has in insects, for example. The brain does not consist of three neuromeres; only two of them are present 59. By contrast with compound eyes, these ocelli develop from an ectodermal groove corresponding to the median eyes of euarthropods. They are associated with the central part of the brain rather than the lateral region where compound eyes are innervated 30. This central part is very similar to the arcuate body of chelicerates (sensu Strausfeld 60,61), differing from the central body of pancrustaceans in its internal neuroarchitecture, its constituent cell types, and the position of the neuropils within the brain. Both centers, however, are connected to the median eyes of crustaceans and insects, respectively chelicerates, and at most indirectly with the compound eyes [60][61][62]. Immunohistochemical experiments have recently confirmed the validity of the hypothesis that onychophoran eyes are homologous to the arthropod median ocelli 63.
Functionally, the eyes of velvet worms underfocus (as do most lens-equipped ocelli with a small retina directly below the lens). This means that the focal plane lies behind the light-perceiving layer of the retina, and thus the received image is blurred. This acts as a low-pass filter, in which only the rough patterns of the environment can be recognized, while details cannot be resolved. This may be a good adaptation for a poorly differentiated brain, such as is possessed by many of these small invertebrates 23,53.
Probably close relatives of the velvet worms (Onychophora) and arthropods are the tardigrades, with which they form the taxon Panarthropoda. Many Eutardigrada and some Arthrotardigrada, namely the Echiniscidae, possess inverse pigment-cup ocelli, which are located in the outer lobe of the brain and comprise one or a few rhabdomeric (microvillous) and ciliary sensory cells 64. Erlanger reports that Macrobiotus macronyx Dujardin, 1851 (Eutardigrada, Parachela) possesses a pigmented ocellus, 2 µm in diameter, which even has a gelatinous hemispherical lens 65, as also documented by Kristensen 66.
The tardigrades possess a brain with distinctly paired regions (lobes); most authorities agree on the existence of a pair of outer and a pair of inner lobes. Because the brain combines the connectives of 3½ segments, the whole brain may be considered homologous to the protocerebrum of arthropods 64, p. 466. The prominent outer lobes extend into the caudal region, innervating a sensory area (temporalia) and the ocellar eyes 67,68, p. 385.
Annelids' relation to the rest. Lastly, annelids are equipped with a chitinous outer membrane, but they do not moult. Annelids possess three types of photoreceptors: rhabdomeric, ciliary and phaosomal. The rhabdomeric type occurs mostly together with supportive pigment cells, while the other two types do not 69. Annelid eyes range from diminutive structures of one or two receptor cells up to large camera eyes with a vitreous body, elaborate lenses and multicellular retinas [69][70][71]. Mostly these eyes sit close to the cerebrum; they can be everse or inverse, and all are ectodermal. The most sophisticated eye systems occur among sabellid and serpulid polychaetes [72][73][74] and the pelagic predatory polychaetes of the genus Vanadis 70,75,76. The innervation of these eyes occurs through the middle part of the simple brain 77, p. 364.
Many forms of annelids possess light-perceiving organs all over the body; typical are the so-called phaosomes 77 , p. 411. In its head region, Erpobdella octoculata (Linnaeus, 1758) (Hirudinida), for example, possesses eight pairs of eyes close to the brain, each consisting of a pigment cup, open to the front, filled with 24-35 receptor cells. These receptor cells have an inverted membrane with a rhabdomeric rim protruding into a gel-like cavity (phaosome) 78 . The nerve leaves the cup on the opening side (everse). The nerves are connected directly with the middle region of the oesophageal ganglion, as are the nerves of the antennae (e.g., Saccocirrus sp., Saccocirridae) 79 . The most complicated eyes in annelids are shown by the fan worms, sabellid and serpulid polychaetes. On their feeding appendages they build compound-eye-like arrays of sensory organs, sometimes with sophisticated optics [72][73][74] .
Because of their segmentation, the annelids were formerly grouped with the onychophorans as articulates 80 , but they are now regarded as lophotrochozoans based on the formation of a trochophore larva and molecular-biological investigations 81 . Annelids show a concentration of light receptors in the head area, but their further development of more complex light-sensing organs is convergent with that of the Panarthropoda.
Thus, in total, one can observe a consistent phylogenetic lineage of the ocellar median eyes from the ocelli of onychophorans to the median eyes of euarthropods. Although the segmental composition and evolutionary development of the arthropod brain is complex and not yet completely understood 82 , the innervation of the ocelli is provided by corresponding parts of the brains: it is always from the anterior part of the protocerebrum, or from corresponding homologous parts of the central ganglia, while compound eyes are always innervated by laterally and posteriorly positioned nerves. A molecular characterization of the embryonic origin of median and compound eyes in the common house spider (Parasteatoda tepidariorum (Koch, 1841)) shows that within the eye-antennal domain both visual organs are determined in non-overlapping domains 50 . The primordia of both visual organs are formed in non-neurogenic ectoderm at different places, developing largely independently. Those of the median eyes start in an anterior median position in the developing head, while the lateral eyes start from a lateral position 50 . This principle is also well known from the fruit-fly Drosophila melanogaster Meigen, 1830 83 (www.nature.com/scientificreports/) and may have been in place already in the last common ancestor of Chelicerata and Pancrustacea/Tetraconata 50 . Trilobites today are seen as a separate branch in the phylogenetic tree between Chelicerata and Mandibulata 85 . There are strong arguments, however, to assign them to the tetraconates, because they possess a crystalline cone 4,85-89 , and consequently there should have been median eyes in trilobites, too. Following our retrograde view through the representation of ocellar median eyes and their homologous predecessors through phylogeny, based on representatives of extant organisms, it seems probable that the median eyes had been present even earlier than the trilobites.
So, where are the median eyes of trilobites, and are they present anywhere in the fossil record?
Fossil median eyes. The high diversity of elaborate compound eyes in the fossil record, especially of trilobites, but also of radiodonts, megacheirans, and other arthropods of uncertain assignment, such as Isoxys, has given rise to numerous reports about their structure and function [90][91][92][93][94][95][96][97][98] , and many aspects of their structure and function are well understood. Median eyes, however, which as mentioned are a second, probably plesiomorphic visual organ of arthropods, have received less attention. Here we demonstrate some examples of their early existence, sometimes clarifying their uncertain documentation.
Recent descriptions of the Cambrian megacheirans Leanchoilia sp. and Alalcomenaeus sp. have proved highly controversial.
Tanaka and colleagues 94 describe four compound eyes for Alalcomenaeus sp. and interpret the multiplicity of eyes as typifying chelicerates, but no median eyes were described as such. [More recent analyses show that leanchoiliids and alalcomenaeids are not chelicerates but megacheirans (in the tree they branch off before the split between chelicerates and mandibulates)] 85 . The authors apply this concept also to Leanchoilia superlata (Walcott, 1912) and Leanchoilia persephone Simonetta, 1970, seemingly based on the works of Garcia-Bellido and Collins 90 and Haug and colleagues 98 alone. Specimens of L. superlata, then newly described and illustrated by Haug 98 , show the pedunculate eyes clearly in lateral view (there Fig. 2B). The authors describe them as lateral eyes, each with short stalks arising from the antero-ventral region of the head (there Fig. 3D,F-H), and consisting of two lobes. Fig. 3G of ref. 98 clearly indicates four median eyes, identical to those described by Garcia-Bellido and Collins 90 , but there only referred to as 'eyes' with no closer discrimination. Garcia-Bellido and Collins give a comprehensive review of the history of the discussion of Leanchoilia's eyes. Walcott's original account reported on "a large pedunculated eye comparable to that of Opabinia regalis Walcott, 1912" 99 , p. 171. Raymond reported on large, reniform depressions 'likely the remains of very large, sessile compound eyes' although 'no lenses are visible' 100 , p. 213. These were very probably the ocellar median eyes. None of the later descriptions showed any eyes at all [101][102][103] , and consequently the famous reconstruction of Marianne Collins in Gould 104 showed a blind Leanchoilia sp. Garcia-Bellido and Collins 90 point out that the four median eyes were difficult to find under the microscope, and only showed up under bright sunlight or transverse light, because these eyes lie near the front of the ventral underside of the head shield.
The authors suggest that this position explains why these eyes had not been previously recorded. Because these eyes, the outer pair being larger than the inner, have no facets, the authors interpret them correctly as median ocelli. This interpretation is in accordance with the accounts of Hou and Bergström 105 and Schoenemann and Clarkson 4 . The latter described pedunculate compound eyes, and four median eyes.
Thus one may conclude that the genus Leanchoilia possessed pedunculate compound eyes, which in L. superlata may even have been bilobate 98 , and may have possessed four ocellar median eyes. Bilobate, pedunculate compound eyes were also described for Alalcomenaeus sp. 94 . There is, however, an excellent figure of Alalcomenaeus cambricus Simonetta, 1970 given by Briggs and Collins 106 , Figs. 4 and 5.4, showing a large, club-shaped stalked compound eye, which clearly is not bilobate. In consequence, the question of bilobate or 'mono-lobate' pedunculate compound eyes in Leanchoilia sp. and Alalcomenaeus sp. remains somewhat enigmatic.
Clearly, however, L. superlata and L. persephone from the Burgess Shale additionally possess four ocellar median eyes, the outer pair larger than the inner. They are not documented for Leanchoilia illecebrosa (Hou, 1987) from Maotianshan, China, probably because of the delicacy of the structure and a different mode of preservation. A. cambricus possesses three median eyes 106 , pointing the way towards the Pancrustacea. There is one interesting specimen of L. superlata, shown by Butterfield 107 , revealing the fusion of the inner median eyes (Fig. 2k,l), perhaps a transition to the three median eyes typical of pancrustaceans 108 .
The mandibulate Waptia fieldensis Walcott, 1912 (Burgess Shale) 109 and the crustacean Odaraia alata Walcott, 1912 (Burgess Shale) possess three median eyes, comparable with those of Pancrustacea 96 . Most extant ostracods (Podocopia) possess a single visual apparatus consisting of three median eyes. The ocellar cups are situated near the anterior end of the hinge, just above the base of the antennules. The Silurian Hermannina sp. (Leperditiidae, Ostracoda) from Lickershamn, Gotland, possesses a visual apparatus consisting of three median eyes (Fig. 2m-o). Some ostracod groups, such as the myodocopids, additionally display a pair of stalked compound eyes situated laterally below a translucent cuticle.
As mentioned, eurypterids possessed two median eyes, and it is likely that the chelicerates retained this original number of two.
Even the enigmatic situation in Opabinia regalis Walcott, 1912 (Burgess Shale), with its five eyes, now becomes understandable, for they can probably be interpreted as three median eyes and two lateral compound eyes. Whether the latter are compound or ocellar eyes needs further consideration (Fig. 2r,s). At least, owing to their lateral position, they seem to be homologues of the compound eyes of euarthropods.
One of the clearest and most meaningful examples in this context is given by the median eyes of Cindarella eucalla Chen et al., 1996 (Fig. 2f-j), classified within the stem group of trilobites as an element of the arachnate diversity 110 . Here we find four median eyes in the middle of the cephalon, which clearly show the typical shape of ocelli, namely a distinct cup-like structure 101 .

Median eyes in trilobites. Median eyes as such have never been documented in the trilobite literature.
The only previous author who reported them was Ruedemann 111 in 1916, but he did not illustrate them, and no former report distinguishes between dorsal organs and median eyes. Both authors of the present article have worked for several decades on trilobites, including their sensory organs; Euan Clarkson has described many of them in great detail, but had never observed median eyes. There are two possible explanations for why this is so. The first is that median eyes, by comparison with the situation in most adult crustaceans, were never present in trilobites. The second is that they may have been overlooked because they are inconspicuous. Median eyes normally are very small, just a few tens of micrometres in size. Compared to compound eyes, they have hardly any structure by which they could be distinguished from other dark structures in a petrified fossil, and, in the worst cases, as in Leanchoilia sp. or the ostracods, they may be hidden under the cuticle. If any were found, one might expect, however, structures regularly arranged in numbers of two, three or four, more or less round or oval. Median eyes in living arthropods contain pigments. Consequently, where median eyes were preserved externally, one would expect dark structures which are, among other components, the relicts of melanin or related pigments, stable over a long time period 112,113 , showing in total a cup-like, round or oval shape as the remains of an ocellus. Such structures would be expected to lie anteriorly to the compound eyes.
New evidence on median eyes in trilobites. Aulacopleura koninckii (Fig. 1a). A slightly abraded cephalon of Aulacopleura koninckii (Barrande, 1846) shows, at the front of the glabella, three almost identically shaped, inconspicuous tiny dark oval spots of equal size (~30 µm wide, ~50 µm long) (Fig. 1e-h). These three structures are lined up in parallel, slightly fanning out on the underside. All three spots are characterised by a smooth, clear outline and an equal, homogeneous dark brownish colour. This clear, regular appearance distinguishes these structures from accidental formations resulting from decay or fossilisation, but matches perfectly the characteristics, explained above, to be expected for median eyes. Even if this is an isolated discovery, it supports the concept that median eyes were originally present in trilobites. The slight abrasion of the cuticle opens a clearer perspective, indicating that the median eyes in trilobites lie, as in Leanchoilia sp. or the ostracods, below the cuticle, invisible from the outside in the fossil (Fig. 1m). The cuticle in vivo probably was translucent. The median eyes were found in a specimen at an early stage of development, and because they had never been observed before, it is quite possible that, as in crustaceans, only the early developmental stages of trilobites possessed median eyes, a reason why they have not been detected previously.
Cyclopyge sibilla. Another possible example is shown by Cyclopyge sibilla Šnajdr, 1982 (Fig. 1i-l, n-p). Here we find on the glabella three slightly squeezed, formerly probably cup-shaped dark structures, which we interpret as median eyes. By their distinct, threefold-repeated form, these structures are very different from other undifferentiated dark spots that simply follow the surface irregularities of the fossil (Fig. 1l insert). All of the presumed relicts of median eyes here consist of a group of about six cells with a central element, presumably a lens. So the median eyes of this pelagic trilobite seem to have been more complex than those of the benthic Aulacopleura sp., and probably had more distinct functions, perhaps similar to those of Limulus sp.
Because the upper part of the specimen is covered by a part of a larger trilobite of the same species, it is reasonable to assume that the median eyes here pertain also to a larval stage.
Conclusions
In summary, one may conclude that median eyes were indeed present in trilobites. That the described structures of trilobites, and of the other Palaeozoic arthropods analysed here, are indeed median eyes is concluded from structural comparison with the median eyes of extant arthropods related to trilobites. Their median eyes also consist of small retinal layers or cup-like ocelli, sometimes equipped with a simple lens, and lie in a median position on the cephalon. In trilobites there were three such eyes, as is typical for euarthropods, not four, as in some earlier forms. These median eyes consisted of cup-like ocelli, also typical for euarthropods. In Aulacopleura sp. they lie at the front of the glabella, oriented anteriorly; in Cyclopyge sp., which swam upside down, they are positioned on top of the glabella and consequently directed downwards. The median eyes of the pelagic trilobite (Cyclopyge sp.) seem to be more elaborate than those of the benthic trilobite (Aulacopleura sp.), because they seem to have possessed a lens (Fig. 1n-p). Both median eye systems were found in early instars of trilobites, and not in adult individuals. An occurrence only in larval stages, comparable to many modern crustaceans, would explain why the median eyes have been overlooked so far: the adults, historically more fully investigated, probably do not have them. Because we found both systems in slightly abraded specimens, one may assume that, as in Leanchoilia sp. and the ostracods, the median eyes lay below a translucent cuticle. When fossilised, this cuticle becomes opaque and thus makes the structures below it invisible.
It seems evident that the median eyes are homologous to the eyes of the ecdysozoan onychophorans 30,63 . Eurypterids, like most other chelicerates, show two median eyes 10,47,58 , p. 500 (Fig. 2p,q), and it is likely that the chelicerates retained the original number of two.
The xiphosuran Limulus sp., with its high diversity of eyes (two lateral compound eyes, two median eyes, one endoparietal eye (= two rudimentary ocellar median eyes), a third pair of ocellar eyes close to the larval brain, later merging with the probably chemosensory frontal organ, and numerous photo-sensors along the tail 114 ), indicates that Limulus sp. represents an exception with numerous irregular and incomparable neoplasms. The conservative pycnogonids (Chelicerata) possess four ocellar median eyes, but in larval stages two of them each are innervated by a bifurcated nerve 51 , indicating that here a duplication or a splitting might have taken place, starting from two median eyes and converging to four. This may have happened several times independently, or once in a common ancestor. The Cambrian "trilobitomorph" C. eucalla possessed four median eyes also, as did the leanchoiliids, and also phylogenetically old groups such as the crustacean phyllopods 115,116 and some
At the confluence of vicariance and dispersal: Phylogeography of cavernicolous springtails (Collembola: Arrhopalitidae, Tomoceridae) codistributed across a geologically complex karst landscape in Illinois and Missouri
Abstract The processes of vicariance and dispersal are central to our understanding of diversification, yet determining the factors that influence these processes remains a significant challenge in evolutionary biology. Caves offer ideal systems for examining the mechanisms underlying isolation, divergence, and speciation. Intrinsic ecological differences among cavernicolous organisms, such as the degree of cave dependence, are thought to be major factors influencing patterns of genetic isolation in caves. Using a comparative phylogeographic approach, we employed mitochondrial and nuclear markers to assess the evolutionary history of two ecologically distinct groups of terrestrial cave‐dwelling springtails (Collembola) in the genera Pygmarrhopalites (Arrhopalitidae) and Pogonognathellus (Tomoceridae) that are codistributed in caves throughout the Salem Plateau—a once continuous karst region, now bisected by the Mississippi River Valley in Illinois and Missouri. Contrasting phylogeographic patterns recovered for troglobiotic Pygmarrhopalites sp. and eutroglophilic Pogonognathellus sp. suggests that obligate associations with cave habitats can restrict dispersal across major geographic barriers such as rivers and valleys, but may also facilitate subterranean dispersal between neighboring cave systems. Pygmarrhopalites sp. populations spanning the Mississippi River Valley were estimated to have diverged 2.9–4.8 Ma, which we attribute to vicariance resulting from climatic and geological processes involved in Mississippi River Valley formation beginning during the late Pliocene/early Pleistocene. Lastly, we conclude that the detection of many deeply divergent, morphologically cryptic, and microendemic lineages highlights our poor understanding of microarthropod diversity in caves and exposes potential conservation concerns.
Genetic isolation is a primary driver of molecular divergence and ultimately speciation, but determining the factors that promote or constrain genetic diversity remains a significant challenge in evolutionary biology. Patterns of diversity in caves are often attributed to vicariance or dispersal, but the relative influence these processes have on the evolution and contemporary distributions of cave fauna has been widely debated (see Culver, Pipan, & Schneider, 2009; Porter, 2007). However, it is generally accepted that patterns of diversity in caves are likely shaped by a complex interaction of intrinsic factors (e.g., species-specific differences in ecology, life history, or biology) that can influence dispersal capacity and extrinsic factors (e.g., geographic barriers or climate change) that can enhance or limit dispersal opportunity (Porter, 2007).
Phylogeography, the study of the processes that influence the contemporary geographic distributions of species' populations using genetic data, can provide insights into the relative influences of evolutionary factors driving patterns of genetic isolation and divergence in biological communities (Avise, 2000; Avise et al., 1987). For instance, phylogeographic congruence among codistributed species can implicate vicariance caused by "hard" geographic barriers or environmental changes affecting entire communities (Lapointe & Rissler, 2005), whereas conflicting phylogeographic patterns may be attributable to intrinsic differences that can affect species' dispersal capacity across "soft" potential genetic barriers (e.g., Goldberg & Trewick, 2011; Hodges, Rowell, & Keogh, 2007; Hurtado, Lee, & Mateos, 2013). With cave organisms, the majority of research studies have been limited to single species (e.g., Dörge, Zaenker, Klussmann-Kolb, & Weigand, 2014; Faille et al., 2015) or cryptic species complexes with allopatric distributions (e.g., Gómez et al., 2016; Rastorgueff, Chevaldonné, Arslan, Verna, & Lejeusne, 2014). The arthropod class Collembola (springtails) offers a nearly unparalleled opportunity for elucidating the interplay of factors that affect speciation and molecular diversification in subterranean ecosystems. These small, wingless, insect-like arthropods are among the most abundant, diverse, and well-adapted organisms in caves (Christiansen, 1965; Thibaud & Deharveng, 1994), and are considered important subterranean examples of adaptive radiations (Christiansen & Culver, 1969) and parallel speciation (Christiansen, 1961, 1965; Christiansen & Culver, 1968). Their small size (body length often less than 1 mm), low vagility, and close associations with cave habitats facilitate their isolation, resulting in a high degree of endemism (Niemiller & Zigler, 2013) and cryptic species (Juan & Emerson, 2010).
For example, the springtail genus Pseudosinella alone contains more than 100 species found in caves worldwide, many of which are known only from a single cave system (Hopkin, 1997). Most importantly, cave-dwelling springtails have varying levels of ecological specificity to, and dependence upon, cave habitats.
Although surface species are commonly found in caves as accidentals (i.e., they may fall or get washed into caves, but cannot maintain populations in caves), the majority of collembolans occurring in caves can maintain permanent subterranean populations and are either classified as troglobionts (i.e., obligate cave-dwellers that are never encountered on the surface and often have conspicuous troglomorphic adaptations associated with cave habitats) or eutroglophiles (i.e., facultative cave-dwellers that also occur in surface habitat and usually lack apparent troglomorphy) (see Sket, 2008 for current ecological classifications of subterranean animals). Because troglobiotic and eutroglophilic springtails can be codistributed (Katz et al., 2016;Soto-Adames & Taylor, 2013), extrinsic evolutionary processes are likely exerting similar selective pressures upon them. Therefore, opposing patterns of genetic structure among these species distributed across the same geographic area can reflect intrinsic factors, such as differences in the degree of ecological association with cave habitats (cave dependence) that can affect a species' capacity to disperse across geographic barriers (Pérez-Moreno et al., 2017;Weckstein et al., 2016). Disparate geographic distributions among closely related surface springtails provide some indirect evidence that varying dispersal capacity may be associated with differences in species-specific traits (Costa et al., 2013;Katz, Giordano, & Soto-Adames, 2015), and Christiansen and Culver's (1987) biogeographic study of cave springtails revealed that more pronounced troglomorphy can be correlated with smaller geographic ranges.
Long-term local persistence and small geographic ranges are typical for troglobionts, and by definition, these species cannot maintain surface populations to facilitate dispersal between discontinuous subterranean habitats. Therefore, patterns of genetic differentiation in troglobionts are likely driven primarily by isolation due to physical barriers and reflect vicariance. On the contrary, we expect isolation by distance (IBD) to be the primary driver of genetic variation in eutroglophiles owing to their propensity to disperse across surface habitats.
To test these predictions, we incorporate a suite of molecular-based approaches to (a) delimit cryptic species in the focal complexes, (b) detect molecular signatures of isolation to identify potential genetic barriers, and (c) estimate evolutionary relationships and divergence times to elucidate the roles of vicariance and dispersal in shaping patterns of cave-dwelling springtail diversity throughout the Salem Plateau, a major cave-bearing karst region of the Ozark Plateau that spans the Mississippi River Valley in Illinois and Missouri. Recent molecular-based biogeographic investigations of Ozark cave biodiversity have been useful for addressing evolutionary hypotheses for salamanders (Phillips, Fenolio, Emel, & Bonett, 2017) and fish broadly distributed across the Mississippi River Valley (Niemiller et al., 2012). However, the phylogeography of cave invertebrates has yet to be evaluated for the Salem Plateau. Fine-scale phylogeographic patterns of cave springtails distributed across the Mississippi River may be used to investigate the impact of intrinsic and extrinsic factors (e.g., the degree of cave dependence and geographic barriers) on the evolution of cave organisms, broaden our limited understanding of subterranean microarthropod diversity, and assess biogeographic interpretations that may help clarify the complex, yet poorly understood, geological history of the Salem Plateau.
| Study system, focal taxa, and field collections
The complex geological landscape of the Salem Plateau (Figure 1) provides the ecological context for testing biogeographic hypotheses of vicariance and dispersal. This once continuous karst region, now bisected by the Mississippi River Valley, is located south of St. Louis and covers just eight counties, but contains thousands of sinkholes and includes the largest cave systems in Illinois and Missouri (Panno, Weibel, & Li, 1997).

[Figure 1. Salem Plateau cave-bearing karst spanning the Mississippi River border of Illinois and Missouri (gray); adapted from Panno et al. (1997, 1999). Table note: "a" refers to high-density sinkhole areas in the Salem Plateau karst study area defined for Illinois (Panno et al., 1997, 1999; Venarsky et al., 2009) and Missouri (Burr et al., 2001; Panno et al., 1999) (see Figure 1).]

… abundance on organic debris and rock surfaces in cave entrances and twilight zones, and less frequently and in smaller numbers in cave dark zones.

… preparation, but the fragile cuticles were easily damaged when handled, and small individuals were nearly invisible, making them difficult to recover. Therefore, the heads of specimens, which include important diagnostic morphology (e.g., the arrangement and morphology of setae), were dissected and stored separately prior to DNA extraction as backup vouchers for those cases where the now-translucent bodies were not recovered.
COI and 16S are particularly useful for evaluating population-level variation, as they exhibit high levels of genetic variation and have been used extensively for species- and population-level phylogenetic research in springtails (Hogg & Hebert, 2004). Collembola are generally characterized by extremely high levels of molecular diversity (Katz et al., 2015); therefore, the more slowly evolving loci 28S and histone-3 were included to provide stronger phylogenetic signal among more distantly related taxa. Histone-3 and 28S D1-3 were excluded for Pogonognathellus due to inconsistent amplification.
See Supporting information Appendix S1 for list of all taxa included in this study, including sample information and all sequences with corresponding GenBank (Benson et al., 2013) accession numbers.
See Supporting information Appendix S2 for PCR and sequencing primers, including a description of the PCR protocol and sequence alignment methods used in this study. The outgroup taxa listed in Supporting information Appendix S3 were chosen based on their affinities with the target taxa and availability of sequences in GenBank.
| Detecting and delimiting cryptic diversity
The presence of cryptic diversity was detected by incorporating a number of different tests. First, we calculated uncorrected pairwise COI distance frequencies for all sampled specimens with PAUP* 4.0a build 159 (Swofford, 2002) and plotted distance frequency histograms to detect the presence of interspecific variation within each targeted morphospecies. A gap between the greatest putative intraspecific and smallest putative interspecific pairwise distances can be interpreted as the boundary between species-and populationlevel variation (Meier, Zhang, & Ali, 2008).
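The barcode-gap logic described above is straightforward to prototype. The sketch below is an illustration, not the study's PAUP* workflow, and the toy sequences are hypothetical: it computes uncorrected pairwise p-distances for an alignment and takes the midpoint of the largest gap in the sorted distances as a candidate boundary between intra- and interspecific variation.

```python
from itertools import combinations

def p_distance(a, b):
    """Uncorrected p-distance: proportion of differing sites,
    ignoring positions where either sequence has a gap or an N."""
    sites = [(x, y) for x, y in zip(a, b) if x not in "-N" and y not in "-N"]
    if not sites:
        return 0.0
    return sum(x != y for x, y in sites) / len(sites)

def barcode_gap(seqs):
    """Sort all pairwise distances and return them together with the
    midpoint of the largest gap between successive values."""
    d = sorted(p_distance(a, b) for a, b in combinations(seqs, 2))
    width, threshold = max(
        (d[i + 1] - d[i], (d[i] + d[i + 1]) / 2) for i in range(len(d) - 1)
    )
    return d, threshold

# Toy alignment: two tight haplotype clusters separated by deep divergence
seqs = ["ACGTACGTAC", "ACGTACGTAT",   # putative species 1
        "TGCATGCAAC", "TGCATGCAAT"]   # putative species 2
dists, threshold = barcode_gap(seqs)  # intra ~0.1, inter ~0.8-0.9
```

With real data, a clean bimodal distribution (as reported for both morphospecies here) makes the largest-gap midpoint a natural species/population threshold; overlapping distributions would make the single-threshold choice unreliable.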
To determine how interspecific variation was geographically distributed, we performed a hierarchical analysis of molecular variance (AMOVA) for COI, 16S, and 28S using all taxa sampled for each target morphospecies using Arlequin v. 3.5.2.2 (Excoffier & Lischer, 2010). Haplotypes were grouped within samples, among samples in caves, and among caves with 50,000 permutations performed to assess significance. The presence of strong genetic structuring within samples or among samples in caves can be an indicator of cryptic diversity because sexual isolation is typically required to maintain high levels of genetic variation occurring in sympatry.
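The hierarchical AMOVA itself was run in Arlequin; as a minimal illustration of the underlying idea, the one-level sketch below partitions squared pairwise distances among and within groups (Excoffier-style sums of squares) into a Phi-ST-like statistic. The distance matrix and group labels are invented, and the n0 coefficient assumes roughly equal group sizes.

```python
from itertools import combinations

def amova_phi_st(dist, groups):
    """One-level AMOVA-style variance partitioning from a pairwise
    distance matrix, using sums of squared distances; assumes roughly
    equal group sizes for the n0 coefficient."""
    n = len(dist)
    labels = sorted(set(groups))
    # Total and within-group sums of squared pairwise distances
    ss_total = sum(dist[i][j] ** 2 for i, j in combinations(range(n), 2)) / n
    ss_within = 0.0
    for g in labels:
        idx = [i for i, lab in enumerate(groups) if lab == g]
        ss_within += sum(dist[i][j] ** 2 for i, j in combinations(idx, 2)) / len(idx)
    ss_among = ss_total - ss_within
    ms_among = ss_among / (len(labels) - 1)
    ms_within = ss_within / (n - len(labels))
    sigma_among = max((ms_among - ms_within) / (n / len(labels)), 0.0)
    denom = sigma_among + ms_within
    return sigma_among / denom if denom else 0.0

# Invented example: two caves, two haplotypes each; all variation lies
# between caves, so the Phi-ST-like value should be maximal
dist = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 1, 0, 0],
        [1, 1, 0, 0]]
phi = amova_phi_st(dist, ["caveA", "caveA", "caveB", "caveB"])
```

Arlequin's full analysis adds further hierarchy levels and assesses significance by permuting individuals among groups, as described in the text.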
We also delimited putative species boundaries using a General Mixed Yule Coalescent (GMYC) analysis (Pons et al., 2006). This method uses ultrametric gene trees to identify the interface between population- and species-level branching patterns and demarcates genetically cohesive clades as independent evolutionary units known as operational taxonomic units (OTUs). The GMYC analysis was performed on COI gene trees using the single-threshold delimitation method implemented in the splits package (Ezard, Fujisawa, & Barraclough, 2009).
| Tests for genetic structure
The relative role of cave dependence and its influence on springtail dispersal capacity remain unclear, in part, because the identities of genetic barriers are not known for cave-dwelling springtails. To identify barriers to Pygmarrhopalites and Pogonognathellus dispersal, we evaluated and compared levels of genetic structure across cave boundaries and the Mississippi River Valley. In addition, we also included sinkhole area boundaries in the genetic structure analyses.
The most sampled OTUs for each target morphospecies, identified by the GMYC analysis, were chosen as focal OTUs for population analyses to avoid attributing deeply divergent and structured lineages to population-level variation, rather than to species-level variation (Fouquet et al., 2007). Hierarchical AMOVAs were performed independently with Arlequin for COI and 16S for both focal OTUs by grouping haplotypes within samples, among samples within barriers, and among samples across barriers. Significance was assessed with 50,000 permutations.
Patterns of population structure resulting from dispersal and genetic drift, rather than from vicariance across geographic barriers, are common in animals with low mobility and can usually be attributed to a model of IBD (Costa et al., 2013; Timmermans et al., 2005). To determine whether geographic distance is significantly correlated with genetic distance, we performed a Mantel test (Mantel, 1967; Sokal, 1979; but see Diniz-Filho et al., 2013; Legendre, Fortin, & Borcard, 2015) for each locus. We also evaluated the significance of genetic structure across barriers while controlling for geographic distance using a partial Mantel test (Smouse, Long, & Sokal, 1986), which allows for the comparison of two variables (i.e., pairwise genetic distances and position relative to a geographic barrier) while controlling for a third (i.e., geographic distances). Templeton-Crandall-Sing (TCS) haplotype networks (Clement, Snell, & Walker, 2002) for COI and 16S were estimated with PopART (Leigh & Bryant, 2015) to visualize and compare phylogeographic structure across genetic barriers for Pygmarrhopalites and Pogonognathellus focal OTUs.
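The simple (non-partial) Mantel test can be sketched compactly: correlate the upper triangles of two distance matrices and obtain a p-value by jointly permuting the rows and columns of one matrix. This is an illustration only, not the study's implementation, and the cave positions and genetic distances below are made up.

```python
import random
from itertools import combinations

def mantel(d1, d2, n_perm=999, seed=1):
    """Mantel test: Pearson correlation between the upper triangles of
    two distance matrices; p-value from jointly permuting the rows and
    columns of the second matrix."""
    n = len(d1)
    pairs = list(combinations(range(n), 2))
    x = [d1[i][j] for i, j in pairs]

    def corr(order):
        y = [d2[order[i]][order[j]] for i, j in pairs]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    r_obs = corr(list(range(n)))
    rng = random.Random(seed)
    hits = sum(corr(rng.sample(range(n), n)) >= r_obs for _ in range(n_perm))
    return r_obs, (hits + 1) / (n_perm + 1)

# Hypothetical cave positions along a transect (km) and genetic distances
# proportional to geography, i.e. a perfect IBD signal
pos = [0, 2, 3, 7, 11, 16]
geo = [[abs(a - b) for b in pos] for a in pos]
gen = [[0.01 * d for d in row] for row in geo]
r, p = mantel(geo, gen)
```

A partial Mantel test extends this by correlating the residuals of each matrix after regressing out a third (here, the barrier indicator versus genetic distance, controlling for geography).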
| Phylogenetic inference, divergence time estimation, and topology tests
To further investigate the interplay of vicariance and dispersal capacity on cave springtail diversity, we conducted a Bayesian phylogenetic analysis using BEAST 2 to infer evolutionary relationships and to estimate divergence times for all sampled lineages of Pygmarrhopalites and Pogonognathellus. Two independent datasets were analyzed and compared: the Pygmarrhopalites dataset (COI, 16S, 28S D1-3, 28S D7-10, histone-3; 3,358 total bp) and the Pogonognathellus dataset (COI, 16S, 28S D7-10; 2,059 total bp).
External rates were used for molecular clock calibrations rather than fossil information because springtails lack an adequate fossil record and phylogenetic framework for calibrating molecular clocks.
External rates followed Katz (2018). Following guidelines proposed by Kass and Raftery (1995), a twice-logarithm Bayes factor difference (2 × log_e BF) higher than 6 was considered strong evidence against the null hypothesis.
| Evidence for cryptic diversity
Uncorrected pairwise COI distance frequency histograms revealed extraordinarily high genetic distances among sampled specimens within each morphospecies: up to 35% for Pygmarrhopalites and 18% for Pogonognathellus (Figure 3). COI distances above 8%-15% in springtails are typically recognized as interspecific when used in combination with independent evidence (Katz et al., 2015). Moreover, COI distances form bimodal distributions for both morphospecies, each separated by a 10% gap (Figure 3), which can be interpreted as a boundary between intra- and interspecific genetic variation (Meier et al., 2008), providing preliminary support for the presence of cryptic diversity within both target morphospecies.
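The uncorrected pairwise distances underlying Figure 3 are straightforward to compute. A minimal sketch (an assumption for illustration: sequences are aligned, and sites with gaps or ambiguity codes are ignored):

```python
from itertools import combinations

def p_distance(seq1, seq2):
    """Uncorrected ('p') distance: fraction of differing sites among
    positions where both aligned sequences have an unambiguous base."""
    valid = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    if not valid:
        return float("nan")
    return sum(a != b for a, b in valid) / len(valid)

def pairwise_distances(seqs):
    """All pairwise p-distances, e.g. for a distance-frequency histogram."""
    return [p_distance(s1, s2) for s1, s2 in combinations(seqs, 2)]
```

A histogram of `pairwise_distances` values with a gap between its modes (here, the 10% gap of Figure 3) is the visual cue for separating intraspecific from interspecific variation.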
The results of the initial AMOVA that incorporated all sampled taxa identified high levels of genetic structure within caves and within samples, supporting the presence of sympatric cryptic species (Table 2): Between 40% and 60% of genetic variation in COI, 16S, and 28S was structured among samples within the same cave for both genera. Genetic variation in COI, 16S, and 28S (24%, 21%, and 29%, respectively) was also structured within samples for Pygmarrhopalites, but this pattern was not recovered for Pogonognathellus (COI, 2%; 16S, 0%; 28S, 41%).
The GMYC analyses revealed 14 putative species: 10 Pygmarrhopalites OTUs (A1-10) and four Pogonognathellus OTUs (T1-4) (Figure 6). Pygmarrhopalites A10 and Pogonognathellus T4 were chosen as focal OTUs (Figure 4). Pygmarrhopalites A10 exhibits troglomorphy (e.g., elongated antennae and a thread-like unguiculus) and was most similar to Pygmarrhopalites pavo (Christiansen & Bellinger, 1996), a troglobiont reported from caves in Virginia (Christiansen & Bellinger, 1996), West Virginia (Fong, Culver, Hobbs, & Pipan, 2007), Tennessee (Lewis, 2005), and Missouri (Zeppelini, Taylor, & Slay, 2009). We believe the unique differences in morphology, in combination with molecular evidence, support the recognition of all Pygmarrhopalites OTUs as distinct and potentially new species. Because some cryptic lineages may be of higher conservation concern, it is imperative to identify and describe these lineages for potential management initiatives (Delić, Trontelj, Rendoš, & Fišer, 2017; Niemiller, Graening, et al., 2013). However, we chose to refrain from giving OTUs formal species names at this time because a comprehensive taxonomic review is required to describe new species and to clarify the status of existing species, a task beyond the scope of this study.
| Phylogeny, divergence times, and topology tests
The rate-calibrated phylogenetic analysis based on the multilocus dataset produced trees with high support for all OTUs identified by the GMYC analysis, and molecular divergence time estimates revealed that all OTU diversification predated the Pliocene (Figure 6). The multilocus phylogeny also shows that two additional OTUs (Pygmarrhopalites A3 and A4) contain both Illinois and Missouri lineages, but did not form monophyletic groups by region relative to the Mississippi River. All other OTUs were short-range endemics, from a single cave (A1, A2, A6, A8, A9, T1-3) or from neighboring cave systems within the same sinkhole area (A5, A7) ( Figure 6).
TA B L E 5
Mantel test results (a, COI; b, 16S) to identify isolation-by-distance (IBD) patterns and correlations between genetic distance and geographic barriers after controlling for geographic distance in Pygmarrhopalites A10 and Pogonognathellus T4.
Mantel tests confirmed geographic distance to be a significant driver of genetic isolation for both taxa (Table 5), suggesting springtails are weak dispersers regardless of ecological classification. After controlling for geographic distance using partial Mantel tests, we still recovered significant positive correlations between genetic distance and sinkhole areas and between genetic distance and position relative to the Mississippi River for Pygmarrhopalites, but not for Pogonognathellus (Table 5). The haplotype networks (Figure 5), phylogenetic trees (Figures 6 and 7), and topology tests (Table 6) corroborate these findings, providing similar patterns of genetic structure. Relative to the Mississippi River Valley and sinkhole area boundaries, we observed a very different pattern when genetic variation was partitioned among caves: cave boundaries were identified as significant genetic barriers for Pogonognathellus only, whereas patterns of genetic structure among caves identified by the AMOVA for Pygmarrhopalites A10 (Table 4a) were not supported after accounting for geographic distance (Table 5). In this case, patterns of genetic structure among caves are driven by IBD for Pygmarrhopalites A10 (not Pogonognathellus T4), suggesting that troglobiotic Pygmarrhopalites are capable of dispersing between caves. Although this finding appears to contradict the hypothesis that troglobiotic species are less capable of dispersal across geographic barriers, it can still be explained by differences in cave habitat preferences.
Aquatic interstitial subterranean connections joining neighboring cave systems may enable subterranean dispersal during flooding events for Pygmarrhopalites A10. This is supported by shared 16S haplotypes between neighboring cave systems (PAC, HSC, and STC) ( Figure 5b). Groundwater connections (e.g., alluvial aquifers, epikarst systems) have been implicated as "interstitial highways" that can provide subsurface dispersal pathways for a wide range of subterranean arthropods (e.g., Lefébure et al., 2006;Ward & Palmer, 1994), but Collembola are not normally considered members of the interstitial groundwater community as they cannot complete life cycles while submerged (Deharveng, D'Haese, & Bedos, 2008).
However, growing evidence suggests that they are not only present in these habitats, but can occur in abundance and comprise diverse communities (Bretschko & Christian, 1989; Deharveng et al., 2008). Cave-to-cave subterranean dispersal is unlikely or infrequent for Pogonognathellus because species in this genus do not occur in interstitial habitats and prefer floor or wall surfaces near cave entrances rather than dark-zone habitats. This is supported by strong genetic structuring among caves for Pogonognathellus T4, indicating that cave-to-cave dispersal is extremely rare for this species despite naturally occurring surface populations that could presumably disperse across the surface.

F I G U R E 6 Time-calibrated trees for (a) Pygmarrhopalites and (b) Pogonognathellus inferred by Bayesian phylogenetic analysis. Clade posterior probabilities are indicated at each node. Divergence times are represented by blue bars at each node, with their length corresponding to the 95% HPD of node ages. OTUs identified by the GMYC analysis are indicated to the right of each clade (A1-A10 and T1-T4). Focal OTUs chosen for population structure analyses (A10 and T4) are highlighted in gray boxes (see Figure 7 for close-up of A10). Single-site endemic OTUs are labeled in red. Taxon labels correspond to cave name abbreviation, sample #, state, specimen # (see Table 1 for cave abbreviations and Appendix 1 for sample information). Scale bars represent substitutions/site/Ma.

To assess the effect of cave dependence on patterns of molecular variation, we were required to make informed assumptions about species ecology, including the classification of Pygmarrhopalites A10 as a troglobiont.
For many small cave-dwelling animals, such as springtails, it is often impossible to ascertain with certainty that a species only occurs in caves (Christiansen, 1962); a species reported only from caves could also be a common soil species, having yet to be reported from surface habitats; the distinction between cavernicolous habitats and other subsurface microhabitats may be weak or nonexistent for small animals; and troglobionts often lack obvious troglomorphy. Despite these concerns, we are confident that the combination of troglomorphy, close morphological affinities to known troglobiotic species, and their exclusive occurrence in dark or deep twilight cave zones (Supporting information Appendix S1) provides sufficient evidence that Pygmarrhopalites A10 is a troglobiont.
The degree of cave dependence is certainly a major factor influencing dispersal capacity (Christiansen & Bellinger, 1996), and males have also been reported for P. pavo, a species that is morphologically similar to Pygmarrhopalites A10 (Christiansen & Bellinger, 1996).
| Biogeography: evidence for vicariance across the Mississippi River Valley
The climatic and geological changes during the Pleistocene and their impacts on the distribution and diversity of North American cave fauna have been well documented (Porter, 2007). For example, the modern course of the Ohio River, formed by changing climate during the Pleistocene, bisects a major cave-bearing karst region along the Indiana-Kentucky border. Niemiller, McCandless, et al. (2013) demonstrated that this river is a major biogeographic barrier, facilitating the divergence and subsequent isolation and speciation of troglobiotic cavefish populations. Like the Ohio River, the Mississippi River has also been implicated as a "hard" geographic barrier to dispersal for many surface species (e.g., Soltis, Morris, McLachlan, Manos, & Soltis, 2006), but its influence on the evolutionary history of cave-dwelling organisms has yet to be evaluated, in part, because the geological history of the Mississippi River and its influence on regional cave-bearing karst remain poorly understood.

F I G U R E 7 Close-up of clade Pygmarrhopalites A10 from Figure 6a illustrating timing information from estimates of molecular divergence and geological evidence supporting vicariance across the Mississippi River Valley: (a) 2.90-4.76 Ma (95% HPD) (blue bar) divergence time between Missouri and Illinois lineages (separated by gray dashed line); posterior probabilities at each node lower than 1 are not displayed; (b) late Pliocene/early Pleistocene timing (dashed arrow) of initial Mississippi River entrenchment (Cupples & Van Arsdale, 2014) and increased river discharge (Cox et al., 2014); (c) 3.25 ± 0.26 Ma (green column) timing of initial Green River karst incision and excavation (Granger et al., 2001); (d) 2.41 ± 0.14 Ma (orange column) timing of first glacial melt (Balco et al., 2005).
Molecular divergence times of Pygmarrhopalites A10 populations spanning the Mississippi (Figures 6 and 7), patterns of genetic structure (Tables 4c and 5; Figure 5), and topology tests suggest that, shortly after the incision of the Green River karst, a similar process took place for the Mississippi River. The corroboration of timing information derived from both biological and geological data (Figure 7) supports the hypothesis that climatic and geological events beginning in the late Pliocene initiated and maintained genetic isolation between troglobiotic springtail populations in Illinois and Missouri, but the exact mode of gene flow across the preglacial Mississippi River and tributaries, prior to their genetic isolation, is not known. It is plausible that sections of karst were periodically isolated and rejoined by shifting meanders and periods of low flow, later removed by Plio-Pleistocene entrenchment and excavation, providing intermittent subterranean passage for cave organisms until the late Pliocene or early Pleistocene.
The lack of genetic structure across the Mississippi River (Tables 4c and 5; Figure 5) and nonmonophyly by region must nevertheless be interpreted with caution. Over-reliance on mtDNA can produce misleading phylogenetic and biogeographic conclusions due to introgression, hybridization, paternal inheritance, and incomplete lineage sorting (Funk & Omland, 2003), but none of these processes have been reported for Collembola, except hybridization (Deharveng, Bedos, & Gisclard, 1998; Skarzynski, 2004). The development of more sensitive, nondestructive DNA extraction and genomic sequencing methods will certainly help alleviate these issues, improve the precision and accuracy of divergence time analyses, and bring springtail genetics into the big-data era.
| Cryptic diversity, short-range endemism, and implications for conservation
Recent discoveries of cryptic species have challenged our current understanding of biological diversity (Fišer, Robinson, & Malard, 2018), and this paradigm shift is particularly evident in subterranean habitats where ideal conditions have fostered widespread cryptic speciation, including examples of recent divergence in cavefish (Niemiller, McCandless, et al., 2013), morphological stasis in amphipods (Trontelj et al., 2009), and morphological convergence in springtails (Christiansen, 1961). Therefore, it was important in this study to detect the presence of cryptic diversity and delimit OTUs prior to phylogeographic comparisons, to avoid interpreting interspecific variation as population-level genetic structure. Large gaps in genetic distance frequencies (Figure 3) and the presence of strong interspecific genetic structure within caves (Table 2) provided such evidence. The detection of short-range endemics, genetic isolation, and apparent cryptic diversity has major conservation implications.
Reduced dispersal capacity observed for Pygmarrhopalites can increase their susceptibility to human disturbances such as land use practices, climate change, pollution, and invasive species-all of which pose major threats to fragile cave ecosystems (Culver & Pipan, 2009a;Taylor & Niemiller, 2016). In fact, growing concerns of karst groundwater contamination (Panno, Krapac, Weibel, & Bade, 1996) prompted Pygmarrhopalites madonnensis (Zeppelini & Christiansen, 2003), a troglobiotic springtail known from a single cave in Monroe Co., Illinois, to be listed as state endangered (Mankowski, 2010). This is concerning considering that our data indicate that single-site endemics are not only extremely common but may also comprise a large majority of troglobiotic springtail diversity throughout this region. Lastly, unrecognized cryptic species complexes with allopatric ranges, presumed to be a single widely distributed species, may lead to misguided biodiversity conservation and management decisions.
| CONCLUSIONS
Salem Plateau caves and their springtail inhabitants provide a model system for comparative phylogeographic studies addressing important questions in evolution and subterranean biogeography. We characterized and compared patterns of molecular diversity between species in the genera Pygmarrhopalites and Pogonognathellus, which led to three important findings. First, conflicting phylogeographic patterns between troglobiotic and eutroglophilic species distributed across the same geographic barriers suggest that different degrees of cave dependence can have major impacts on the dispersal capacity and genetic connectivity of cave organisms. Second, estimates of genetic structure and molecular divergence indicate that climatic and geological processes during the late Pliocene/early Pleistocene were major factors driving isolation between populations of troglobiotic cave organisms in Salem Plateau karst spanning the Mississippi River in Illinois and Missouri. Lastly, the large number of deeply divergent lineages and high rates of short-range endemism detected in this study exposes a major knowledge gap in our understanding of cave microarthropod diversity and highlights potential conservation concerns under growing threats to cave biodiversity. Additional phylogeographic research and the development of genomic datasets for cave springtails will further contribute to our understanding of how and why organisms occupy, persist in, and adapt to cave environments, information critical for the development and implementation of conservation strategies needed to manage and protect cave biodiversity (Porter, 2007).

Institute. We would also like to thank the faculty, staff, and graduate students of the Department of Entomology at the University of Illinois at Urbana-Champaign for their support.
AUTHOR CONTRIBUTIONS
A.D.K. contributed to research design, collected and analyzed data, and wrote the manuscript. S.J.T. conceived of the project, contributed to research design, provided access to cave sites, and assisted in data collection and manuscript writing. M.A.D. assisted in writing the manuscript and provided substantial molecular laboratory resources that contributed to data collection.
DATA ACCESSIBILITY
All DNA sequence data from this study have been submitted to GenBank and are available under accession numbers MH269419-MH269696 and listed in Supporting information Appendix S1.
"year": 2018,
"sha1": "701ca774ea7e369467e49620ba1fc39f823621a7",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.4507",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "701ca774ea7e369467e49620ba1fc39f823621a7",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
Recovery efficiency and characteristics of long core by CO2 flooding in low permeability sandstone reservoirs
CO2 flooding is an economical and efficient enhanced oil recovery technology; however, it is difficult for conventional short core experiments to provide accurate parameters for later on-site optimization schemes. In this paper, on the basis of long core flooding experiments and NMR technology, we quantitatively evaluated the recovery efficiency and remaining oil distribution characteristics of CO2 flooding in low-permeability sandstone reservoirs from a microscopic perspective, and explored the potential of CO2 flooding in these reservoirs. Results showed that as the CO2 injection pressure increased, the recovery efficiency of the long core increased, reaching an ultimate recovery of 65.3%. The recovery efficiencies of the short cores at the inlet, middle, and outlet decreased successively, but the differences were small, and all were close to the long-core recovery efficiency. In addition, at low injection pressure, almost all CO2 entered the larger pores to displace oil. As pressure increased, oil started to be produced from the smaller pores, but at 22 MPa the recovery efficiency in larger pores (76.88%-83.38%) was still higher than that in smaller pores (68.73%-72.74%). These results provide a guide for optimizing CO2 enhanced oil recovery in the field.
Introduction
In recent years, low permeability resources have become a major part of the global crude oil supply. As a result of poor physical properties, thin pore throats, the presence of a threshold (start-up) pressure gradient, and related factors, conventional waterflooding development of low permeability reservoirs suffers from problems such as high waterflooding pressure, rapid rise of water cut, high waterflooding cost, serious permeability reduction, and low productivity, which make it very difficult to stabilize and increase oil production [1].
CO2 has the advantages of strong injection capacity, a large expansion coefficient, and good miscibility with crude oil, and can significantly enhance oil recovery [2-4]. At present, the influential factors and characteristics of CO2 flooding have been extensively studied. However, it is difficult for conventional short core experiments to provide accurate parameters for later on-site optimization schemes, so some scholars use long core experiments to evaluate the oil recovery effect [5,6].
In recent years, nuclear magnetic resonance (NMR) technology has been applied frequently in oil and gas fields to evaluate recovery efficiency and residual oil distribution characteristics [7,8], but few studies have combined NMR technology with long core flooding experiments. In this paper, on the basis of long core flooding experiments and NMR technology, we quantitatively evaluated the recovery efficiency and remaining oil distribution characteristics of CO2 flooding in low-permeability sandstone reservoirs from a microscopic perspective, and explored the potential of CO2 flooding in low-permeability sandstone reservoirs, providing a guide for optimizing the CO2 enhanced oil recovery method in the field.
Preparation of long core
In order to complete the single-pipe core flooding experiment, conventional short cores were arranged by the harmonic average method to prepare the long core [6]. Each core was connected to the next with filter paper to eliminate end effects. The information on the spliced cores is shown in Table 1.
The steps of long core preparation are as follows: ① The long core permeability is calculated from equation (1), the harmonic average for flow through cores in series:

K = L / (L1/K1 + L2/K2 + … + Ln/Kn)  (1)

where K is the long core permeability; K1, K2…Kn are the short core permeabilities; L is the long core length; and L1, L2…Ln are the short core lengths. ② By comparing the permeability of each conventional short core with that of the long core, the short core whose permeability is closest to the long core permeability is placed first, at the outlet end.
③ Steps ① and ② are repeated until the remaining short cores have all been sequentially arranged into the core holder.
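Steps ①-② can be sketched numerically. The harmonic average of equation (1) is standard for series flow; the ordering helper below is a hypothetical reading of step ② (sorting by closeness of each core's permeability to the long-core average, outlet end first):

```python
def harmonic_permeability(lengths, perms):
    """Equation (1): equivalent permeability of cores in series,
    K = L / (L1/K1 + L2/K2 + ... + Ln/Kn), with L = sum(Li)."""
    if len(lengths) != len(perms) or not lengths:
        raise ValueError("lengths and perms must be equal-size, non-empty")
    return sum(lengths) / sum(l / k for l, k in zip(lengths, perms))

def arrange_outlet_first(cores):
    """Order (length, permeability) pairs so the core whose permeability
    is closest to the long-core average is placed first (outlet end)."""
    k_long = harmonic_permeability([l for l, _ in cores], [k for _, k in cores])
    return sorted(cores, key=lambda c: abs(c[1] - k_long))
```

Note that the harmonic average is dominated by the lowest-permeability segments, which is why it, rather than the arithmetic mean, represents cores stacked in series.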
Preparation of experimental oil, formation water and CO2 gas
The experimental oil was prepared by mixing crude oil samples from the Chang-2 reservoir of Dingbian Oilfield with kerosene at a 1:1 volume ratio; it has a viscosity of 7.1 mPa·s and a minimum miscibility pressure (MMP) with CO2 of 17.8 MPa. The experimental formation water was prepared based on actual water quality monitoring data of the Chang-2 reservoir in Dingbian Oilfield, with a salinity of 25,000 mg/L. The purity of the CO2 gas was 99.9%.
Experimental setup
As shown in Figure 1, the power is provided by the syringe pump, which is capable of continuously providing high accuracy, constant speed or constant pressure fluid for a long time, and it has a maximum working pressure of 150 MPa and an accuracy of 0.001 mL/min. The experimental temperature is maintained by the thermostatic oven, which can provide a maximum temperature of 200°C with an accuracy of 0.1°C. The maximum core length that the long core holder can hold is 120 cm and the maximum pressure it can hold is 35 MPa. The produced fluid is separated by the oil and gas separator and then enters the gas flow meter, which has an accuracy of 0.001 mL.
Experimental steps
① Core preparation. The selected cores were deeply cleaned with petroleum ether and benzene for 120 h, and the cleaned cores were dried in the thermostatic oven at 120 °C for 24 h. The length, diameter, and permeability of the cores were then measured, as shown in Table 1, and the cores were dried again after testing (120 °C, 24 h). The short cores were spliced into a long core by the harmonic average method and renumbered in order of arrangement, as shown in Table 1.
② Saturation with formation water. Simulated formation water was prepared according to the reservoir formation water composition. To completely saturate the core, formation water was injected at a constant flow rate of 0.05 mL/min until the injected volume exceeded twice the core volume. The core porosity was calculated at this stage, as shown in Table 1. NMR T2 spectra were then acquired for the core saturated with formation water.
③ Saturation with Mn2+ solution. A Mn2+ solution with a concentration of 15,000 mg/L was prepared and injected at a constant flow rate of 0.05 mL/min until the injected volume reached 3-4 PV. NMR T2 spectra were then acquired after Mn2+ flooding to confirm elimination of the water signal.
④ Saturation with experimental oil. The original oil-water distribution was established by injecting the experimental oil at a constant flow rate of 0.05 mL/min until the oil content of the produced liquid was 100%. NMR T2 spectra were then acquired for the oil-saturated core.
⑤ CO2 flooding. The injection pressure was stabilized at 7.2 MPa, 12 MPa, 17 MPa, or 22 MPa by controlling the backpressure valve, and CO2 was injected at a constant flow rate of 0.05 mL/min until no crude oil appeared in the produced liquid. NMR T2 spectra were then acquired after CO2 flooding.
⑥ After flooding, the cores were re-cleaned and dried as in step ①. The injection pressure was then changed and steps ②-⑤ repeated.
Recovery efficiency
The recovery efficiencies of the long core (core L#) and of the short cores at the inlet (core S1#), middle (core S8#), and outlet (core S16#) were calculated, where the recovery efficiency of the short cores was calculated from the NMR T2 spectra and that of the long core from the volume of produced fluid.
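The NMR-based calculation can be sketched as follows, under the assumption (consistent with the Mn2+ step above, which quenches the water signal) that the summed T2 amplitude is proportional to the oil volume in the core:

```python
def recovery_from_t2(amp_oil_saturated, amp_after_flood):
    """Recovery efficiency from T2 spectra: 1 minus the ratio of the
    summed amplitude after flooding to that at full oil saturation.
    Assumes the water signal has been quenched, so amplitude ~ oil."""
    a0 = sum(amp_oil_saturated)
    a1 = sum(amp_after_flood)
    if a0 <= 0:
        raise ValueError("saturated-state spectrum has no signal")
    return 1.0 - a1 / a0
```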
As shown in Figure 2, the recovery efficiency of core L# increases with CO2 injection pressure. At 7.2 MPa, the recovery efficiency is only 6.4%. As the pressure rises to the MMP and beyond, the potential of core L# is activated, and the recovery efficiency increases sharply, reaching 65.3% at 22 MPa. This is because as injection pressure increases, the solubility of CO2 in crude oil increases, which reduces crude oil viscosity and capillary pressure, allowing the crude oil to flow more easily. Moreover, once the injection pressure reaches the MMP, the CO2 flooding mechanism changes: its extraction capacity is greatly enhanced and the interfacial tension with crude oil is greatly reduced, resulting in significantly higher recovery efficiency.
As shown in Figure 3, the recovery efficiencies of cores S1#, S8#, and S16# increase with CO2 injection pressure, similar to core L#. In addition, the recovery efficiencies of cores S1#, S8#, and S16# decreased successively, but the differences were small, and all were close to the recovery efficiency of core L#. This is because CO2 displaced the crude oil in cores S1#, S8#, and S16# successively during the flooding process, and the flooding time was long enough for CO2 to fully displace the crude oil throughout core L#. Therefore, there is little difference between the final recovery efficiencies at the inlet and outlet.
Figure 3. The relationship between recovery efficiency of cores S1#, S8#, S16#, and L# and pressure.
Remaining oil distribution characteristics
The T2 spectra of the short cores at the inlet (core S1#), middle (core S8#), and outlet (core S16#) were selected to analyze recovery efficiency and remaining oil distribution characteristics in larger and smaller pores, as shown in Figures 4-6. The variation trends of the T2 spectra and of recovery efficiency in larger and smaller pores are similar for cores S1#, S8#, and S16#. At 7.2 MPa, essentially all CO2 entered the larger pores: the T2 amplitude decreased significantly in the larger pores, where the recovery efficiency was 6.71%-8.73%, while the T2 amplitude changed little in the smaller pores, where the recovery efficiency was only 1.71%-2.1%. When the pressure increased to 12 MPa, the T2 amplitude decreased significantly in the smaller pores, indicating that crude oil also started to be produced from the smaller pores at this stage, but the recovery efficiency in the smaller pores (11.07%-14.33%) was still significantly lower than in the larger pores (18.97%-21.63%). This is because CO2 preferentially enters the larger pores with lower resistance, and only when the pressure accumulated in the larger pores is sufficient to overcome the capillary pressure does it enter the smaller pores and displace the crude oil. As the pressure rises to the MMP and beyond, the T2 amplitude is significantly reduced in both larger and smaller pores, and the recovery efficiency is significantly enhanced. This is because at this stage the extraction capacity of CO2 is significantly enhanced and the interfacial tension disappears, making it easier for CO2 to enter the smaller pores with higher original resistance and displace the crude oil, which greatly improves the recovery efficiency. It can also be seen that at 22 MPa, the recovery efficiency in the larger pores (76.88%-83.38%) was still greater than in the smaller pores (68.73%-72.74%).
This is mainly because, even at the miscible stage, CO2 preferentially enters the larger pores, where miscible flooding develops first before diffusing to the surrounding pores.
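The larger-pore/smaller-pore split above can be illustrated by partitioning the T2 spectrum at a relaxation-time cutoff. The cutoff value below (10 ms) is a hypothetical placeholder, since the paper does not state the cutoff it used:

```python
import numpy as np

def recovery_by_pore_class(t2_ms, amp_sat, amp_after, cutoff_ms=10.0):
    """Per-pore-class recovery: smaller pores (T2 < cutoff) vs larger
    pores (T2 >= cutoff), each computed as
    1 - (post-flood amplitude) / (oil-saturated amplitude).
    cutoff_ms is an assumed illustrative value, not from the paper."""
    t2 = np.asarray(t2_ms, dtype=float)
    sat = np.asarray(amp_sat, dtype=float)
    aft = np.asarray(amp_after, dtype=float)
    result = {}
    for name, mask in (("smaller", t2 < cutoff_ms), ("larger", t2 >= cutoff_ms)):
        result[name] = 1.0 - aft[mask].sum() / sat[mask].sum()
    return result
```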
Conclusion
In this paper, on the basis of long core flooding experiments and NMR technology, the recovery efficiency and remaining oil distribution characteristics of CO2 flooding in low-permeability sandstone reservoirs were quantitatively evaluated from a microscopic perspective, and the potential of CO2 flooding in low-permeability sandstone reservoirs was explored, providing a guide for optimizing the CO2 enhanced oil recovery method in the field. The conclusions are as follows: (1) The recovery efficiency of the long core increases with CO2 injection pressure, and the ultimate recovery was 65.3%.
(2) The recovery efficiencies of the short cores at the inlet, middle, and outlet decreased successively, but the differences were small, and all were close to the recovery efficiency of the long core.
(3) At low injection pressure, almost all CO2 entered the larger pores to displace oil. As pressure increased, oil started to be produced from the smaller pores, but at 22 MPa the recovery efficiency in larger pores (76.88%-83.38%) was still higher than that in smaller pores (68.73%-72.74%).
"year": 2023,
"sha1": "4232c8551d1bd76e70b779b66871525ef877d844",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/22/e3sconf_isesce2023_01027.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "199a54a20e8f03d97b16b1015027484a43709eb2",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
} |
The Effectiveness of Meloxicam Adjuvant Therapy against Negative Symptoms and Neutrophil Lymphocyte Ratio (NLR) in Schizophrenic Patients
Introduction
Schizophrenia is a psychotic disorder characterized by psychotic symptoms accompanied by impaired cognitive and social functioning. The symptoms are divided into positive and negative symptoms. Although antipsychotics are effective against positive symptoms, there are still many limitations in the response and resistance to treatment of negative symptoms. 1 Some works suggest that the pathophysiology of schizophrenia is related to abnormalities of cytokines and the immune system, which play an important role in schizophrenia management. The etiology of schizophrenia remains unclear, but evidence has been found to support this hypothesis.
Neuroinflammation-induced glutamate and N-methyl-D-aspartate receptor (NMDAR) dysfunction may contribute to the etiology of schizophrenia. 2 The role of neuroinflammation in schizophrenia has been elaborated, and it is postulated that microglial activation responds to small pathological changes in the brain by releasing proinflammatory cytokines. Persistent microglial hyperactivity causes neuronal apoptosis, neuronal degeneration, and brain damage. If anti-inflammatory regulators cannot balance the pro-inflammatory reaction, the inflammation persists and coexists with neuropsychiatric symptoms. 3 The neutrophil-lymphocyte ratio (NLR) is a simple and affordable inflammation marker obtained from complete blood counts, and its pathogenetic role has been investigated in a broad spectrum of diseases. NLR is the value obtained by dividing the absolute number of neutrophils by the absolute number of lymphocytes, with a normal range in healthy adults of 0.78 to 3.53. Increased NLR has been associated with increased cytokines and C-reactive protein (CRP) and is widely used in the literature as a marker of an ongoing systemic inflammatory process. 3 High NLR positively correlates with elevated IL-8 and IL-6 in patients with liver cirrhosis, 4 laryngeal cancer, 5 and ovarian cancer. 6 A meta-analysis that evaluated the effectiveness and tolerability of non-steroidal anti-inflammatory drugs (NSAIDs) as adjuvant therapy in treating schizophrenia showed that adjuvant NSAID therapy outperformed placebo with respect to positive symptoms, negative symptoms, total psychopathology, and general psychopathology scores.
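The NLR itself is a single division of absolute counts. A minimal sketch, using the healthy-adult reference range quoted above (0.78-3.53) as the comparison interval:

```python
def nlr(neutrophils_abs, lymphocytes_abs):
    """Neutrophil-lymphocyte ratio from absolute counts (same units)."""
    if lymphocytes_abs <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils_abs / lymphocytes_abs

def classify_nlr(value, low=0.78, high=3.53):
    """Compare an NLR value with the healthy-adult range cited in the text."""
    return "within reference range" if low <= value <= high else "outside reference range"
```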
2 Another meta-analysis that comprehensively explored the effects of anti-inflammatory agents in schizophrenic patients demonstrated a significant reduction in negative symptoms with anti-inflammatory augmentation therapy; overall, anti-inflammatory agents significantly improved general function.
Total PANSS score and disease duration were identified as moderating factors in evaluating the effect of anti-inflammatory augmentation on improving psychiatric symptoms. 1 Non-pharmacological therapies such as cognitive behavioral therapy (CBT) and motivation and engagement training (MOVE) could optimize treatment efficacy. 7 Other therapies, such as cognitive enhancement therapy (CET), could improve the quality of life of schizophrenic patients, which can also be achieved with pharmacotherapy combinations. 8 Family models could also be applied in schizophrenia management. This concept helps families manage their stress by reducing burden and stigma, so that families can survive, rise, become stronger, and provide better care for schizophrenic patients. 9 Schizophrenia has no quick cure and requires long-term treatment. High treatment adherence may reduce the symptoms of schizophrenia and guard against relapse. Family support is essential to ensure that patients continue to take their medication regularly. 10 This study aimed to determine and analyze meloxicam as an adjuvant therapy to improve negative symptoms and changes in the NLR in schizophrenic patients.
Methods
This was a quasi-experimental study using a single-blind, pretest-posttest design, conducted to determine the effect of adjuvant meloxicam on NLR levels and negative subscale PANSS scores in schizophrenic patients receiving risperidone-chlorpromazine therapy in the Inpatient Unit of the Psychiatric Department, Dr. Arif Zainuddin Regional Mental Hospital (RSJD), Surakarta, from May to July 2020.
The sample size in this study was 34 subjects who met the inclusion and exclusion criteria, randomly divided into two groups: a treatment group of 17 subjects and a control group of 17 subjects. Inclusion criteria were a) schizophrenic patients hospitalized at RSJD from May to July 2020, b) patients receiving treatment with the antipsychotic combination risperidone-chlorpromazine, and/or intramuscular haloperidol injection, c) patients aged 18-40 years old, and d) patients with caregiver approval. Exclusion criteria were a) schizophrenic patients with organic disorders such as epilepsy, stroke, and mental retardation, or head injury with a history of decreased consciousness, b) substance and alcohol abuse, c) use of anti-inflammatory drugs or steroids within the past month, and d) prior electroconvulsive therapy (ECT) or transcranial magnetic stimulation (TMS).
Subjects in the treatment group received adjuvant meloxicam 15 mg per day for 4 weeks in addition to antipsychotic therapy with the risperidone-chlorpromazine combination, while subjects in the control group received antipsychotic therapy with the risperidone-chlorpromazine combination alone. Research subject data were obtained from NLR examinations and negative subscale PANSS score assessments. After all research data had been collected, data analysis was performed using the Statistical Package for the Social Sciences (SPSS) 25.0 program. The study yielded data on the demographic characteristics of the research subjects and the results of the pre- and post-test NLR examinations and PANSS negative subscale assessments.

Table 2. The effect of meloxicam on changes in the negative subscale PANSS score of the treatment and control groups pre- and post-therapy. Source: Research data, processed.

Table 3. The effect of meloxicam on NLR changes in the treatment and control groups pre- and post-therapy. Source: Research data, processed.
Demographic Characteristics of Research Subjects
The demographic assessment of the research groups, consisting of age, gender, and education level, showed equality, as shown in Table 3. Based on the statistical tests, the research subjects were homogeneous with respect to gender (p = 0.225), age (p = 0.097), education (p = 0.668), and pre-test NLR scores (p = 0.091), but the negative subscale PANSS pre-test score (p = 0.002) was not homogeneous. Gender is also one of the risk factors for other mental illnesses, such as anxiety. 11 Based on the normality test, the distribution of the negative subscale PANSS scores of the treatment and control groups was not normal at baseline; in this respect, the sample was not homogeneous. However, the distribution of the initial NLR values in the treatment and control groups was homogeneous based on the normality test.
This study found that both the treatment and control groups experienced a decrease in the negative subscale PANSS score. The decrease in the treatment group was smaller than in the control group. Statistical analysis showed that the difference in the decrease in the negative subscale PANSS score was not significant. Therefore, administering adjuvant meloxicam 15 mg/day to reduce the negative subscale PANSS score in schizophrenic patients hospitalized at RSJD was not statistically significant, although it was considered clinically significant.
These results contradict a previous study by Purwono (2018), which stated that the addition of meloxicam therapy was effective in reducing hs-CRP levels and improving PANSS scores. 12 A similar study by Müller, et al. (2008) also suggested that NSAID adjuvant therapy improved negative symptoms in schizophrenic patients. 13
Discussion
Schizophrenia is a complicated and severe brain disorder, with a reported median incidence of 15.2 per 100,000 persons. Its prevalence in China rose from 0.39% (0.37%-0.41%) in 1990 to 0.57% (0.55%-0.59%) in 2000 and 0.83% (0.75%-0.91%) in 2010. 14 Meanwhile, in Southeast Asia, the number of schizophrenic patients increased from about 2 million people in 1990 to almost 4 million people in 2016, a nearly twofold increase. 15 NLR is currently known as a marker calculated from complete blood counts; it takes a pathogenetic role and has been investigated in a broad spectrum of diseases. Increased NLR has been associated with increased cytokines and CRP and is widely used as a marker of the presence of systemic inflammation. 3 Adjunctive anti-inflammatory therapy may be beneficial for schizophrenic patients, specifically those in the early stages of the disease. 16 NSAID therapy gives promising results and shows a more beneficial treatment effect when standard antipsychotic therapy is given together with anti-inflammatory drugs, compared to a single antipsychotic, in improving negative symptoms in schizophrenic patients. This finding may reflect a complex interaction between anti-inflammatory effects and modulation of glutamatergic and dopaminergic systems by a COX-2 inhibitor. 17 Various other reasons have also been proposed. 18 Adjunctive anti-inflammatory and antioxidant therapy may also increase the benefits in schizophrenic patients who are still in the early stages of the disease. 19 There was a change in the mean NLR values in the two research groups: the treatment group showed a decrease in the NLR value, while the control group showed an increase in the post-test NLR value. Statistical analysis showed no significant difference in the decrease in NLR values.
Thus, the addition of meloxicam 15 mg/day to reduce the NLR value in schizophrenic patients hospitalized at RSJD was not statistically significant, although it was considered clinically significant. This is in accordance with several previous studies. However, one study revealed an association between NLR and the positive symptoms of schizophrenia, showing that NLR is elevated in schizophrenic patients. 20 Elevated NLR can reflect the immunological process found in psychiatric patients. 21 The NLR level also reflects the severity level of schizophrenic patients; in other words, the NLR level could be used as a biomarker. 22 Another parameter that is also increased in schizophrenic patients is the monocyte-lymphocyte ratio (MLR). 23 In an RCT conducted by Jaehne, et al. (2015), there was a decrease in the inflammatory response in schizophrenic patients who received antipsychotic treatment, especially in periods of remission. 24 On the other hand, the chronic inflammatory process in schizophrenia is reflected in the fact that inflammatory markers such as NLR do not return to normal in the remission phase, and the inflammatory process continues even during periods of remission. 25 The increase in serum cortisol levels in chronic schizophrenic patients causes a decrease in the number of lymphocytes compared to periods of relapse and remission. 26 In addition, theoretically, NLR will not be a reliable marker in patients with a history of clozapine treatment associated with agranulocytosis. This may be due to the generalized inflammatory response reported in antipsychotic-treated patients resulting in granulocytosis.
The patients in this study were not evaluated for confounding factors, such as previous history of antipsychotic medication, inflammatory disease that may have occurred between periods of relapse or remission, or how long and how often the patients had relapses. A meta-analysis conducted by Karageorgiou, et al. (2019) examined the association between NLR and schizophrenia in ten studies (804 schizophrenic patients and 671 controls). 3 In schizophrenic patients, the NLR was increased by 0.65. Several studies on schizophrenia and its relationship with NLR, of both moderate and high quality, showed a significant increase in NLR in schizophrenic patients (heterogeneity = 0%).
A multicenter cross-sectional study of 156 schizophrenic patients and 89 healthy control subjects used complete blood counts and assessed clinical severity with the Brief Psychiatric Rating Scale. 27 The results showed that the NLR of schizophrenic patients was significantly higher than that of healthy controls (2.6 ± 1.1 vs. 1.9 ± 0.6, respectively, p < 0.001). NLR did not significantly correlate with the severity and duration of schizophrenia (r = 0.065, p > 0.05). The occurrence of aggressive behavior in schizophrenic patients can be a sign of severity; in this case, NLR can be used as a biomarker to quickly evaluate the risk of aggression. 28 An elevated mean NLR was also observed in patients with a first episode of psychosis (FEP), which can be helpful for identifying inflammatory imbalance through NLR as a biomarker. 29 A study that involved 52 schizophrenic patients and 53 healthy subjects revealed that the numbers of neutrophils, leukocytes, and monocytes, and the NLR values, were higher in schizophrenic patients than in the control group. However, the NLR values did not show a significant relationship with illness duration, disease severity, or number of hospitalizations. 30 A literature review also revealed the potential of mean NLR in other psychological conditions, such as suicidality. After controlling for variables such as sex, age, and the severity of depression in 393 patients, the NLR was significantly associated with suicidal behavior. NLR might be cost-effective, accessible, and easily reproducible for daily practice. 31 Suicidal thoughts were more prevalent in those who were younger, had longer-lasting illness, scored lower on the Global Assessment of Functioning (GAF) scale, and were largely female, unemployed, lower-income, and less educated. 32 Schizophrenic patients can be classified into two groups: FEP and chronic disease.
Even though the mean NLR can be used as a biomarker, it might be difficult to analyze its accuracy due to antipsychotic use. Therefore, using NLR in clinical practice requires standard normal values in the general population. 3 A scoping review revealed an interesting fact: even when an individual uses antipsychotic therapy, the NLR value appears to be increased and significantly correlates with schizophrenia-positive symptoms. 33 Biomarkers of schizophrenia are classified into central and peripheral biomarkers. However, some biomarkers collected from post-mortem brains have also been found as blood-based biomarkers, which points to the usefulness and importance of blood biomarkers for deciphering processes in the brain. 34 Schizophrenia is a serious mental disorder that affects as many as 20 million people worldwide, and many factors contribute to this disease. Untreated patients can experience frequent hospital admissions, decreased quality of life, decreased social function, and decreased life expectancy. A psychiatrist will treat patients whose behavioral abnormalities are reported, and these can be improved. However, many patients remain untreated because cases are underreported; thus, a potential biomarker that can be applied may be important to reduce the number of untreated patients. 34 Any existing biomarker should be evaluated regularly to increase its accuracy, since biomarkers may still have various error levels. The heterogeneity of schizophrenia increases the potential of applying multiple biomarkers as diagnostic tools. 35 Combining different markers or complex multi-marker panels can differentiate patients with different underlying diseases and better classify more homogeneous groups. 36 Schizophrenia is a chronic, severe, and disabling neurological disorder with various genetic and neurobiological histories. Symptoms of schizophrenia can be differentiated into two groups: negative and positive symptoms.
Clinical manifestations of positive symptoms include a) delusions, b) hallucinations, and c) disorganized behaviors. Meanwhile, negative symptoms include a) poverty of speech, b) decreased affect, and c) loss of interest and motivation; cognitive symptoms may also be present. 37 This study has some limitations: a) the number of samples was relatively small, and a similar study would require a larger sample size; b) the duration of illness and the number of relapses of the schizophrenic patients were not recorded, so the effectiveness of adjuvant meloxicam at different degrees or stages of severity of schizophrenia, as reflected in the negative subscale PANSS scores and NLR values, remains unknown; c) the negative subscale PANSS score was assessed only at the beginning and end of treatment, so it is not known exactly when the negative subscale PANSS score began to decline; and d) between evaluations, there was no specific monitoring of secondary infectious processes that may occur during hospitalization, which can affect the effectiveness of meloxicam and influence the NLR value.
Strength and Limitations
The strength of this study was that sample selection was performed randomly and that interrater assessors performed the PANSS assessment. The limitation was that the length of history of schizophrenia was not included; hence, the effectiveness of adjuvant meloxicam at different degrees of severity of schizophrenia is not known.
Conclusion
There was a difference in the decrease in the negative subscale PANSS score between the treatment and control groups, with the control group showing a greater mean decrease. There were also differences in the changes in NLR values: the treatment group showed a mean decrease in the post-test NLR value, while the control group experienced an increase in the post-test NLR. It was concluded that adjuvant meloxicam 15 mg/day improved negative symptoms and NLR values of schizophrenic patients hospitalized at RSJD clinically, but not statistically significantly.
"year": 2023,
"sha1": "3f970c2e7d0602ab53099d1366be7f7711712eca",
"oa_license": "CCBYSA",
"oa_url": "https://e-journal.unair.ac.id/JUXTA/article/download/37369/26009",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "932a308c51b6f9d2af4791725a4a08ca0e69e456",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Probabilistic Zero Forcing on Grid, Regular, and Hypercube Graphs
Probabilistic zero-forcing is a coloring process on a graph. In this process, an initial set of vertices is colored blue, and the remaining vertices are colored white. At each time step, blue vertices have a non-zero probability of forcing white neighbors to blue. The expected propagation time is the expected amount of time needed for every vertex to be colored blue. We derive asymptotic bounds for the expected propagation time of several families of graphs. We prove the optimal asymptotic bound of $\Theta(m+n)$ for $m\times n$ grid graphs. We prove an upper bound of $O \left(\frac{\log d}{d} \cdot n \right)$ for $d$-regular graphs on $n$ vertices and provide a graph construction that exhibits a lower bound of $\Omega \left(\frac{\log \log d}{d} \cdot n \right)$. Finally, we prove an asymptotic upper bound of $O(n \log n)$ for hypercube graphs on $2^n$ vertices.
Introduction
Zero-forcing is a widely studied coloring process on a graph. Initially, some vertices in a graph G are colored blue, while other vertices are white. At each time step, each blue vertex u connected to a white vertex v changes the color of v to blue if v is the only white neighbor of u. When this happens, we say that u forces v. At a given time step, a white vertex v may be forced by one or more blue vertices, in which case it will become blue. Every vertex that is colored blue will always remain blue, and it may force other white vertices to blue in future time steps. The concept of zero-forcing has been used to attack the maximum nullity problem of combinatorial matrix theory in [1], [3], [6], and [12]. Zero forcing is also related to power domination [2] and graph searching [16]. Viewing zero forcing through the lens of dynamical processes, Fallat et al. [8] and Hogben et al. [11] have studied the number of steps it takes for an initial vertex set to force all other vertices to blue, assuming that all the vertices will eventually become blue under the zero forcing rule. This is known as the propagation time of a set. Zero forcing can potentially model certain real-world propagation processes such as rumor spreading. That being said, the deterministic nature of zero-forcing presents an obstacle for simulating the seemingly random process of rumor spreading in the real world.
The probabilistic color change rule is a probabilistic modification of the classic zero-forcing coloring rule introduced by Kang and Yi [13]. For every blue vertex u connected to a white vertex v, u forces v with probability Pr[u → v] = C[u]/deg u, where C[u] denotes the number of blue vertices in the closed neighborhood of u including u itself, and deg u is the total number of vertices connected to u. When some blue vertex has exactly one white neighbor, note that the probabilistic color change rule corresponds to the classical color change rule because that white neighbor is forced blue with probability 1. For a random probabilistic zero-forcing process on a graph G which initially starts with a set of vertices S colored blue, the propagation time of S, taking values in N ∪ {∞}, is defined as the number of time steps until all vertices in G are colored blue. The expected propagation time of S, denoted by ept(G, S), is the expected value of the propagation time of S. We also define the expected propagation time of a graph G, denoted by ept(G), as the minimum expected propagation time over all single-vertex subsets S = {v}, v ∈ G.
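The color change rule above can be made concrete in code. The sketch below (an illustrative implementation of ours; the graph and function names are not from the paper) computes the forcing probability C[u]/deg u and runs one round of probabilistic zero forcing:

```python
import random

def force_probability(adj, blue, u):
    """Probability that blue vertex u forces a given white neighbor:
    C[u] / deg(u), where C[u] counts blue vertices in the closed
    neighborhood N[u] (u itself plus its neighbors)."""
    c_u = 1 + sum(1 for w in adj[u] if w in blue)  # u itself is blue, so count it
    return c_u / len(adj[u])

def one_round(adj, blue, rng):
    """One time step: every blue-white edge (u, v) independently forces
    v with probability C[u]/deg(u); all forced vertices turn blue together."""
    forced = set()
    for u in blue:
        p = force_probability(adj, blue, u)
        for v in adj[u]:
            if v not in blue and rng.random() < p:
                forced.add(v)
    return blue | forced

# Path 0-1-2: with only vertex 0 blue, C[0] = 1 and deg 0 = 1, so 0 forces 1
# with probability 1, matching the classical zero-forcing rule.
adj = {0: [1], 1: [0, 2], 2: [1]}
assert force_probability(adj, {0}, 0) == 1.0
```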
As noted in [7], probabilistic zero-forcing is very similar to the well-studied push and pull models for rumor spreading from theoretical computer science [5,14]. For the push model, one starts with a set of blue vertices, and at each time step, each blue vertex chooses one neighbor independently and uniformly at random and forces that vertex blue, if that vertex is white. For the pull model, at each time step each white vertex chooses a neighbor independently and uniformly at random, and the white vertex turns blue if the chosen neighbor is blue. The two models can also be combined to create a push and pull model in which at each time step, blue vertices choose a random neighbor to force and white vertices choose a random neighbor to try to become blue. As with probabilistic zero-forcing, the primary parameter of interest is the expected propagation time.
Returning now to probabilistic zero forcing, recent work has been done to compute bounds on propagation time for many families of graphs, including paths, cycles, complete graphs, and bipartite graphs [4,9]. In [15], it is shown for a connected graph G with order n and radius r that ept(G) = O(r log(n/r)), and the authors construct an example to show tightness of the asymptotic. They also prove for a connected graph G with order n that ept(G) ≤ n/2 + o(n). Proving further bounds for expected propagation times of different families of graphs has been proposed as an area of study in [9] and [15]. In [7], the authors have established high probability results for the expected propagation time for the Erdős-Renyi graph G(n, p), where p is a function of n.
In this paper, we prove bounds on the expected propagation times of several other well-known families of graphs. Our main results are as follows: ept(G) = Θ(m + n) for m × n grid graphs (Theorem 1.1); ept(G) = O((log d / d) · n) for connected d-regular graphs G on n vertices, together with a construction of d-regular graphs exhibiting a lower bound of Ω((log log d / d) · n) (Theorem 1.2); and an upper bound of O(n log n) for hypercube graphs on 2^n vertices (Theorem 1.3).
Preliminaries
We review some well-known tools from probability theory that will be used in the paper.

Theorem 2.1 (Chebyshev's Inequality). Given a random variable X, for all λ ≥ 0 we have Pr[|X − E[X]| ≥ λ] ≤ Var(X)/λ².

Theorem 2.2 (Chernoff Bound). Let X₁, X₂, . . . , Xₙ be independent random variables taking values in {0, 1}, let X = X₁ + · · · + Xₙ, and let µ = E[X]. Then for all 0 ≤ δ ≤ 1, Pr[X ≤ (1 − δ)µ] ≤ e^{−δ²µ/2}.

Theorem 2.3 (Edge Isoperimetric Inequality for Hypercubes). Let S be any set of vertices in a dimension n hypercube graph G with 2^n vertices. Then the number of edges in G between a vertex in S and a vertex not in S is at least |S|(n − log₂ |S|).
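As a quick numerical sanity check (added here for illustration; the parameter values below are our own), the Chernoff lower-tail bound can be compared against the exact binomial tail probability:

```python
from math import comb, exp

def binom_lower_tail(n, p, k):
    """Exact P[X <= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def chernoff_lower(mu, delta):
    """Chernoff bound: P[X <= (1 - delta) mu] <= exp(-delta^2 mu / 2)."""
    return exp(-delta**2 * mu / 2)

# Binomial(200, 1/4): mu = 50, delta = 0.4, so the bound covers P[X <= 30].
n, p, delta = 200, 0.25, 0.4
mu = n * p
exact = binom_lower_tail(n, p, int((1 - delta) * mu))
assert exact <= chernoff_lower(mu, delta)
```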
We will also use a coupling argument proven in [15].

Lemma 2.4 ([15], Lemma 2.14). Suppose that initially in some graph G, some set of vertices S is colored blue. We follow a modified probabilistic process where at the t-th point in time, for any connected blue vertex u and white vertex v, the probability Pr_t[u → v] that u converts v to become blue at the t-th step is some function of G, u, v, and B_{t−1}, the set of blue vertices after the (t − 1)th step. In addition, suppose that for all u, v, and t, Pr_t[u → v] ≤ C_{t−1}[u]/deg u, where C_{t−1}[u] is the number of blue vertices in the closed neighborhood of u after step t − 1. Then the expected propagation time of S under this modified probabilistic color change rule is greater than or equal to the expected propagation time of S under the original probabilistic color change rule.
Grid graphs
In this section we prove Theorem 1.1. For a grid graph G_{m×n} on m × n vertices, given any initial vertex, one of the four corner vertices of G_{m×n} is a distance of at least ½(m + n − 2) away from the initial vertex. Hence it must take at least this amount of time to color it blue, and the lower bound ept(G_{m×n}) = Ω(m + n) follows. We consider a modified color change rule, in which a vertex that is currently white and has a blue neighbor becomes blue with probability exactly ¼. Note that each vertex v in G_{m×n} has deg v ≤ 4, so the probability of a blue vertex forcing any adjacent white vertex is at least ¼. By Lemma 2.4, the expected propagation time under this modified process, which we denote by ept′, is at least as large as that of the original process.

Lemma 3.1. ept′(G_{m×n}, {v_corner}) ≤ (4 + o(1))(m + n) for any of the four corner vertices v_corner ∈ G.
Proof. Without loss of generality, assume m ≤ n. Assign Cartesian coordinates (i, j) ∈ [0, m − 1] × [0, n − 1] to the vertices of G_{m×n}. Without loss of generality, suppose that v_corner = (0, 0). Starting from v_corner = (0, 0), let T₁ denote the time at which all vertices on the x-axis are blue. Starting from a configuration in which all vertices on the x-axis are blue, let T₂ denote the amount of time it takes for all vertices to be blue. Note that ept′(G_{m×n}, {v_corner}) ≤ E[T₁] + E[T₂]. The expected time needed for all vertices on the x-axis to be blue is at most the expected propagation time for a modified probabilistic color change process on a line, starting with an end vertex blue and such that every blue vertex forces a white neighbor with probability ¼, so E[T₁] ≤ 4m. Next, E[T₂] is bounded above by the expected time it takes to color m independent paths of length n blue, starting from an end vertex of each path. Fix any constant ε > 0. We claim that with exponentially small probability in n, the propagation time on any one path is more than (4 + 4ε)n. The probability that the propagation time on a path of length n is more than (4 + 4ε)n is at most the probability that the sum of (4 + 4ε)n independent Bern(¼) random variables is at most n. By the Chernoff Bound for µ = (1 + ε)n and δ = ε/(1 + ε), we have that this is bounded above by e^{−ε²n/(2(1+ε))}. By a union bound over all m ≤ n paths, the probability that not all paths are completely blue after (4 + 4ε)n steps is at most m e^{−ε²n/(2(1+ε))}. Using the relationship between the number of trials and the expected amount of time to success, we have that for every ε > 0, E[T₂] ≤ (4 + 5ε)n for sufficiently large n. We conclude that ept′(G_{m×n}, {v_corner}) ≤ (4 + o(1))(m + n).

Note that Lemma 3.1 implies Theorem 1.1 because ept(G) ≤ ept(G_{m×n}, {v}) ≤ ept′(G_{m×n}, {v}). For each 2 ≤ m, n ≤ 14, we run a program to simulate the probabilistic zero-forcing process 1000 times on an m × n grid graph. Figure 1 shows the average propagation time for 1000 trials for each pair (m, n) with 2 ≤ m, n ≤ 14.

Remark 3.2. Experimentally, for the family of m × n rectangular grid graphs, ept(G_{m×n}) appears to grow asymptotically as ½(m + n).
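A simulation along these lines (our own sketch of the experiment described above, with our own function names) runs probabilistic zero forcing on a grid from a corner and reports the propagation time:

```python
import random

def simulate_grid(m, n, rng):
    """Run probabilistic zero forcing on an m x n grid from corner (0, 0);
    return the number of rounds until every vertex is blue."""
    def neighbors(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < m and 0 <= j + dj < n:
                yield (i + di, j + dj)

    blue = {(0, 0)}
    rounds = 0
    while len(blue) < m * n:
        forced = set()
        for u in blue:
            nbrs = list(neighbors(*u))
            c_u = 1 + sum(1 for w in nbrs if w in blue)   # closed neighborhood count
            p = c_u / len(nbrs)
            for v in nbrs:
                if v not in blue and rng.random() < p:
                    forced.add(v)
        blue |= forced
        rounds += 1
    return rounds

# Blue spreads at most one unit of graph distance per round, and the corner
# farthest from (0, 0) is m + n - 2 steps away, so every run takes at least
# that many rounds, matching the lower bound in the text.
t = simulate_grid(6, 8, random.Random(42))
assert t >= 6 + 8 - 2
```

Averaging `simulate_grid` over many seeded runs reproduces the kind of experiment summarized in Figure 1.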
Regular graphs
In this section, we prove Theorem 1.2. Let G be a connected d-regular graph on n vertices, and let k denote the diameter of G. Suppose that u and v are two vertices in G with shortest distance equal to k. Let S_i be the set of vertices with distance exactly i from u, so that S₀ = {u}, v ∈ S_k, and S_i = ∅ for i > k.
For each 0 ≤ i ≤ k we have |S_{i−1}| + |S_i| + |S_{i+1}| ≥ d + 1, since S_i is nonempty and any vertex in S_i has degree d but can only be connected to vertices in S_{i−1}, S_i, or S_{i+1}. Summing over disjoint triples of consecutive levels gives n = Σ_i |S_i| ≥ ⌊(k + 1)/3⌋(d + 1), so k ≤ 3n/(d + 1) + 1.

Lemma 4.2. Let H be a star graph with n leaves, with its center blue and all other vertices colored white. If t > 12 log(n + 1), the blue vertex will propagate to all leaves in t steps with probability at least 1 − (0.97)^t.
We will now show the upper bound on expected propagation time for a d-regular graph. More precisely, we prove the below lemma.

Lemma 4.3. For any connected d-regular graph G on n vertices, ept(G) = O((log d / d) · n).

Proof. Let G be a d-regular graph and choose some vertex v. Let w ≠ v be some vertex such that the shortest path from v to w has length s. By carrying out the computations in Lemma 3.5 of [15], letting α = 0.97, C = 11.2, β = 0.985, C′ = 2.4, C₂ = 20, if there exists a vertex u such that all vertices in G are at distance at most r from u, then ept(G) ≤ 20 r log(n/r) + O(log n). Taking r = 3n/(d + 1), we have ept(G) ≤ 20 · (3n/(d + 1)) log((d + 1)/3) + O(log n) = O((log d / d) · n).

Now we construct a family of d-regular graphs with expected propagation time Ω((log log d / d) · n). We assume that d ≥ 5.
Lemma 4.4. Assume that d + 1 | n. Start with n/(d + 1) copies C₁, C₂, ..., C_{n/(d+1)} of K_{d+1}. For each copy C_i, designate two distinct vertices v_{(i,1)}, v_{(i,2)} ∈ C_i and delete the edge connecting v_{(i,1)} and v_{(i,2)}. Then, for all 1 ≤ i ≤ n/(d + 1), insert an edge connecting v_{(i,1)} and v_{(i+1,2)}, where indices are taken mod n/(d + 1). The resulting graph G is d-regular and satisfies ept(G) = Ω((log log d / d) · n).

To prove this lemma, we begin with the following lemma.
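The construction in Lemma 4.4 can be written out explicitly. The sketch below (an illustrative implementation with our own naming; positions 0 and 1 of each copy play the roles of v_{(i,1)} and v_{(i,2)}) builds the graph and verifies d-regularity:

```python
def chain_of_cliques(n, d):
    """Build the Lemma 4.4 graph: n/(d+1) copies of K_{d+1}, delete the
    edge between the two designated vertices in each copy, and connect
    copy i's first designated vertex to copy i+1's second (cyclically).
    Vertices are pairs (copy_index, position)."""
    assert n % (d + 1) == 0 and d >= 3
    k = n // (d + 1)
    adj = {(i, j): set() for i in range(k) for j in range(d + 1)}
    for i in range(k):
        for a in range(d + 1):
            for b in range(a + 1, d + 1):
                if (a, b) != (0, 1):              # deleted edge v_{(i,1)} v_{(i,2)}
                    adj[(i, a)].add((i, b))
                    adj[(i, b)].add((i, a))
        adj[(i, 0)].add(((i + 1) % k, 1))         # inter-clique edge
        adj[((i + 1) % k, 1)].add((i, 0))
    return adj

# n = 24, d = 5: four copies of K_6, chained into a 5-regular graph.
adj = chain_of_cliques(24, 5)
assert all(len(nbrs) == 5 for nbrs in adj.values())
```

The designated vertices each lose one clique edge and gain one inter-clique edge, so every vertex ends with degree exactly d.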
Lemma 4.5. Suppose that for some i, v (i−1,1) and v (i,2) are blue and v (i, 1) and v (i+1,2) are white. Then, given that v (i+1,2) remains white through the entire process, the expected amount of time for vertex v (i,1) to be blue is Ω(log log d).
Proof. Let p(b) be the probability, starting from a state in which v_{(i,1)} is white and there are b blue vertices in C_i, including v_{(i,2)} but not v_{(i,1)}, that in one time step at most 2b² additional vertices are colored blue and v_{(i,1)} remains white. Vertex v_{(i,1)} has an independent probability of (1 − (b+1)/d)(1 − b/d)^{b−1} of remaining white in one unit of time, where the factor 1 − (b+1)/d comes from the fact that v_{(i,2)} is connected to an additional blue vertex v_{(i−1,1)} ∈ C_{i−1}. Therefore p(b) ≥ (1 − (b+1)/d)(1 − b/d)^{b−1} q(b), where the factor (1 − b/d)^{b−1} comes from the fact that each of the other b − 1 blue vertices has a probability b/d of forcing v_{(i,1)} blue, and q(b) is the probability that, among d − b independent events that happen with probability at most 1 − (1 − (b+1)/d)^b, fewer than 2b² events occur. Denote by X the random variable counting the number of such events that occur.
So by Chebyshev's inequality, Pr[X ≥ 2b²] ≤ Var(X)/(2b² − E[X])² = O(1/b²). This means that when b ≤ d^{1/4}, we have p(b) ≥ 1 − 9/b², so 1 − p(b) ≤ 9/b². We now use a similar argument as in Proposition 2.8 of [9]. Since starting with log d ≤ b ≤ d^{1/4} blue vertices and coloring at most 4b² additional vertices blue means there are at most 5b² ≤ b³ blue vertices after the round for b ≥ 5, the probability that there are at most b^{3^r} blue vertices after r rounds is at least ∏_{j=0}^{r−1} (1 − 9/b^{2·3^j}). Thus, starting from a state in which fewer than log d vertices are blue, v_{(i,2)} is blue, and vertices v_{(i,1)} and v_{(i+1,2)} are white, the expected amount of time until at least d^{1/4} vertices are blue or vertex v_{(i,1)} is blue is Ω(log log d).
We now finish the proof of Lemma 4.4.
Proof of Lemma 4.4. In order for all vertices in G to become blue, the process described in Lemma 4.5 must occur at least n/(2(d + 1)) − 1 times. The expected amount of time for the process to occur once is Ω(log log d). Thus for fixed d, we have ept(G) = Ω((log log d / d) · n).
The proof of Theorem 1.2 follows by Lemma 4.3 and Lemma 4.4.
Hypercube graphs
In this section, we prove Theorem 1.3. We prove the following key lemma:

Lemma 5.1. Let G be an n-dimensional hypercube graph. At a given time in the probabilistic zero forcing process, let S denote the set of blue vertices in G, and let 0 ≤ k < n − 1 be the integer such that 2^k ≤ |S| < 2^{k+1}. Then the expected number of time steps to go from |S| blue vertices to at least 2^{k+1} blue vertices is at most 131n · 1/(n − k − 1).

Proof. Let c_i be the number of white vertices in G that are connected to exactly i blue vertices. The expected number of white vertices colored blue in one time step is at least the expected number of successes in Σᵢ c_i independent trials, in which c_i trials have probability 1 − ((n − 1)/n)^i of success. The expected number of successes among these trials, which we denote by µ, satisfies ((e − 1)/(2e)) · 2^k(n − k − 1)/n ≤ µ ≤ 2^{k+1} for n ≥ 1 by the Edge Isoperimetric Inequality, noting that Σᵢ i·c_i is equal to the number of edges between a blue vertex and a white vertex. While there are 2^k ≤ |S| < 2^{k+1} blue vertices, the probability that fewer than ½µ white vertices are colored blue is at most e^{−µ/8} ≤ e^{−(e−1)/(16e)} < 25/26. Thus, the probability that at least ½µ white vertices are colored blue is at least 1/26. The probability that there remain between 2^k and 2^{k+1} blue vertices after 52 · 2^k/(µ/2) units of time is at most the probability that among 52 · 2^k/(µ/2) events that each independently happen with probability 1/26, fewer than 2^k/(µ/2) occur. By the Chernoff bound, this probability is at most 0.37. Therefore, if 2^k ≤ |S| < 2^{k+1}, with probability at least 0.63, after 52 · 2^k/(µ/2) units of time the number of blue vertices will be greater than 2^{k+1}. We conclude that the expected time to go from |S| blue vertices to at least 2^{k+1} blue vertices is at most (1/0.63) · 52 · 2^k/(µ/2) ≤ 131 · n/(n − k − 1).

Lemma 5.2. Let G be an n-dimensional hypercube graph. At a given time in the probabilistic zero forcing process, let S denote the set of white vertices in G, and let 0 ≤ k < n − 1 be the integer such that 2^k < |S| ≤ 2^{k+1}. Then the expected number of time steps to go from |S| white vertices to at most 2^k white vertices is at most 131n · 1/(n − k − 1).
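The Edge Isoperimetric Inequality invoked in the proof above can be verified exhaustively for small hypercubes. The brute-force check below is our own illustration, not part of the paper:

```python
from itertools import combinations
from math import log2

def boundary_edges(n, S):
    """Count edges of the n-cube between S and its complement; vertices are
    integers 0..2^n - 1, and neighbors differ in exactly one bit."""
    return sum(1 for v in S for b in range(n) if (v ^ (1 << b)) not in S)

def isoperimetric_ok(n):
    """Check boundary >= |S| (n - log2 |S|) for every nonempty subset S."""
    verts = range(2 ** n)
    for size in range(1, 2 ** n + 1):
        for S in combinations(verts, size):
            if boundary_edges(n, set(S)) < size * (n - log2(size)) - 1e-9:
                return False
    return True

assert isoperimetric_ok(3)  # checks all subsets of the 3-dimensional hypercube
```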
Proof. The proof proceeds similarly to the proof of Lemma 5.1. By the Edge Isoperimetric Inequality, there are at least |S|(n − log₂ |S|) ≥ 2^k(n − k − 1) edges between a white and a blue vertex.
Let c_i be the number of white vertices in G that are connected to exactly i blue vertices. Then note that the expected number of white vertices colored blue is at least the expected number of successes in Σᵢ c_i independent trials in which c_i trials have probability 1 − ((n − 1)/n)^i of success. The expected number of successes among these trials, which we denote by µ, satisfies ((e − 1)/(2e)) · 2^k(n − k − 1)/n ≤ µ ≤ 2^{k+1} for n ≥ 1. While there are 2^k < |S| ≤ 2^{k+1} white vertices, the probability that fewer than ½µ white vertices are colored blue is at most e^{−µ/8} < 25/26. Thus, the probability that at least ½µ white vertices are colored blue is at least 1/26. The probability that there remain between 2^k and 2^{k+1} white vertices after 52 · 2^k/(µ/2) units of time is at most the probability that among 52 · 2^k/(µ/2) events that each independently happen with probability 1/26, fewer than 2^k/(µ/2) occur. This probability is at most 0.37, as in Lemma 5.1.
Therefore, if 2^k < |S| ≤ 2^{k+1}, with probability at least 0.63, after 52 · 2^k/(µ/2) units of time the number of white vertices will be at most 2^k. We conclude that the expected time to go from |S| white vertices to at most 2^k white vertices is at most (1/0.63) · 52 · 2^k/(µ/2) ≤ 131n · 1/(n − k − 1).
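The Edge Isoperimetric Inequality invoked in both lemmas states that any nonempty set S of vertices of the hypercube Q_n has at least |S|(n − log_2 |S|) edges to its complement. This can be verified exhaustively for small n (an illustrative Python check, not part of the paper):

```python
from itertools import combinations
import math

def hypercube_edges(n):
    # Vertices of Q_n are the integers 0..2^n - 1; u and v are adjacent
    # iff they differ in exactly one bit. List each edge once.
    return [(u, u ^ (1 << b)) for u in range(1 << n)
            for b in range(n) if u < u ^ (1 << b)]

def boundary_edges(S, edges):
    # Count edges with exactly one endpoint in S.
    return sum((u in S) != (v in S) for u, v in edges)

def check_isoperimetry(n):
    # Verify boundary(S) >= |S| * (n - log2 |S|) for every nonempty subset S.
    edges = hypercube_edges(n)
    for size in range(1, (1 << n) + 1):
        for S in combinations(range(1 << n), size):
            S = set(S)
            if boundary_edges(S, edges) < len(S) * (n - math.log2(len(S))) - 1e-9:
                return False
    return True
```

Running `check_isoperimetry(3)` exhaustively tests all 255 nonempty subsets of the 3-cube; the bound is tight exactly on subcubes.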
We are finally ready to prove Theorem 1.3.
Proof of Theorem 1.3. By Linearity of Expectation, the expected amount of time, starting from 1 initial blue vertex, for there to be at least 2^{n−1} blue vertices is at most the sum over k = 0, 1, . . . , n − 2 of the expected times to go from 2^k ≤ |S| < 2^{k+1} blue vertices to at least 2^{k+1} blue vertices. By Lemma 5.1, this sum is at most ∑_{k=0}^{n−2} 131n · 1/(n − k − 1) ≤ 131n · (1 + log n).
Once there are at least 2^{n−1} blue vertices, there are at most 2^{n−1} white vertices. The expected amount of time to go from at most 2^{n−1} white vertices to at most one white vertex is, by Lemma 5.2, at most ∑_{k=0}^{n−2} 131n · 1/(n − k − 1) ≤ 131n · (1 + log n).
When there is at most one white vertex remaining, this vertex has a probability of 1 − ((n − 1)/n)^n ≥ 1 − 1/e of being colored blue at any time step, so the expected number of time steps until it is blue is at most e/(e − 1). We conclude that the expected propagation time of the entire hypercube is O(n log n).

Remark 5.3. Experimentally, for this family of graphs, the expected propagation time appears to approximate n + 0.8 for small n.
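Both applications of the lemmas in the proof above use the estimate ∑_{k=0}^{n−2} 1/(n − k − 1) = 1 + 1/2 + ⋯ + 1/(n−1) ≤ 1 + log n, i.e., a bound on the harmonic number H_{n−1} (taking log as the natural logarithm). A quick numerical check:

```python
import math

def harmonic_sum(n):
    # sum_{k=0}^{n-2} 1/(n-k-1), i.e., the harmonic number H_{n-1}.
    return sum(1 / (n - k - 1) for k in range(n - 1))

# The bound H_{n-1} <= 1 + log(n) holds for every n >= 2.
```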
For each 1 ≤ n ≤ 16, we run a program to simulate the probabilistic zero forcing process 1000 times on a hypercube graph with dimension n and 2^n vertices, starting from a single blue vertex. Figure 2 shows the average propagation time over the 1000 trials for each value of n.
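A simulation along these lines can be sketched as follows. This is an illustrative reimplementation, not the authors' program, and it assumes the standard probabilistic zero forcing rule in which, at each time step, every blue vertex u independently attempts to force each white neighbour with probability |N[u] ∩ B|/deg(u), where B is the current blue set and N[u] the closed neighbourhood of u:

```python
import random

def propagation_time(n, rng=random):
    # Vertices of the n-cube are 0..2^n - 1; neighbours differ in one bit.
    total = 1 << n
    blue = {0}          # start from a single blue vertex
    steps = 0
    while len(blue) < total:
        newly = set()
        for u in blue:
            # |N[u] ∩ B|: u itself plus its blue neighbours; deg(u) = n.
            b = 1 + sum((u ^ (1 << i)) in blue for i in range(n))
            p = b / n
            for i in range(n):
                w = u ^ (1 << i)
                if w not in blue and rng.random() < p:
                    newly.add(w)
        blue |= newly
        steps += 1
    return steps

def average_propagation_time(n, trials=1000, seed=0):
    rng = random.Random(seed)
    return sum(propagation_time(n, rng) for _ in range(trials)) / trials
```

For n = 1 the single white vertex is forced with probability 1, so the propagation time is exactly one step.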
Oral treatment with Eubacterium hallii improves insulin sensitivity in db/db mice
An altered intestinal microbiota composition is associated with insulin resistance and type 2 diabetes mellitus. We previously identified increased intestinal levels of Eubacterium hallii, an anaerobic bacterium belonging to the butyrate-producing Lachnospiraceae family, in metabolic syndrome subjects who received a faecal transplant from a lean donor. To further assess the effects of E. hallii on insulin sensitivity, we orally treated obese and diabetic db/db mice with active E. hallii, using glycerol or heat-inactivated E. hallii as controls. Insulin tolerance tests and hyperinsulinemic-euglycemic clamp experiments revealed that active E. hallii treatment improved insulin sensitivity compared with control treatment. In addition, E. hallii treatment increased energy expenditure in db/db mice. Active E. hallii treatment was found to increase faecal butyrate concentrations and to modify bile acid metabolism compared with heat-inactivated controls. Our data suggest that E. hallii administration potentially alters the function of the intestinal microbiome and that microbial metabolites may contribute to the improved metabolic phenotype.
INTRODUCTION
The prevalence of obesity and type 2 diabetes mellitus is expected to rise such that 1 in 3 adult subjects will have type 2 diabetes mellitus in 2050. 1 The pathophysiology of these metabolic disorders is complex, involving environmental (dietary) and genetic factors as well as an altered intestinal microbiota composition. 2 Insulin resistant subjects are characterised by reduced levels of short-chain fatty acid (SCFA)-producing bacteria. 3,4 Moreover, daily oral supplementation with the SCFA butyrate exerts beneficial effects on insulin resistance and dyslipidemia in diet-induced obese mice. 5 Transplantation of lean healthy microbiota in both murine and human models of insulin resistance has been shown to significantly improve insulin sensitivity and to increase levels of butyrate-producing bacteria in the gut. 6,7 With regards to the latter, we identified a specific increase in the butyrate producer Eubacterium hallii in small intestinal biopsies of human obese and insulin resistant subjects upon lean donor faecal transplantation, 7 which was associated with improved (peripheral) insulin sensitivity.
E. hallii is an anaerobic, Gram-positive, catalase-negative bacterium belonging to the Lachnospiraceae family of the phylum Firmicutes that is present in both murine and human faeces. 8 E. hallii is a butyrate-producing species. Interestingly, in contrast to other intestinal bacterial isolates like Roseburia and Faecalibacterium that produce butyrate from monosaccharides, E. hallii also has the capacity to produce butyrate from lactate and acetate in a low-pH environment such as the proximal small intestine. 9 However, in vivo treatment with oral E. hallii has never been performed. We therefore performed a study in obese and insulin resistant db/db mice to investigate whether oral administration (by gavage) of E. hallii would have beneficial effects on insulin sensitivity. Upon identification of the optimal E. hallii treatment dose of 10^8 CFU per day, we used this dose to subsequently investigate the effect of active and heat-inactivated E. hallii treatment on insulin sensitivity and energy metabolism using hyperinsulinemic-euglycemic clamp and metabolic cage approaches.
We found that oral treatment with active E. hallii improved insulin sensitivity in severely insulin resistant db/db mice and significantly increased energy expenditure. Furthermore, our data indicate that E. hallii mildly modifies SCFA production and bile acid composition, which potentially contributes to the beneficial effects of E. hallii treatment on insulin sensitivity in obese and diabetic db/db mice.
RESULTS
E. hallii treatment dose-dependently improves insulin-mediated glucose clearance
Oral butyrate supplementation has been previously reported to regulate insulin sensitivity. 5 As E. hallii is a butyrate-producing bacterium, we assessed whether administration of E. hallii could have beneficial effects on insulin sensitivity in a mouse model of diabetes. We therefore explored the effects of oral administration of increasing dosages of E. hallii on basal parameters (i.e., body weight and food intake) and insulin responsiveness in severely obese and diabetic db/db mice. We found a dose-dependent increase in caecal E. hallii concentrations upon treatment with 100 μl of active 10^6, 10^8 and 10^10 CFU E. hallii (once daily for four weeks) (Figure 1a). Nevertheless, global analysis showed no major effect on the intestinal communities (data not shown). Importantly, body weight remained stable in all treatment groups compared with glycerol-treated controls (10^6 CFU: 38 ± 1.5 g, 10^8 CFU: 40 ± 0.3 g and 10^10 CFU: 41 ± 0.3 g versus placebo: 39 ± 1.3 g; NS; Figure 1b). We then set out to assess insulin responsiveness by performing intraperitoneal insulin tolerance tests (ITT) in all treatment groups. Interestingly, E. hallii-treated groups displayed a significantly improved insulin-mediated reduction in blood glucose levels (10^6 CFU: −32 ± 7%, 10^8 CFU: −39 ± 9% and 10^10 CFU: −34 ± 7%; P < 0.05) after 4 weeks of treatment compared with glycerol-treated controls (−2 ± 7%; P < 0.05; Figure 1c). Altogether, these data indicate that E. hallii treatment improves insulin-mediated reduction in glucose levels without affecting food intake and body weight in severely obese and diabetic mice. The 10^8 CFU E. hallii-treated mice exhibited the most pronounced response to insulin at all time points (t = 60, 90 and 120 min). In addition, 10^8 CFU E. hallii administration resulted in significantly reduced epididymal fat pad weight (Figure 1d) and hepatic triglyceride levels (Figure 1e), which was also reflected in the expression pattern of genes involved in lipogenesis (Fasn and Acc1 were significantly reduced; Figure 1f) and gluconeogenesis (a trend towards reduction of G6Pc, Pk and Pck1 was noticed; Supplementary Figure S1). This suggested to us that 10^8 CFU of E. hallii would be the optimal dosage for further investigations.
E. hallii treatment improves insulin sensitivity and increases energy expenditure
On the basis of the results from the dose-response study, we chose 10^8 CFU E. hallii as the daily therapeutic dose and repeated the study using active and heat-inactivated E. hallii as control. In line with observations from the dose-response study, body weight and food intake (Figure 2a,b) were similar in active and heat-inactivated E. hallii-treated mice. In addition, lean and fat mass (as % of body weight) were similar in both treatment groups (Figure 2c). Considering the effects of E. hallii treatment on insulin-mediated reduction in glucose levels as assessed by ITT, we moved forward with an in-depth assessment of insulin sensitivity by performing hyperinsulinemic-euglycemic clamp experiments in conscious, unrestrained mice. We assessed the ability of insulin to suppress endogenous Ra (endogenous rate of appearance, a marker of hepatic glucose production) and to stimulate whole-body glucose disappearance (Rd; Supplementary Table S1). Although endogenous glucose production was not significantly altered (active: −33.9 ± 3.7% versus heat-inactivated: −41.1 ± 5.4%, P = 0.299), treatment with E. hallii led to a close-to-significant increase in the ability of insulin to stimulate Rd (active: 136% versus heat-inactivated: 109%, P = 0.060; Figure 3a). Considering the fact that db/db mice are severely insulin resistant, the improved Rd following 4 weeks of E. hallii treatment is of significant biological relevance.
Butyrate supplementation has previously been shown to improve energy expenditure in diet-induced obese mice. 5 Together with our data on the beneficial effects of E. hallii, a butyrate producer, on insulin sensitivity in db/db mice, this motivated us to assess the effect of E. hallii on energy expenditure in this mouse model. Energy expenditure, oxygen consumption and CO2 production were monitored in metabolic chambers. Interestingly, active E. hallii treatment significantly increased total energy expenditure (active: 214 ± 4 kcal/kg/min versus heat-inactivated: 191 ± 9 kcal/kg/min, P < 0.05; Figure 3b), oxygen consumption (active: 44.1 ± 0.9 ml/min/kg versus heat-inactivated: 39.6 ± 1.8 ml/min/kg, P < 0.05; Figure 3c) and CO2 production (active: 38.0 ± 1.0 ml/min/kg versus heat-inactivated: 33.4 ± 1.8 ml/min/kg, P < 0.05; Figure 3d) in the dark cycle. The respiratory quotient (expressed as VCO2/VO2) was not significantly altered (Figure 3e). In addition, to assess potential E. hallii-mediated changes in energy absorption, we analysed genes involved in glucose and lipid absorption in the proximal part of the intestine. E. hallii treatment reduced expression of intestinal genes involved in glucose transport (Sglt1 and Glut2) and lipid absorption (Cd36 and Fatp4; Supplementary Figure S2).
To assess whether treatment with E. hallii increased SCFA levels, potentially providing insight into E. hallii-mediated effects on energy metabolism, we collected 24-h faeces and measured concentrations of the SCFAs butyrate, acetate and propionate. Active E. hallii treatment moderately increased faecal butyrate concentrations compared with heat-inactivated controls, while propionate and acetate concentrations remained unaffected (Figure 4a).
Alterations in gut microbiota composition have a significant impact on bile acid levels and bile acid composition. 10 In addition to their role in solubilising dietary fat and the uptake of fat-soluble vitamins, bile acids are also important regulators of glucose and energy homeostasis. 11 We therefore assessed whether active E. hallii treatment affected plasma and faecal bile acid levels and composition. Plasma primary and secondary bile acid levels were similar in active versus heat-inactivated E. hallii-treated mice (Figure 4b, pie chart). Further analysis of primary and secondary bile acid species revealed that the concentration of the secondary bile acid tauro-conjugated deoxycholic acid was significantly increased (Figure 4b, bar graph). Faecal primary and secondary bile acid levels remained unaffected by active E. hallii treatment (Figure 4c, pie chart). Levels of the primary bile acid β-MCA and the secondary bile acid ω-MCA, however, were significantly reduced in active versus heat-inactivated E. hallii-treated mice (Figure 4c, bar graph).
We then assessed expression levels of genes involved in bile acid metabolism and transport in liver and small intestine. Bile acid synthesis is tightly regulated by the bile acid receptor farnesoid X receptor (Fxr) in liver and intestine. Hepatic Fxr exerts negative feedback control on cholesterol 7 alpha-hydroxylase (Cyp7a1), the rate-limiting enzyme in hepatic bile salt synthesis. 12 Expression levels of hepatic Fxr and Cyp7a1 were similar in active and heat-inactivated E. hallii-treated db/db mice (Figure 4d). Expression of bile acid-synthetic genes (Cyp7a1, Cyp8b1, Cyp7b1 and Cyp27a1) and bile acid transporters (Ntcp, Oatp1, Mrp3, Bsep and Mrp2) remained unaffected by active E. hallii treatment. Although expression of Fxr in the small intestine was not altered by active E. hallii treatment, levels of fibroblast growth factor 15 (Fgf15), an Fxr target gene, were significantly reduced, which is suggestive of reduced activation of Fxr in the intestine. 31 We investigated the effect of active E. hallii treatment on genes regulating intestinal bile acid absorption by analysing the transcription factor Gata4, the apical sodium-dependent bile acid transporter (Asbt), the apical organic solute transporter (Ostα) and the ileal lipid binding protein (Ilbp). 13 Active E. hallii treatment significantly reduced and increased the expression of Gata4 and Ostα, respectively, compared with heat-inactivated E. hallii treatment. Nevertheless, expression levels of Ilbp and Asbt remained unaffected (Figure 4e).
DISCUSSION
The current study demonstrates that daily oral administration of E. hallii improves insulin sensitivity and increases energy metabolism in severely obese and diabetic db/db mice. Our observations that administration of increasing dosages of E. hallii did not affect body weight or food intake indicate that E. hallii treatment might be a safe and effective new probiotic strain to improve insulin sensitivity.
In the dose-response study, we found that E. hallii treatment improved insulin sensitivity, yet the highest treatment dose had less effect on insulin sensitivity than the lower dosages. This phenomenon was also found in a human intervention trial using B. infantis and might be explained by the fact that such high concentrations (>10^10 CFU of bacterial strains) induce a crowding effect, resulting in less efficient dispersion of the bacteria over the (small) intestine. 14 Moreover, oral supplementation of heat-inactivated E. hallii had no effect on murine metabolism, which is in line with previous data studying the role of specific microbial strains in insulin sensitivity. 15 Finally, as we did not see any effect on body weight and lack data on locomotor activity upon 4 weeks of E. hallii treatment, further studies will have to elucidate the long-term effects of E. hallii on all these parameters.
It has long been recognised that intestinal bacteria affect SCFA concentrations. 8,9 Bacterial fermentation of indigestible fibres in the intestine, for example by the butyrate producer E. hallii, results in the production of SCFAs such as butyrate. Oral SCFA administration to mice fed a high-fat diet reduced body weight and improved insulin sensitivity without changing food intake or levels of physical activity. 5 SCFAs have been suggested to act on food intake through G-protein-coupled receptors such as GPR41 and GPR43, which subsequently increase release of the satiety hormones PYY and GLP-1. Furthermore, butyrate has been implicated in the regulation of intestinal gluconeogenesis, thereby improving glucose and energy homeostasis. 16 Although oral E. hallii treatment had only minor effects on intestinal E. hallii abundance, levels of the SCFA butyrate, a metabolite of E. hallii, were doubled (~217%, NS) in active E. hallii-treated mice compared with heat-inactivated E. hallii-treated controls. Increased butyrate levels might potentially mediate the observed beneficial effects on peripheral insulin sensitivity and energy expenditure in active E. hallii-treated db/db mice. However, this hypothesis would require further analysis. After release into the duodenum, bile acids travel the length of the small intestine and are reabsorbed and transported back to the liver, mainly in the distal ileum. 17,18 The Ruminococcaceae and Lachnospiraceae families of the Firmicutes phylum (the latter including E. hallii) can mediate the conversion of primary bile acids to secondary bile acids. 19 Furthermore, modulation of intrinsic bacterial bile acid hydrolysis significantly impacts bile acid composition and subsequent metabolic processes in the host. 20 Although we found only a small effect of E. hallii treatment on intestinal bile acid metabolism, it is tempting to speculate that E. hallii indeed affects energy metabolism and insulin sensitivity via bile acids.
Indeed, the E. hallii L2-7 genome contains two complete functional bile salt hydrolase (BSH) genes (W.M.d.V., personal communication), and their role in bile acid metabolism is currently under detailed investigation. Although total plasma and faecal secondary bile acid levels were similar in active and heat-inactivated E. hallii-treated db/db mice, active E. hallii treatment increased levels of the secondary bile acid tauro-conjugated deoxycholic acid. Interestingly, expression of other Fxr targets such as Ilbp and Asbt remained unaltered, but expression of the transcription factor Gata4 decreased significantly. A similar association between Gata4 and Fgf15 was recently reported by Out et al. 21 and might reflect a direct interaction of microbiota with Gata4 expression, as also suggested in ref. 13. Furthermore, changes in intestinal bacteria have been shown to primarily affect Fxr targets in the small intestine and not the liver. 19,22,23 This is in line with our observation of decreased expression of the intestinal Fxr target gene Fgf15 but not the hepatic Fxr target Shp after active E. hallii treatment. Microbiota modifications using probiotics have been reported to facilitate changes in intestinal bile acid transport, 24 which is in line with the appreciable elevation of the bile acid transporter Ostα in the present study. In conclusion, we show that daily treatment for 4 weeks with E. hallii L2-7 has no adverse effects and exerts beneficial effects on metabolism, potentially via alterations in butyrate formation and bile acid metabolism. 25,26 Our data thus underscore the therapeutic potential of replenishing missing intestinal bacterial strains for the treatment of human insulin resistance. 27 Further research to confirm the optimal dose and long-term effects of E. hallii on human insulin sensitivity and bile acid metabolism is warranted.
MATERIALS AND METHODS

E. hallii culture
E. hallii strain L2-7 was cultured under anaerobic conditions as described previously. 8,9 Purity was confirmed by cellular morphology and 16S rRNA gene sequence analysis. Cultures were grown to the end of the exponential phase, concentrated by anaerobic centrifugation, washed with phosphate-buffered saline, and diluted in a solution containing maltodextrin and glucose in 10% glycerol until final concentrations of 10^6 colony forming units (CFU), 10^8 CFU and 10^10 CFU in 100 μl were reached. Viability was assessed by most probable number analysis by dilution to extinction and confirmed by microscopic analysis. Samples were stored at −80°C and used within 6 months, during which viability was not noticeably affected.
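The dose-preparation arithmetic implied above (diluting a concentrated stock so that each 100 μl gavage delivers the target CFU) can be sketched as follows; the stock density and final volume in the example are hypothetical, not values from the paper:

```python
def dilution_volumes(stock_cfu_per_ml, target_cfu_per_dose,
                     dose_volume_ml=0.1, final_volume_ml=10.0):
    # Volume of stock and of diluent needed so that `final_volume_ml` of
    # suspension delivers `target_cfu_per_dose` per `dose_volume_ml` gavage.
    target_density = target_cfu_per_dose / dose_volume_ml  # CFU per ml
    stock_ml = target_density * final_volume_ml / stock_cfu_per_ml
    return stock_ml, final_volume_ml - stock_ml

# e.g. a hypothetical stock of 1e11 CFU/ml prepared for the 10^8 CFU dose:
# 0.1 ml stock + 9.9 ml diluent yields 10 ml at 1e9 CFU/ml.
```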
Animals
All animal experiments were conducted in accordance with the principles of the 'Guide to the Care and Use of Experimental Animals' and were approved by the local Animal Ethics Committee, Academic Medical Center-University of Amsterdam, and the University of Gothenburg Animal Studies Committee.
The methods were carried out in accordance with the approved guidelines. Male C57Bl6/J db/db mice (12 weeks old) were purchased from the Jackson Laboratories, USA. Animals were housed in the AMC SPF vivarium in groups of 5 animals/cage and fed ad libitum with regular chow diet (Research Diets, Inc.) and water. Mice were housed under constant temperature and a 12-h light/dark cycle. At 16 weeks of age, the animals were given a daily oral 100 μl gavage comprising 10^6, 10^8 or 10^10 CFU of E. hallii in 10% glycerol stock for 4 weeks (n = 8 mice per group). As a control, an oral 100 μl gavage of 10% glycerol in phosphate-buffered saline was used (n = 8 mice). Twenty-four-hour faeces were collected after 4 weeks of treatment for bile acid composition analysis. In the last week of treatment and after an overnight fast, mice (n = 8 per group) received an intraperitoneal insulin bolus (Actrapid, 0.75 U/kg body weight) and blood glucose was measured (Ascensia Elite glucose meter, Bayer, Leverkusen, Germany) at t = 0, 60, 90 and 120 min post injection for determination of insulin sensitivity. Thereafter, animals were sacrificed using 100 mg/kg pentobarbital, and faeces and caecum were collected.
Hyperinsulinemic-euglycemic clamp
Male C57Bl6/J db/db mice (12 weeks old) were purchased from the Jackson Laboratories, Bar Harbor, ME, USA. Animals were housed in the University of Gothenburg SPF vivarium and fed ad libitum with regular chow diet (Research Diets, New Brunswick, NJ, USA) and water. Mice were housed under constant temperature and a 12-h light/dark cycle and underwent a daily oral 100 μl gavage for 4 weeks with 10^8 CFU active or heat-inactivated E. hallii (15 min at 70°C) as control (n = 7-10 per group). In the last week of treatment, at least 4 days before the clamp, a catheter was surgically placed in the jugular vein for infusion of insulin and glucose under isoflurane anaesthesia. Prior to the clamp, mice were fasted for 4 h and placed in individual plastic containers. Basal blood glucose was determined from tail-blood measurements (Contour Next blood glucose meter, Bayer AB, Solna, Sweden). A bolus injection of [3-3H] glucose (5 μCi; PerkinElmer, Waltham, MA, USA) was given through the jugular vein catheter (t = −80 min prior to insulin infusion), followed by a continuous infusion of 0.05 μCi/min for assessment of the basal glucose turnover rate. Three consecutive blood samples were taken at steady state (t = −20, −10 and 0 min prior to insulin infusion) for the determination of both plasma [3-3H] glucose and glucose concentration. At t = 0, a priming dose of insulin (178 mU/kg; Actrapid Penfill, Novo Nordisk, Bagsvaerd, Denmark) was given, followed by a continuous insulin infusion rate of 20 mU/min/kg. The infusion of [3-3H] glucose was increased to 0.1 μCi/min during the clamp to minimise changes in specific activity during insulin infusion. Blood glucose was measured at 10-min intervals, via tail-blood sampling, to adjust the glucose infusion rate (GIR; 30% glucose, Fresenius Kabi, Bad Homburg, Germany) to maintain blood glucose concentration at the basal level.
At steady state, defined by stable glycemia and GIR (at approximately the t = 120 min interval), three consecutive blood samples were taken at 10-min intervals to determine whole-body glucose utilisation (Rd) and hepatic glucose production (Ra) under hyperinsulinemic-euglycemic conditions. Plasma insulin was measured at t = −10 min (basal) and 120 min (clamp). Animals were killed by an overdose of pentobarbital (Apoteket Farmaci AB, Stockholm, Sweden) and tissue was collected. The blood samples were deproteinised, evaporated and resuspended in deionised water for the determination of radioactivity (Beckman LS6500 Multipurpose Scintillation Counter, Providence, RI, USA). Whole-body glucose appearance (Ra) and endogenous glucose production (endogenous Ra), a measure of hepatic glucose production, were calculated as previously described. 28

Metabolic chamber experiments and body composition
During a parallel experiment, male db/db mice (aged 12 weeks, n = 7-10 per group) were treated orally with 100 μl of 10^8 CFU active or heat-inactivated E. hallii (15 min at 70°C) as control for 4 weeks. Thereafter, mice were individually housed in Somedic INCA metabolic cages (Somedic AB, Hörby, Sweden) to study total energy expenditure and respiratory quotient. Oxygen consumption (VO2) and CO2 production (VCO2) were recorded every 2 min for 23 h. Temperature in the metabolic chamber was kept constant at 21°C and animals had free access to food and water. Data from the first hour were discarded to account for animal acclimatisation. The average total energy expenditure per hour was determined using Weir's equation, (3.9 × VO2) + (1.1 × VCO2), and the respiratory quotient was calculated as the VCO2/VO2 ratio. Magnetic resonance imaging scanning for body composition was also performed as previously described. 25

Intestinal microbiota analysis
Abundances of E. hallii were determined in caecal content using the Mouse Intestine Tract Chip (MITChip) as previously reported. 15,29
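The energy expenditure and respiratory quotient calculations described above (Weir's equation, with VO2 and VCO2 in ml/min/kg) can be sketched as follows; the example inputs are the dark-cycle group means quoted in the Results:

```python
def energy_expenditure(vo2, vco2):
    # Weir's equation as given in the text: EE = (3.9 x VO2) + (1.1 x VCO2).
    return 3.9 * vo2 + 1.1 * vco2

def respiratory_quotient(vo2, vco2):
    # RQ = VCO2 / VO2.
    return vco2 / vo2

# Active group means:  VO2 = 44.1, VCO2 = 38.0 -> EE rounds to 214
# Heat-inactivated:    VO2 = 39.6, VCO2 = 33.4 -> EE rounds to 191
```

These reproduce the total energy expenditure values reported for the two treatment groups.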
Total genomic DNA was extracted from the frozen caecum with the QIAamp DNA stool mini-kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol. 16S rRNA gene amplification, in vitro transcription and labelling, and hybridisation were carried out as described. 30 The data were normalised and analysed using a set of R-based scripts in combination with a custom-designed relational database, which operates under the MySQL database management system. For the microbial profiling, the Robust Probabilistic Averaging signal intensities of 2667 specific probes for the 94 genus-level bacterial groups detected on the MITChip were used. 31 Diversity calculations were performed using a microbiome R-script package (https://github.com/microbiome). Multivariate statistics, redundancy analysis and principal response curves were performed in Canoco 5.0 and visualised in triplots or principal response curve plots. 32
SCFA and bile acid profiling
Twenty-four-hour faecal samples (pooled from each cage) were collected and stored for later analysis. SCFA content was analysed by gas liquid chromatography following conversion to t-butylmethylsilyl derivatives as previously described. 9 Concentrations of different bile acids were measured twice, in 24-h faecal samples collected in week 4 and in plasma. An internal standard was added before extraction with 0.2 mol/l NaOH at 80°C for 20 min. Bile salts were trimethylsilylated with pyridine, N,O-bis(trimethylsilyl)trifluoroacetamide and trimethylchlorosilane. The faecal bile acid profile was measured using capillary gas chromatography (Hewlett-Packard gas chromatograph HP 6890, Mountain View, CA, USA) equipped with an FID and a CP Sil 19 capillary column (length 25 m, internal diameter 250 μm, film thickness 0.2 μm; Chrompack BV, Middelburg, The Netherlands). Plasma bile acids were determined using liquid chromatography tandem mass spectrometry as described previously. 17 The primary bile acids cholic acid (CA), taurocholic acid, muricholic acid (MCA), tauro-alpha muricholic acid and tauro-beta muricholic acid, as well as the secondary bile acids taurohyodeoxycholic acid, deoxycholic acid, taurodeoxycholic acid and omega muricholic acid, were analysed in plasma and 24-h faeces. 21 The total amount of primary and secondary bile acids was calculated as the sum of the individually quantified bile salts.
Quantitative real-time PCR
Liver and intestinal tissues were homogenised with a MagNA Lyser (Roche, Basel, Switzerland). Total RNA was extracted using TriPure reagent (Roche). Complementary DNA was prepared by reverse transcription of 1 μg total RNA using a reverse transcription kit (BioRad, Hercules, CA, USA). Hepatic genes involved in lipogenesis (Srebp1c, Fasn, Acc1, Acc2 and Dgat) and gluconeogenesis (Gck1, G6Pc, Pk and Pck1) were examined. Genes involved in bile acid metabolism and transport were tested in liver (Cyp7a1, Cyp8b1, Cyp7b1, Cyp27a1, Ntcp, Oatp1, Mrp3, Bsep and Mrp2) and in proximal to distal intestinal segments (duodenum, jejunum and ileum) (Tgr5, Fxr, Gata4, Asbt, Ilbp, Ostα and Fgf15). 33 Real-time quantitative PCR was performed using Sensifast SYBR master mix (GC biotech, Alphen a/d Rijn, The Netherlands). Gene-specific intron-exon boundary-spanning primers were used and all results were normalised to the housekeeping gene 36B4. All samples were analysed in duplicate and data were analysed according to the 2^−ΔΔCT method. Primer sequences are presented in Supplementary Table S2.
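Relative expression by the 2^−ΔΔCT method can be sketched generically as follows (the Livak calculation, normalising the target gene to the housekeeping gene within each sample; the Ct values in the example are hypothetical):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    # ΔCt = Ct(target) - Ct(reference) within each sample;
    # ΔΔCt = ΔCt(treated) - ΔCt(control); fold change = 2^(-ΔΔCt).
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# A target appearing one cycle later (relative to the reference) in the
# treated sample corresponds to a two-fold reduction:
# fold_change(25, 18, 24, 18) -> 0.5
```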
Statistical analysis
On the basis of the distribution of the data, Student's t-test or Mann-Whitney tests (two-sided) were used to analyse differences between groups. Microbiota analyses were performed as described above. *P < 0.05 or **P < 0.01 were considered statistically significant.
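For the two-sided group comparisons, a minimal sketch of the Mann-Whitney U statistic (pair-counting form, ties counted as 1/2, no tie correction; in practice a statistics package would also supply the p-value):

```python
def mann_whitney_u(x, y):
    # U for sample x: number of pairs (xi, yj) with xi > yj, ties counted as 1/2.
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# By symmetry, U_x + U_y always equals len(x) * len(y).
```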
Should Let Them Go? Study on the Emergency Department Discharge of Patients Who Attempted Suicide
Objective: The purpose of this study was to analyze the characteristics and factors of patients voluntarily discharged after a suicide attempt and to analyze the effectiveness of follow-up measures. Methods: A total of 504 adult patients aged 14 years and over who visited a local emergency medical center from September 1, 2013 to December 31, 2015 were enrolled and retrospectively reviewed. We analyzed the associations of basic characteristics, suicide attempt variables, outcome variables related to suicide attempts, and treatment-related variables with membership in the voluntary discharge group (VDG) compared with the normal discharge group (NDG). Results: Of the total 504 suicide attempters, three hundred eleven (61.7%) patients were in the VDG and 193 (38.2%) were in the NDG. The proportion of patients who completed the community service linkage was 18.7% (36/193) in the NDG, compared with 7.7% (24/311) in the VDG (p<0.05). In addition, the proportion of patients who visited a psychiatric outpatient department was 57.0% (110/193) in the NDG, almost four times the 14.5% (45/311) in the VDG (p<0.05). Conclusion: Over sixty percent of suicide attempters were discharged against medical advice. Further national supportive measures in various aspects, including strengthening case management services, should be considered.
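The group differences in proportions reported in the abstract (e.g., community service linkage completed by 36/193 in the NDG versus 24/311 in the VDG) can be illustrated with a 2×2 chi-square statistic computed from the quoted counts. This is a sketch for illustration, without continuity correction; the paper's own analysis may have differed:

```python
def chi_square_2x2(a, b, c, d):
    # a, b: outcome present/absent in group 1; c, d: same in group 2.
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# NDG: 36 of 193 completed linkage; VDG: 24 of 311.
chi2 = chi_square_2x2(36, 193 - 36, 24, 311 - 24)
# chi2 exceeds 3.84 (the 5% critical value at 1 degree of freedom),
# consistent with the reported p < 0.05.
```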
INTRODUCTION
A suicide attempter is a patient who has survived a suicide attempt. Persons with suicidal thoughts are estimated to be approximately 25 times more likely to attempt suicide than the general population. 1,2 Both individual and social factors have been implicated in suicidal tendencies, as explained by Emile Durkheim in the late 19th century. 3 Many suicide attempters come to the emergency department due to special circumstances, such as psychiatric problems, substance abuse (e.g., alcohol), and physical injury from previous suicide attempts. Since 2013, the Suicide Attempt Case Management Team has been deployed at national emergency centers within the emergency management system of the Republic of Korea. However, although most suicide attempters are treated by a physician, most of them are still discharged voluntarily, contrary to medical advice, after completing the required paperwork. Voluntary discharge of suicidal patients is a problem in terms of patient protection, given that the proportion of such patients readmitted with the same diagnosis is higher than that of other patients. 1 In addition, the domestic mental health service has adopted a policy that minimizes forced hospitalization and treatment of severely mentally ill patients; this policy is based on the recommendations of the United Nations Convention on the Rights of Persons with Disabilities, which emphasizes the elimination of forced hospitalization and treatment. It is therefore difficult for emergency medicine physicians to prevent the voluntary discharge of suicide attempters. 1 Among the Organization for Economic Co-operation and Development (OECD) countries with the highest suicide rates from 1985-2001, Finland and Japan succeeded in decreasing their suicide rates by conducting government suicide prevention projects and comprehensive suicide measures centered on "psychological autopsy". 1 The government of the Republic of Korea also enacted relevant laws in 2012, established the Korean Suicide Prevention Center, and started an emergency department-based suicide attempt management project in 2013. 1 In addition, the Korean Psychological Autopsy Center has been established for the purpose of identifying the causes of suicide deaths. 1 Despite these efforts, the suicide rate in Korea was 27.3/100,000 of the general population in 2014, which is twice the OECD average of 12.0/100,000. 1 Domestic efforts to reduce the suicide rate have been initiated by the Korean government; however, as these programs are at their beginning, domestic studies on the discharge of suicide attempters are rare.
The purpose of this study was therefore to investigate the characteristics and factors related to the discharge of patients and the effectiveness of state-led case management for suicide attempts by hospital emergency departments.
Periods and subjects
Among 506 patients ≥14 years of age who visited the Soonchunhyang University Hospital Emergency Medical Center in Bucheon City, Gyeonggi-do Province, Korea, from September 1, 2013 to December 31, 2015, two patients were excluded from the study because of missing medical severity data. The medical records of the remaining 504 patients were reviewed retrospectively.
Key variables
The statistical relationships between the major variables of the four categories (basic characteristics, suicide attempt-related variables, outcome variables related to suicide attempts, and treatment-related variables) and the type of discharge were investigated. Independent variables were defined as the variables needed to predict a voluntary discharge. The type of discharge of suicide attempters admitted to the emergency department was classified into the voluntary discharge group (VDG) and the normal discharge group (NDG), and these two groups were the dependent variables. The methods of attempted suicide were categorized as ingested poisoning, briquette gas poisoning, hanging, wrist cutting/puncture wounds, death leaps, and other causes. The state of consciousness at presentation was classified as alert, verbal, painful, and unresponsive.
Medical severity was defined as the extent of serious injury, based on the medical records prepared by the emergency department nurse and interviews with two case management service team members. Cases requiring intensive care because of the severity of the injury, even after emergency department treatment, were classified as "high." Patients who were admitted to the general ward after emergency department treatment, or who needed medical attention even after discharge, were classified as "moderate." If there was no physical damage, or only very mild damage (abrasion) usually treated with a simple dressing, the case was classified as "low." After emergency department discharge, the researchers determined the type of insurance by checking the medical records and classified it as "health insurance," "ordinary type," "medical aid," or "others." Patients with a history of psychiatric disease were covered by health insurance and classified as "health insurance." Other suicide attempters in the emergency department were recorded as "ordinary type" and were not covered by insurance. However, if the patient agreed to psychiatric treatment in the emergency department and underwent a psychiatric consultation, "ordinary type" was changed to "health insurance"; this was confirmed by reviewing the medical records after the patient's discharge and checking the changed parts of the records. Even if a psychiatric consultation occurred in the emergency department, the case was classified as "ordinary type" when the patient did not cooperate with the interview, when a diagnosis was difficult to establish, or when psychiatric treatment was refused after discharge.
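As an illustration only (not part of the original analysis), the medical severity rule described above can be expressed as a small lookup function; the category names follow the text, while the function name and the disposition labels ("icu", "ward", "ed_only") are hypothetical.

```python
# Illustrative encoding of the medical-severity rule described in the text.
# The disposition labels ("icu", "ward", "ed_only") are hypothetical inputs.
def classify_severity(disposition: str) -> str:
    if disposition == "icu":    # intensive care required despite ED treatment
        return "high"
    if disposition == "ward":   # admitted to the ward, or care needed after discharge
        return "moderate"
    # no physical damage, or very mild damage treated with a simple dressing
    return "low"

print(classify_severity("icu"))      # high
print(classify_severity("ward"))     # moderate
print(classify_severity("ed_only"))  # low
```

A real chart-review workflow would of course derive the disposition from the medical record rather than a single string label.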
The classification of independent variables and dependent variables are listed below.
Independent variables
1) Baseline characteristics (demographic variables): sex, age, level of education, marital status, housemate, physical status, and the type of insurance.
2) Variables related to suicide attempts: drinking, a history of psychiatric disease, the method of the suicidal attempt, acknowledgment of the suicide attempt, and plans about a future suicide attempt.
3) Suicide attempt-related outcomes per type of discharge: awareness condition, medical severity, and plans for future suicide attempts.
4) Treatment modalities per type of discharge: medical request for neuropsychiatry (NP), case management service, links with community services, psychiatric treatment after discharge, and location of discharge.
Research methods and statistical analyses
We used descriptive statistics to determine how these four categories of independent variables listed below were related to the voluntary discharge.
(1) Baseline characteristics (demographic variables)
(2) Variables related to suicide attempts
(3) Variables of suicide attempt-related outcomes
(4) Treatment-related variables
The chi-square test was used to compare the main variables of categories (1) to (4) according to the type of discharge.
(5) Frequencies of emergency department voluntary discharge according to visit and discharge time differences
Simple descriptive statistics (frequency and percentage) were applied for category (5).
(6) Voluntary discharge outcomes according to univariate logistic regression analyses
Univariate logistic regression analyses were performed for all variables included in the four categories in the suicide attempt group. The variables associated with voluntary discharge and the strength of association with statistical significance were then examined.
(7) Voluntary discharge outcomes according to multivariate logistic regression analyses
After constructing an initial model of multivariate logistic regression using the statistically significant factors from the univariate logistic regression, we selected variables by backward elimination and examined the variables associated with voluntary discharge and their strengths of statistical significance.
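The backward-selection step described above can be sketched as a simple loop that repeatedly refits the model and drops the least significant variable. This is a schematic only: the study itself was analyzed in R, and the `fit_pvalues` callback and the example p-values below are hypothetical (a real refit would change the p-values at every step).

```python
# Schematic backward elimination: drop the variable with the largest
# p-value until every remaining variable satisfies p < alpha.
def backward_select(variables, fit_pvalues, alpha=0.05):
    vars_ = list(variables)
    while vars_:
        pvals = fit_pvalues(vars_)                 # refit model, get p-values
        worst = max(vars_, key=lambda v: pvals[v])
        if pvals[worst] < alpha:                   # all significant -> stop
            break
        vars_.remove(worst)
    return vars_

# Hypothetical p-values for illustration (NOT the study's estimates).
mock = {"education": 0.004, "physical_status": 0.009,
        "psych_history": 0.046, "severity": 0.008,
        "age": 0.21, "marital": 0.30}

def fit_pvalues(vars_):
    # In a real analysis each refit changes the p-values; here they are fixed.
    return {v: mock[v] for v in vars_}

print(backward_select(list(mock), fit_pvalues))
```

With these mock inputs the loop removes "marital" and then "age", leaving four variables below the 0.05 threshold, mirroring the structure (though not the numbers) of the analysis in Table 7.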
The collected data were analyzed using R 3.1.3 (codename "Smooth Sidewalk"; Comprehensive R Archive Network, http://cran.r-project.org/). The chi-square test and univariate and multivariate logistic regression analyses were used as the statistical methods. In the tables, the frequency of each variable, including subgroups, is followed in parentheses by its percentage of the total, voluntary, or normal discharges, compared using the chi-square test. In the tables of univariate and multivariate logistic regression analyses, the odds ratio (OR) for voluntary discharge is given with a 95% confidence interval and a p-value. Percentages were rounded off to two decimal places. A value of p<0.05 was defined as statistically significant.
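For illustration, the 2×2 comparison of post-discharge psychiatric outpatient visits (110/193 in the NDG vs. 45/311 in the VDG, reported in the Results) can be checked in a few lines. The original analysis was performed in R, so this pure-Python sketch is only a stand-in for the chi-square test and the Wald confidence interval for the odds ratio.

```python
import math

# 2x2 table: rows = discharge group, cols = visited outpatient clinic (yes/no)
a, b = 110, 193 - 110   # NDG: visited / not visited
c, d = 45, 311 - 45     # VDG: visited / not visited
n = a + b + c + d

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E
chi2 = 0.0
for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                      (c, c + d, a + c), (d, c + d, b + d)]:
    expected = row * col / n
    chi2 += (obs - expected) ** 2 / expected

# Odds ratio (NDG vs. VDG) with a Wald 95% confidence interval
or_ = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_lo = math.exp(math.log(or_) - 1.96 * se)
ci_hi = math.exp(math.log(or_) + 1.96 * se)

print(f"chi2 = {chi2:.1f}, OR = {or_:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```

The chi-square statistic far exceeds the 3.84 critical value for one degree of freedom, consistent with the p<0.05 reported in the paper; note that the "four times" figure in the text compares proportions (57.0% vs. 14.5%), whereas the odds ratio here is larger (about 7.8).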
Institutional review board
This study was supported by Soonchunhyang University and approved by the Institutional Review Board Committee of Soonchunhyang University Bucheon Hospital (IRB No. 2017-08-019-002).
Baseline characteristics (demographic variables)
Age, level of education, marital status, and physical status were statistically significant baseline characteristics (Table 1). Of the total 504 suicide attempters, 311 (61.7%) were in the VDG and 193 (38.2%) in the NDG. Although there was no statistically significant relationship between sex and the VDG, 211 (67.8%) of the 311 VDG patients were female, more than twice the 100 (32.2%) males. The percentage of patients <20 years of age was higher in the VDG (104/311, 33.4%) than in the NDG (48/193, 24.8%), and the percentage of patients >50 years of age was higher in the NDG (73/193, 37.8%) than in the VDG (86/311, 27.7%) (p<0.05). For level of education and physical status, the percentages of unmeasured (non-responder) patients in the VDG were 74.6% (232/311) and 44.4% (138/311), respectively, higher than the corresponding 37.3% (72/193) and 13.0% (25/193) in the NDG (p<0.05). The percentage of patients with a level of education below high school was at least five times that of university graduates in both the VDG and NDG (p<0.05). Regarding physical status, the percentage of the healthy group was the highest and that of the acute disease group the lowest in both the VDG and NDG (p<0.05). Regarding marital status, the percentage of the married group was the lowest in the VDG, compared with the unmarried and married-but-without-spouse groups, whereas the married group had the highest percentage in the NDG (p<0.05).
Variables related to suicide attempts
Variables related to suicide attempts, including drinking, a history of psychiatric disease, the method of the suicide attempt, acknowledgment of the suicide attempt, and plans for a future suicide attempt, were all significantly associated with voluntary discharge (p<0.05) (Table 2). In the VDG, with the unmeasured group excluded, the percentage of drinking patients was higher than that of non-drinking patients (p<0.05) (Table 2). Likewise, excluding the non-response group, patients with a psychiatric history were more likely to be discharged voluntarily than those without (p<0.05). In addition, the percentage of ingested poisoning in the VDG was 51.8% (161/311), higher than that of any other method of suicide attempt (p<0.05). Excluding the unmeasured group, the percentage of impulsive attempts in the VDG was 46.6% (145/311), more than seven times the percentage of attempts with a plan [6.8% (21/311)] (p<0.05) (Table 2).
Variables of suicide attempt-related outcomes
The suicide attempt-related outcome variables were awareness condition, medical severity, and future suicide attempt plans (Table 3). Of these, all variables except awareness condition showed a significant relationship with voluntary discharge (p<0.05) (Table 3). The lower the medical severity, the higher the percentage of voluntary discharge; regarding future suicide attempt plans, the percentage of patients who did not disclose their intentions was the highest in the VDG (p<0.05) (Table 3).
Treatment-related variables
The variables related to the treatment of suicide attempts were medical requests for NP, case management service, links with community service, psychotherapy after discharge, and the location when discharged. All of these variables showed a statistically significant relationship with voluntary discharge (p<0.05) (Table 4, Figure 1). Of the total 504 suicide attempts, 311 (61.7%) were discharged voluntarily (Table 4, Figure 1). The percentage of patients who agreed to psychiatric treatment was 64.0% (199/311) in the VDG and 69.9% (135/193) in the NDG; both exceeded 60%, with no significant difference between the two groups (Table 4, Figure 1). However, the percentage of patients who refused psychiatric treatment was 29.6% (92/311) in the VDG, more than twice the 12.4% (24/193) in the NDG (Table 4, Figure 1). The percentage of patients who agreed to case management was 68.4% (132/193) in the NDG, more than twice the 34.1% (106/311) in the VDG (Table 4, Figure 1). The percentage of patients with no information about their follow-up intervention was 52.4% (163/311) in the VDG, more than three times the 17.1% (33/193) in the NDG (Table 4, Figure 1). The percentage of patients who visited the outpatient clinic (psychotherapy after discharge) was 57.0% (110/193) in the NDG, approximately four times the 14.5% (45/311) in the VDG (Table 4, Figure 2). The percentage of discharges from a hospital room was 80.8% (156/193) in the NDG, more than three times the 22.2% (69/311) in the VDG (Table 4). In contrast, the percentage of patients discharged from the emergency department was 77.8% (242/311) in the VDG, more than four times the 19.2% (37/193) in the NDG (Table 4).
The frequencies of emergency department voluntary discharge patients according to the visit and discharge time differences
The total number of emergency department VDG patients was 242 (Tables 4 and 5). Of these, 43 patients (17.8%) visited the emergency department between 8:00 am and 5:00 pm, when the direct case management service in the emergency department was available (Table 5). In contrast, 185 patients (82.2%) visited the emergency department between 5:00 pm and 8:00 am, when the direct case management service was unavailable (Table 5). Eighty-five patients (35.1%) were discharged from the emergency department between 8:00 am and 5:00 pm, when the direct case management service was available (Table 5). In contrast, 157 patients (64.9%) were discharged from the emergency department between 5:00 pm and 8:00 am, when the direct case management service was unavailable (Table 5).
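As a sketch of the time-window tabulation above (taking the case management service to be available from 8:00 to 17:00, as stated in the text), a visit hour can be mapped to service availability and the reported discharge counts converted to percentages; the helper names are hypothetical.

```python
# The case management service is described as available from 8:00 to 17:00.
def service_available(hour: int) -> bool:
    return 8 <= hour < 17

# Discharge counts reported in Table 5 for the 242 ED voluntary discharges.
on_hours, off_hours = 85, 157
total = on_hours + off_hours
pct_on = round(100 * on_hours / total, 1)
pct_off = round(100 * off_hours / total, 1)

print(total, pct_on, pct_off)                        # 242 35.1 64.9
print(service_available(9), service_available(20))   # True False
```

The percentages reproduce the 35.1% and 64.9% reported for discharges; most voluntary discharges fall in the off-hours window, when no case manager could intervene.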
Voluntary discharge outcomes according to univariate logistic regression analyses
In addition to treatment-related variables, univariate logistic regression was performed on variables that were significant using the chi-square test for baseline characteristics, variables related to suicide attempts, and suicide attempt-related outcome variables (Table 6). There were significant differences in age, level of education, marital status, physical status, drinking status, psychiatric disease history, method of suicidal attempt, acknowledgement about the suicidal attempt, plans for future suicidal attempts, and the medical severity for voluntary discharges (p<0.05) ( Table 6).
Voluntary discharge outcomes according to multivariate logistic regression analyses
The level of education, physical status, psychiatric disease history, and medical severity remained significant (p<0.05) in multivariate logistic regression analyses of the variables that were significant in the univariate tests for voluntary discharge (Table 7). The OR for voluntary discharge of the no-response group regarding level of education was 3.32 times that of college-educated or higher patients (reference group) (p=0.004) (Table 7). Patients who did not respond to the physical status question had an OR 2.22 times that of the healthy group (p=0.009) (Table 7). Patients with a psychiatric disease history had an OR 1.62 times higher than patients without a psychiatric disease history (p=0.046) (Table 7). The OR of the low medical severity group was 2.1 times that of patients with high medical severity (p=0.008) (Table 7).
DISCUSSION
The voluntary discharge of patients accounts for approximately 2% of all hospital discharges, which creates a quality problem for health care. 4 In a general medical service study, the percentage of rehospitalization within 15 days with the same diagnosis was almost seven times higher in the voluntary discharge group than in the normally treated group. 4,5 In the present study, we classified the factors affecting voluntary and normal discharge of suicide attempt patients, including the emergency department and hospital admission [ward or intensive care unit (ICU)], as basic characteristics (demographic variables), suicide attempt-related variables, suicide attempt-related outcome variables per type of discharge, and treatment modality variables per type of discharge. Of these, the demographic, suicide attempt-related, and outcome variables served as predictors of voluntary discharge, whereas the treatment modality variables were consequences of, rather than predictors of, the type of discharge. Regarding demographic variables, the percentage of VDG patients differed significantly depending on age, level of education, marital status, and physical health. The percentage of voluntary discharge was high among patients ≤20 years of age, and the percentage of normal discharge was high among patients ≥50 years of age; the younger the patient, the higher the probability of voluntary discharge. In elderly patients ≥50 years of age, the patient's or caregiver's concerns about physical health may have increased the percentages of admissions and normal discharges. The percentage of voluntary discharge in the married group was the lowest in the VDG, which is presumed to reflect the emotional intervention of the spouse.
When alcohol was involved in suicide attempts, the percentage of drinkers in the VDG was twice that of non-drinkers when unmeasured patients were excluded. This is consistent with previous retrospective studies, which consistently noted drug or alcohol problems in the discharge records of patients who left against medical advice. 4 The most frequent method of suicide attempt was ingested poisoning, including briquette gas poisoning, followed by wrist cutting/puncture wounds; the same order was observed within the VDG. In a Korean study of adolescents aged 10-19 years, ingested poisoning was the most common method of attempting suicide, hanging and falling were the most lethal methods, and the percentage of voluntary discharge from the emergency department was 22.8%. 6 Patients with a psychiatric history were more likely to be discharged against medical advice. This result is partially consistent with a study reporting that discharge was most commonly predicted by patient factors such as pessimistic attitudes toward treatment; aggressive, destructive, and anti-social behavior; multiple voluntary discharges from previous hospitalizations; young age; male sex; being unmarried; and an accompanying personality disorder or substance abuse diagnosis. 7 Other studies have reported that the percentage of patients discharged voluntarily in psychiatric groups ranged from 3-51% (average, 17%), much higher than in medical groups. 7 In the NDG involving ingested poisoning, medical treatment was often necessary, mainly because the ingested poison involved sleeping pills or psychiatric drugs, leading to decreased consciousness that prevented early discharge.
Awareness condition was not significantly associated with the voluntary discharge of the patient, in contrast to lower medical severity, which resulted in a significantly higher percentage of voluntary discharges. The unmeasured group with future plans for a suicide attempt had the highest percentage of patients in the VDG. In these patients, the non-responders were judged to be untreated patients, and because of the highest percentage of voluntary discharge of untreated patients, it was thought that a 24-h care system was needed to separately manage suicide attempters in the hospital. Suicide attempters were mostly impulsive. Excluding non-responders, impulsive patients were more likely to be voluntarily discharged.
The variables related to treatment, including psychiatric referral, consent, and intervention of a case management service; link with community service; post-discharge psychiatric treatment; and the location of the discharge (emergency department, general ward, or the ICU) were significantly associated with the type of discharge of the suicide attempter. Of the total of 311 voluntary discharge patients, 242 (77.8%) were discharged from the emergency department. The percentage of patients who refused psychiatric consultation in the VDG was more than twice that in the group who refused psychiatric consultation in the NDG. This was consistent with previous results from Holden et al. 8 that indicated the need for early psychiatric intervention as there was a tendency for medical patients to be less voluntarily discharged. The percentage of patients who agreed to psychiatric consultation in the VDG was similar to that of the NDG. In contrast, the percentage of the group who agreed to case management services in the NDG was more than twice as high as that in the VDG. The percentage of patients who agreed to psychiatric outpatient visits in the NDG was four times higher than that of patients in the VDG. These results indicated that the consent of psychiatric consultation alone did not have a significant effect on the patient's discharge pattern, but it was found that when the case management service was linked, it increased the percentage of connections among normal discharge, community links, and psychiatric outpatient visits. These results indicated that most of the normally discharged patients were discharged from entering the hospital ward, and that the case manager could easily intervene in a stable state, so that the percentage of agreement was high. However, in cases of voluntary discharge, the agreement percentage of the post-management service was lower than that of the NDG. 
In particular, when a patient was discharged voluntarily from the emergency department, there were many cases in which the case management team member was not able to intervene during the off-duty hours, ranging from 5:00 pm to 8:00 am. Based on these findings, it is important for the national government to establish and fund a system to manage suicide attempters for 24 h.
The percentages of the unmeasured or non-responder group for variables such as level of education, physical status, drinking status, psychiatric disease history, hospitalization history due to suicide attempts, plans for future suicide attempts, case management services, links with community services, and psychotherapy after discharge were higher in the VDG than in the NDG. VDG patients tended not to convey their condition accurately, so the coping process of the suicide attempter within the hospital care system could not be properly assessed.
Whether the hospital had a psychiatric closed ward was also considered as a contributing factor for voluntary discharge of suicide attempters. In most of the admitted suicide attempters, in the course of the retrospective chart review of this study, after medical or surgical treatment was terminated, the patient was referred to a psychiatric department for treatment. In the course of this process, it was frequently found that patients and their caregivers were not compliant with the procedures and were discharged voluntarily from the hospital when they were told that they needed to be transferred to other hospitals with closed wards.
Interviews with the case management team staff at this research institute indicated that this team needed active participation by the medical staff. This meant that when the case management team was later contacted, the patient and caregiver showed a favorable attitude that led to post-management and community links with positive results.
Based on the results of the multivariate logistic regression analyses, the ORs of the no-response group relative to the college graduation or higher group, the no-response group for physical status relative to the healthy group, the psychiatric disease history group relative to the no psychiatric disease history group, and the low medical severity group relative to the high medical severity group were all proportional to the percentages of voluntary discharge. Knowing the characteristics of these variables and studying past investigations should help to reduce the voluntary discharge of patients from the hospital. Some studies have suggested that early detection, discussion, and counseling are the preferred methods for reducing discharges among patients with a history of drug or alcohol abuse. 4 Targum et al. 9 reported an approximately 30% decrease in all voluntary discharges of psychiatric inpatients from private hospitals after appointing a nurse as a patient spokesperson, whose responsibility was to resolve the patient's concerns about hospitalization, identify fears, and alert the hospital staff to these concerns and fears. Alfandre suggested that, to prevent voluntary discharges, the patient should be interviewed to recognize mental factors including substance addiction. 4 Another study reported that physicians who started care with a suspicious attitude tended to meet patients under uncomfortable conditions and were likely to terminate treatment under poor conditions. All patients should therefore be granted empathy regardless of diagnosis, and should also be entitled to a more accurate and effective patient assessment. 10,11
Studies have shown that when a medical officer becomes angry while dealing with a difficult patient with persistent physical complaints, the increased anger and insomnia may be associated with dependence or loss of disinhibition that prevents communication, which in turn distracts the patient and, in the worst cases, dismisses the caregiver, leading to voluntary discharge without adequate treatment. 4 However, other studies have shown that if the medical staff understand the anger-like feelings that naturally occur while caring for these demanding patients, they can better refocus their discussions with the patient, and a psychiatric interview can be helpful. 4 Another study reported that patients who decide on voluntary discharge for personal reasons may prioritize financial interests over their health, and such decisions should be respected if they are informed. 4 Levy et al. 12 suggested that if the medical staff cannot prevent the voluntary discharge of a patient, they should perform three steps to absolve themselves of legal responsibility. First, the medical staff should confirm that the patient has a normal mental status and is in a condition to refuse treatment. Second, the medical staff should explain all potential risks to the patient and the patient's guardian or caregiver. Finally, the medical staff should document the discharge against medical advice, with the patient's consent, in the chart.
This study was based on a retrospective medical record survey at a single hospital over a specific period, so selection bias and incomplete data were possible. For example, patients recorded as "ordinary type" may have had local or company enrollment, but it was impossible to distinguish between them. To overcome these limitations, prospective, multidisciplinary studies will be necessary.
Because suicide is caused by multiple causes, various forms of follow-up care are needed after discharge, and resources such as case management and community centers should be available.
In conclusion, 311 (61.7%) of the total 504 suicide attempters were discharged voluntarily from the hospital. The ORs of the non-response group relative to the college graduation or higher group, the non-response group for physical status relative to the healthy group, the psychiatric disease history group relative to the no psychiatric disease history group, and the low medical severity group relative to the high medical severity group were all proportional to voluntary discharge. The VDG had a very low percentage of post-discharge intervention in the ongoing treatment pathways of psychiatric outpatient care, case management, and community links. Although case management services prevented and reduced the voluntary discharge of suicide attempters and increased the links to community and outpatient mental health services after discharge, it remained unknown whether at least 80% of all suicide attempters were linked to community services. Based on these results, further national supportive measures, including strengthening case management services, should be considered to reduce the voluntary discharge of suicide attempters.
Investigating the Mechanical and Durability Characteristics of Fly Ash Foam Concrete
Although fly ash foam concrete (FAFC) is lightweight, heat-retaining, and insulating, its applications are constrained by its low strength and short service life. The effects of various dosage ratios of the foaming agent (i.e., hydrogen peroxide), silica fume, and polypropylene fiber on the dry density, compressive strength, thermal insulation performance, pore structure parameters, and durability of FAFC were analyzed in this study, which sought to address the issues of low strength and low durability of FAFC. According to the findings, there is a negative correlation between the amount of hydrogen peroxide (as the foaming agent) and compressive strength, and, as the silica fume and polypropylene fiber (PP fiber) contents rise, the strength first rises and then falls. The distribution of pore sizes gradually shifts from being dominated by small pores to large pores as the amount of foaming agent increases, while the porosity and average pore size gradually decrease. When the hydrogen peroxide content is 5%, the pore shape factor is at its lowest. As the silica fume and PP fiber contents increased, the pore size distribution was first dominated by small pores and thereafter by large pores, while the porosity, average pore size, and pore shape factor all decreased before increasing. Additionally, the effect of PP fiber on freeze-thaw damage to FAFC was investigated. The findings indicate that the freeze-thaw failure of FAFC is essentially frost heave failure of the pore wall. The use of PP fiber is crucial for enhancing FAFC's ability to withstand frost, with the best frost resistance achieved at 0.4% PP fiber content. In conclusion, the ideal ratio for overall performance was found to be 5% hydrogen peroxide content, 4% silica fume content, and 0.1% polypropylene fiber content. The results could be applied in different fields, such as construction and sustainable materials.
Introduction
In recent years, with China's rapid economic and social development, infrastructure has also developed rapidly, energy consumption has accelerated, and the energy crisis has been expanding, causing widespread concern. To reduce the consumption of resources and alleviate the energy crisis, we can start by decreasing the energy consumption of buildings. The main advantages of foam concrete (FC) are its light weight, excellent sound and heat insulation properties, high fire resistance, low price, and ease of pumping and application [1]. The use of FC can also reduce carbon dioxide emissions [2,3], so it is widely used in vibration damping, insulation, mine backfilling, structural seismic protection, and other building applications. The total production of fly ash and silica fume in the Ningxia region reached 1800 × 10^4 t in 2021. Its considerable accumulation not only occupies land but also causes severe environmental pollution, so how to obtain value from this waste and realize its resource utilization is a problem worth studying. Research has shown that active substances such as SiO2 and Al2O3 in fly ash and silica fume can take part in pozzolanic reactions, which makes these wastes candidates for use as supplementary cementitious materials.
Chemical Compositions
The chemical composition and physical indexes of the cement are shown in Table 1. Class II fly ash was produced by the Ningxia Yinchuan Thermal Power Plant, and its chemical composition and physical indexes are shown in Table 2. Silica fume was produced by the Ningxia Zhongtong Weiye Company, and its chemical composition and physical indexes are shown in Table 3. The hydrogen peroxide was produced by Tianjin Comio Chemical Reagent Co. The PP fiber has a diameter of 15 µm, a tensile strength of 460 MPa, and a length of 12 mm. Tap water was used for the tests.
Matching Ratio
Appropriate amounts of silica fume and PP fiber can significantly improve the compressive strength and splitting tensile strength of FAFC and reduce its brittleness. Therefore, this test program was designed to study the physical and mechanical properties and pore structure of FAFC with three variables: hydrogen peroxide dosage, silica fume dosage, and PP fiber dosage. The specific mix proportions are shown in Table 4.
Specimen Preparation
Physical foaming: an aqueous solution of foaming agent is converted into foam by mechanical stirring, and the foam is then added to the slurry. The principle of physical foaming is that a surfactant or other surface-active substance forms an electrical double-layer structure in the solvent that wraps air into bubbles. At the molecular level, surfactants consist of two distinct parts: an oleophilic group (also known as a hydrophobic group) and a hydrophilic group (also known as an oleophobic group). Because of this structure, when a surfactant is dissolved in water, the hydrophilic group is attracted by the water molecules while the oleophilic group is repelled by them. To reach a stable state, the surfactant molecules therefore accumulate at the surface of the solution, with the oleophilic groups extending into the gas phase and the hydrophilic groups submerged in the water; when a concrete foaming agent is dissolved in water, mechanical stirring introduces air and produces a single-bubble foam. The key to forming chemically foamed concrete, by contrast, is that the gas generation rate of the foaming agent matches the setting and hardening rate of the slurry, reaching a dynamic balance. Firstly, the foaming agent is added to the slurry and properly stirred to ensure that it is evenly dispersed. Under the action of the initiator, the foaming agent undergoes chemical reactions that produce gas, forming numerous, uniformly distributed, independent gas sources. Gas pressure then gradually builds in the areas around each gas source. When the gas pressure exceeds the ultimate shear stress of the slurry (the sum of the viscous resistance and the hydrostatic pressure), the gas source begins to expand rapidly, forming an independent bubble, and the slurry begins to expand.
During gas expansion, the hydration of the gel materials increases the slurry consistency, so the resistance that the expansion must overcome keeps growing; at the same time, as the reactants are consumed, the driving force for expansion diminishes, so the inflation process shifts from acceleration to a gentle, slow pace and gradually stagnates. Finally, the expansion is complete and the foamed concrete is obtained. The chemical foaming method differs from the physical foaming method in that the gas bubbles are produced by a chemical reaction. The main characteristics of chemical foaming are bubbles without a distinct bubble wall and poor bubble stability: the hydration products of cement and other materials must act as the base material to stabilize the bubbles, and the bubble diameter depends on the gas pressure difference and the amount of foaming material, which are difficult to control. However, the chemical foaming method has great advantages in terms of the strength, water absorption, and other properties of low-density foamed concrete [33].
In this study, FAFC was prepared by the chemical foaming method. Firstly, the cement, fly ash, silica fume, water-reducing agent, coagulant, foam stabilizer, PP fiber, water, and hydrogen peroxide were weighed on an electronic scale according to the pre-calculated ratios and placed in prepared containers for the preparation of the FAFC test blocks. Next, the cement, fly ash, silica fume, foam stabilizer (calcium stearate), coagulant (lithium carbonate), and PP fiber were mixed in a high-speed disperser for 1 min. The weighed water and water-reducing agent were then added to the previously mixed dry materials and mixing continued for 1 min; mixing was paused to scrape the cement mixture adhering to the cylinder wall back into the slurry with a spatula, and then continued for another 1 min. Then, the pre-weighed hydrogen peroxide was added and stirred for approximately 6~7 s until it was evenly dispersed in the cement mortar; stirring was stopped, and the mixed FAFC was quickly placed into pre-greased 100 mm × 100 mm × 100 mm and 30 cm × 30 cm × 3 cm molds, the tops of which were covered with plastic film to avoid moisture loss. Finally, the molds were immediately placed into a standard curing box and demolded after 48 h, after which the specimens were cured in the standard curing box at 20 ± 2 °C and a relative humidity of more than 95% until the subsequent tests.
Figure 1a,b shows FAFC test blocks with dimensions of 10 cm × 10 cm × 10 cm, and Figure 1c,d shows FAFC test blocks with dimensions of 30 cm × 30 cm × 3 cm. The block in Figure 1a is cut flat on the surface and demolded to become the block shown in Figure 1b; the block in Figure 1c is cut after demolding to become the block shown in Figure 1d. Figure 1a is a standard test piece, and Figure 1c is made in a self-made mold for the thermal conductivity specimens.
Test Methods
To test the performance of the prepared specimens, dry density, compressive strength, and thermal conductivity tests, an ultrasonic test, a pore structure test, and a freeze-thaw cycle test are performed.
(1) Dry density test The dry density determination method is carried out according to "Foam Concrete" JG/T266-2011 [34], and the calculation formula is ρ = M/V × 10⁶ (1) where ρ is the dry density of the FAFC specimen, kg/m³; M is the drying mass of the FAFC specimen, g; V is the volume of the FAFC specimen, mm³.
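As a quick numerical check of Equation (1): converting g/mm³ to kg/m³ requires a factor of 10⁶ (1 g/mm³ = 10⁶ kg/m³). The mass below is a hypothetical example value for a 100 mm cube, not a measurement from this study.

```python
def dry_density(mass_g: float, volume_mm3: float) -> float:
    """Dry density in kg/m^3 from mass in g and volume in mm^3,
    per Eq. (1): rho = M/V, converted with the factor 10^6."""
    return mass_g / volume_mm3 * 1e6

# Hypothetical example: a 100 mm x 100 mm x 100 mm cube (V = 1e6 mm^3)
# with a dried mass of 205 g corresponds to 205 kg/m^3.
print(round(dry_density(205.0, 100 * 100 * 100), 3))  # 205.0
```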
(2) Compressive strength test A compressive strength testing machine was used to test specimens with dimensions of 100 mm × 100 mm × 100 mm, referring to JG/T266-2011 [34]. Three samples were tested for each mix ratio, and the average value of the three was taken as the final test strength.
(3) Thermal conductivity test The CD-DR3030 thermal conductivity tester was used, and the thermal conductivity was determined according to the GB/T10295-2008 [35] specification. The specimen size was 30 cm × 30 cm × 3 cm, with three specimens in each group. According to the specification, the thermal conductivity tester must be calibrated with professional thermal conductivity reference samples before use to ensure the accuracy of the measured FAFC thermal conductivity data. Standard models from the Building Materials Industry Technical Supervision and Research Center were used for calibration.
(1) The standard model is a yellow, medium-alkali glass fiber resin composite plate with dimensions of 30 cm × 30 cm × (25~27) mm and a density of 110~130 kg/m³. (2) The reference plate was first dried in a drying oven at 100 °C for 8 h. After its mass became constant, the reference plate was removed, and its average thermal conductivity was measured at a mean temperature of 298 K according to the specification GB/T10294-2008 [36], giving a standard thermal conductivity value of 0.0328 W/(m·K).
(4) Ultrasonic test
This test uses the non-metallic ultrasonic testing analyzer produced by the Beijing Kangkorui Company. After setting the test block size parameters, the transmitting and receiving probes are attached to opposite faces of the test block; the sampling key is then pressed and the ultrasonic testing results are stored. Five points were tested on each block.
(5) Pore structure test Images of the FAFC surface pore structure were taken with an electron microscope and binarized using Photoshop software. The binarized images were processed with Image-Pro Plus image processing software, and the pore structure characteristic parameters were obtained by analysis. The effects of different hydrogen peroxide, silica fume, and PP fiber dosages on the main pore structure parameters of FAFC were studied.
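The binarize-then-count step of this workflow can be sketched in plain Python; the study itself used Photoshop and Image-Pro Plus, so the threshold and the synthetic "image" below are purely illustrative assumptions.

```python
def binarize(gray, threshold=128):
    """Treat pixels darker than the threshold as pore pixels (True)."""
    return [[px < threshold for px in row] for row in gray]

def porosity(mask):
    """Surface porosity = pore-pixel fraction of the cross-section image."""
    flat = [px for row in mask for px in row]
    return sum(flat) / len(flat)

# Synthetic 8 x 8 grayscale "image": a dark 4 x 4 pore on a bright matrix.
img = [[50 if 2 <= r < 6 and 2 <= c < 6 else 200 for c in range(8)]
       for r in range(8)]
print(porosity(binarize(img)))  # 16 pore pixels / 64 pixels = 0.25
```

The same pore mask is what an image-analysis package would then segment into individual pores to obtain the average pore size and pore shape factor.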
(6) Freeze-thaw cycle test
Referring to the specification JGJ/T341-2014 [37], the frost resistance of FAFC test blocks was studied over 25 freeze-thaw cycles. The dry density, compressive strength, and ultrasonic wave velocity of the dried FAFC test blocks were measured at the end of every 5 cycles to study the effects of the PP fiber dosage and the number of freeze-thaw cycles on the freeze-thaw resistance of FAFC, evaluated via the mass loss rate, compressive strength loss rate, and ultrasonic wave velocity. The relationship between the pore structure and the frost resistance was analyzed to provide a reference for improving the frost resistance of FAFC.
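The freeze-thaw indicators reduce to simple percentage-loss arithmetic relative to the pre-cycling value; the specimen masses below are hypothetical illustrative values, not data from this study.

```python
def loss_rate(initial: float, current: float) -> float:
    """Percentage loss of a property relative to its pre-freeze-thaw value."""
    return (initial - current) / initial * 100.0

# Hypothetical dried masses (g) of one specimen, recorded every 5 cycles.
mass = {0: 205.0, 5: 203.4, 10: 201.1, 15: 198.6, 20: 195.0, 25: 190.9}
mass_loss = {n: round(loss_rate(mass[0], m), 2) for n, m in mass.items()}
print(mass_loss[25])  # 6.88
```

The compressive strength loss rate and the ultrasonic velocity loss rate follow the same formula with the corresponding measured quantities.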
Results and Discussion
3.1. Effect of External Admixture on Dry Density
3.1.1. Analysis of Silica Fume Dosing and Hydrogen Peroxide on Dry Density
As seen in Figure 2, the dry density of FAFC is negatively correlated with the hydrogen peroxide dosage. The dry density of FAFC was 241, 225, 205, 191, and 180 kg/m³ at hydrogen peroxide dosages of 4%, 4.5%, 5%, 5.5%, and 6%, respectively, corresponding to successive reduction rates of 6.64%, 8.89%, 6.83%, and 5.76%. The dry density of FAFC decreased with increasing foaming agent content, which is consistent with the research results of Su et al. [38]. The results show that the hydrogen peroxide dosage significantly affects the dry density of FAFC because, as it increases, the number of bubbles per unit volume of FAFC increases, the mass of cement decreases, and the dry density decreases.
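The successive reduction rates quoted here follow directly from the reported density series; a quick check:

```python
# Dry densities (kg/m^3) at H2O2 dosages of 4, 4.5, 5, 5.5, and 6%.
densities = [241, 225, 205, 191, 180]

# Reduction of each step relative to the previous density, in percent.
rates = [round((a - b) / a * 100, 2) for a, b in zip(densities, densities[1:])]
print(rates)  # [6.64, 8.89, 6.83, 5.76]
```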
The dry density of FAFC is also negatively correlated with the silica fume dosage in Figure 2. The dry density decreased from 218 kg/m³ to 202 kg/m³, a reduction of 7.34%, when the silica fume dosage increased from 0 to 8%; thus, the silica fume dosage has little effect on the dry density. Although, with increasing silica fume dosage, silica fume particles fill the pores of the FAFC slurry and increase the compactness of the FAFC pore walls, the pozzolanic activity of silica fume promotes the secondary hydration reaction of cement and enhances the bubble stability of the test block, so the dry density of the test block is reduced; however, the amount of silica fume substituted for cement is not large, so the effect of the silica fume dosage on the FAFC dry density is not significant. The foaming agent content can therefore significantly affect the density, while the influence of silica fume is small. However, Liu et al. [39] found that the dry density of foamed concrete increased with increasing silica fume content, which may be related to the raw materials used to prepare the foamed concrete.
Analysis of PP Fiber Dosage's Effect on Dry Density
As shown in Figure 3, the dry density of FAFC is negatively correlated with the PP fiber dosage. The dry density of the specimen decreased from 212 kg/m³ to 202 kg/m³, a slight change of 4.72%, when the PP fiber dosage increased from 0 to 0.4%. These findings are ascribed to the admixture of PP fiber: the three-dimensional spatial mesh structure it builds in FAFC improves the stability of the foam in the slurry, so the dry density decreased with increasing PP fiber admixture. Part of the PP fiber was present in the cement slurry agglomerate. In the FAFC slurry, the essential material is cement, so the FAFC slurry is referred to as cement slurry. With the introduction of additional gas, the surrounding tiny bubbles rupture and fuse into large bubbles [40], and the dry density decreases.

Analysis of the Effect of Foaming Agent Admixture on Compressive Strength
Figure 4 shows that the compressive strength of FAFC at the same age decreased significantly as the amount of hydrogen peroxide increased. On the one hand, with the rise in the hydrogen peroxide admixture, the cement mass per unit volume decreases, so the cementitious substances generated by the hydration reaction, such as hydrated calcium silicate and hydrated calcium aluminate, decrease. Thus, the density of the hydration products decreases, the pore walls become thinner, and the compressive strength of the FAFC test block decreases. On the other hand, with the increase in the amount of hydrogen peroxide, the rate of bubble generation in the test block accelerates; the number of pores per unit volume increases; the pore walls become thinner; the pores easily break and fuse; the number of connected pores grows, and the increase in (harmful) pores leads to more pore structure defects and lower compressive strength [41].
The compressive strength of FAFC at every hydrogen peroxide admixture increased with the extension of the curing age. This is because fly ash consists of tiny spherical particles whose ball-bearing effect improves the fluidity and uniformity of the FAFC slurry, filling the voids of the foam concrete slurry and improving the compactness of FAFC. Fly ash also has pozzolanic activity: the amorphous silica it contains can react with cement and water to generate low-alkalinity hydrated calcium silicate and other cementitious substances, and, as the curing age extends, the hydration products grow, the microstructure gradually becomes denser, and, macroscopically, the compressive strength of FAFC gradually increases. The growth rate over 7-28 d was relatively larger than during other periods because the cement hydration reaction produces Ca(OH)2, which undergoes a secondary hydration reaction with the SiO2, Al2O3, and other reactive oxides in the fly ash, generating C-S-H, AFt, and other hydration products. With further extension of the curing age, the active material decreases, the rate of the secondary hydration reaction slows down, and the strength growth rate decreases [42].
Analysis of Silica Fume Admixture's Effect on Compressive Strength
As seen from Figure 5, the compressive strength of FAFC first increases and then decreases with increasing silica fume admixture, indicating that an appropriate amount of silica fume is beneficial to the compressive strength of FAFC. This can be attributed to the fact that silica fume contains a large amount of active SiO2, which can react with Ca(OH)2 to produce hydrated calcium silicate, a cementitious substance that improves the compactness and strength of the pore walls. However, with increasing silica fume admixture, the slurry fluidity decreases, molding of the FAFC test block becomes difficult, and internal defects increase. Additionally, the increase in the silica fume admixture leads to a relative decrease in cement content, so the amount of Ca(OH)2 generated by the cement hydration reaction decreases, the content of cementitious substances such as hydrated calcium silicate and hydrated calcium aluminate is reduced, and the compressive strength of FAFC decreases.
With the extension of the curing age, the compressive strength of FAFC samples at all silica fume admixtures showed an increasing trend. These findings are ascribed to silica fume's effect on FAFC, which mainly comprises its micro-filling effect and its pozzolanic effect [43]. Silica fume particles, with their small particle size and large specific surface area, can fill the pores of FAFC and improve its density. Meanwhile, the Ca(OH)2 generated in large amounts by the hydration reaction of cement reacts with the active SiO2 and Al2O3 in the silica fume to produce cementitious substances such as hydrated calcium silicate and hydrated calcium aluminate. The alkaline environment inside the slurry accelerates the pozzolanic reaction of silica fume and improves the compressive strength of FAFC.
Analysis of PP Fiber Dosage's Effect on Compressive Strength
As shown in Figure 6, the compressive strength of FAFC at the same age tends to first increase and then decrease with increasing fiber admixture. Geng Ling [44] found similar laws when studying the influence of polypropylene fiber and glass fiber content on the compressive strength of ultra-light foam concrete. The reason is that, when the admixture of PP fiber rose from 0 to 0.1%, the fibers built a three-dimensional spatial mesh structure in FAFC that acted as a bridging skeleton, protecting the foam from rupture and improving its stability; the fibers and the hydration products were closely connected into a whole, reducing the development of cracks, while the fibers also cut the foam so that the pore size distribution became uniform, improving the compressive strength of FAFC. From 0.1 to 0.4%, however, the fiber admixture was too large to disperse evenly in the slurry; local aggregation occurred, which caused the foam to rupture and the pore structure to be damaged, and the hydration was uneven, resulting in a reduction in the FAFC compressive strength.
With the extension of age, the compressive strength of FAFC at every PP fiber dosage gradually increased because, in FAFC, the slurry wraps the PP fibers. The active substances in the slurry undergo a hydration reaction to produce hydrated calcium silicate and other cementing substances, which are closely connected with the PP fibers. After the test block sets and hardens, the connection between the PP fibers and the pore walls becomes tighter, which reduces the generation of cracks and further increases the compactness of the FAFC. The tensile strength of PP fiber is significant: in the compressive strength test, the PP fibers resisted the pressure together with the pore walls, which enhanced the compressive capacity of FAFC and increased its compressive strength.

Analysis of Foaming Agent Dosage's Effect on Thermal Conductivity
From Figure 7, the thermal conductivity of FAFC tends to decrease with increasing hydrogen peroxide dosing. As the dosage of hydrogen peroxide rose from 4 to 6%, the maximum reduction in thermal conductivity was 7.79%. As the density of FAFC decreases, the pore walls become thinner; the foam content per unit volume inside the foam concrete increases, and the foam condenses and hardens into the pores of the foam concrete, preventing heat from propagating by convection inside the structure, reducing the thermal conductivity and enhancing the thermal insulation [45].
Analysis of Silica Fume Dosage's Effect on Thermal Conductivity
It can be observed from Figure 8 that the thermal conductivity tends to first increase and then decrease with increasing silica fume dosing, and it was largest at a silica fume dosage of 4%. This is related to the pore wall compactness and the pore size. The filling effect of silica fume increases the pore wall compactness, and the SiO2 in silica fume promotes the secondary hydration reaction of cement, increasing it further; the greater the pore wall compactness, the higher the heat transfer capacity and the greater the thermal conductivity. At the same time, incorporating silica fume increases the number of tiny pores, facilitating heat transfer and increasing the thermal conductivity. A further increase in the silica fume admixture slows the pozzolanic reaction and lowers the pore wall compactness, which is not conducive to heat transfer; it also lowers the slurry fluidity and enlarges the pore size, and, because large pores have a lower heat transfer capacity, the thermal conductivity is reduced [46].
Analysis of PP Fiber Dosage's Effect on Thermal Conductivity
The thermal conductivity first increased and then decreased with the rise in the PP fiber admixture, as visualized in Figure 9. This is because the PP fibers disperse uniformly in the FAFC and improve the uniformity of the cement slurry; the foam stability improves, foam fusion is controlled, and the pore sizes stay relatively small. The well-integrated PP fibers increase the pore wall compactness and thus the thermal conductivity.
In the study, when the amount of PP fiber blending increased from 0.1 to 0.4%, the dispersion became uneven, which led to cracking around the PP fibers; the pore size increased, the compactness of the pore walls decreased, and the pore structure degraded. However, since gas has a lower thermal conductivity than a solid, the insulation performance improved.
Variation Law of Foam Agent Admixture and Pore Structure of Foam Concrete

Figure 10 shows the pore structure before and after the cross-sectional treatment when the foam dosing is 4%. Figure 11 visualizes how the porosity of the different mix ratios varies with the foaming agent dosage. The FAFC porosity increased significantly and nearly linearly with the blowing agent dosage, ranging from 82.69 to 91.48%. This is because, as the blowing agent admixture increases, the amount of foam per unit volume of FAFC increases during foaming and during the setting and hardening of the slurry, the mass of cementitious material decreases relatively, and the pore walls become thinner, which increases the FAFC porosity.

Figure 11. Effect of foam dosing on porosity.

As visualized in Figure 12, the average pore size of FAFC increased with the hydrogen peroxide admixture, ranging from 694.08 to 1506.03 µm. With more hydrogen peroxide, more foam was generated, the cementitious material surrounding the foam decreased, and the foam became less stable [47]; unstable foam easily fuses with the surrounding foam, increasing the average pore size. Moreover, the more hydrogen peroxide that was mixed in, the greater the rate of foam generation and the greater the impact force, so the average pore size after the slurry solidified and hardened increased. If the volume of fused foam or the average pore size becomes too large, the test block will partially collapse during foaming; therefore, the amount of hydrogen peroxide should not be too large [48].
As shown in Figure 13, the hydrogen peroxide admixture affected the pore size distribution of the foam concrete. As the hydrogen peroxide admixture increased, the FAFC pore size distribution shifted towards larger pore sizes, with a further rise in the admixture producing more large pores. With increasing hydrogen peroxide dosage, the proportion of tiny pores with a pore size of less than 900 µm in FAFC gradually decreased, while the dominant pore size interval (the pore size range containing the most significant percentage of pores) increased continuously. When the hydrogen peroxide dose was 5%, the dominant pore size interval was 900~1200 µm; when it was 5.5%, the dominant interval was 1200~1500 µm; and when it was 6%, the dominant interval was greater than 1500 µm.

The pore shape factor of FAFC tended to decrease and then increase with the increase in the hydrogen peroxide admixture, as visualized in Figure 14. As the hydrogen peroxide admixture increased, the amount of foam increased; large numbers of bubbles came into contact with each other, fused into large pores, and squeezed against each other, so the pore shapes deformed and the pore shape factor increased.

Figure 14. Effect of hydrogen peroxide on pore shape factor.

The Variation Law of Silica Fume Dosing on the Pore Structure of Foam Concrete

Figure 15 shows the pore structure before and after the cross-sectional treatment when the silica fume dosage is 4%. It can be observed from Figure 16 that, with the increase in the silica fume admixture, the FAFC porosity first decreased and then increased; the porosity kept falling as the silica fume admixture rose from 0 to 4%. Because silica fume has a micro-aggregate filling effect and volcanic ash (pozzolanic) activity, silica fume particles filled the pores of the FAFC slurry, and the active SiO2 in the silica fume reacted with the cement to form hydrated calcium silicate gel and similar products, which increase the compactness of FAFC, improve the stability of the foam, and reduce the porosity.

As the silica fume content increased from 4 to 8%, the slurry's active SiO2 content increased while the cement content, and with it the Ca(OH)2 content, was reduced. The volcanic ash activity of the silica fume was thus less fully exploited, the degree of the secondary hydration reaction decreased, the uniformity and compactness of the slurry decreased, and the foam broke and fused easily, increasing the FAFC porosity. Excess silica fume leads to the destruction and fusion of part of the foam to form connected pores, which increases the FAFC porosity [48]. This shows that an appropriate amount of silica fume in FAFC can improve its porosity.

As shown in Figure 17, the average pore size of FAFC tended to decrease and then increase with increasing silica fume dosing. When the silica fume dosing increased from 0 to 4%, the micro-aggregate filling effect and volcanic ash effect of the silica fume improved the uniformity and compactness of the FAFC slurry and enhanced the stability of the foam, so the pores stayed small and the average pore size decreased. When the amount of silica fume increased from 4 to 8%, the silica fume content was too high: the cement content was relatively reduced, the degree of the secondary hydration reaction and the amount of hydration products decreased, the compactness and uniformity of the cement slurry decreased, the fluidity of the slurry decreased, the stability of the foam decreased, foam fusion occurred, and the average pore size increased.

The effect of the silica fume dosage on the pore size distribution of FAFC is shown in Figure 18. With increasing silica fume dosage, the FAFC pore size distribution remained approximately normal, first migrating towards smaller pore sizes and then towards larger ones. The proportion of tiny pores with FAFC pore sizes of less than 900 µm first increased and then decreased with increasing silica fume dosage. The pore size distribution range was the same at silica fume doses of 2% and 4%; the difference was that, in the <900 µm interval, the percentage of FAFC pores at a 4% silica fume dose was more significant than at a 2% dose.

Figure 19 shows that, with increasing silica fume dosing, the pore shape factor of FAFC first decreased and then increased; the smaller the pore shape factor, the closer the pore is to a sphere. When the silica fume dosing was 4%, the pore shape factor was the lowest, at 1.250, and the pore shape was the most rounded. Increasing the silica fume dosage from 0 to 4% decreased the pore shape factor continuously: the micro-aggregate effect of the silica fume filled the FAFC and increased the uniformity of the cement paste, while the volcanic ash effect improved the paste's stability, and together these decreased the pore shape factor. As the silica fume level increased from 4 to 8%, the pore shape factor continuously increased because the fluidity of the cement paste decreased. The foam then cannot disperse evenly in the paste during foaming; mutual aggregation leads to foam rupture and fusion, the number of irregular pores increases, and the pore shape factor increases. Thus, mixing in the correct amount of silica fume can reduce the pore shape factor of FAFC.

The Variation Law of PP Fiber Admixture on the Pore Structure of Foam Concrete

Figure 20 shows the pore structure before and after the cross-sectional treatment when the fiber dosage is 0.1%. According to Figure 21, the FAFC porosity first decreased and then increased with the dose of PP fibers, and the FAFC average pore size likewise decreased and then increased; the porosity, average pore size, and shape factor were all lowest at a 0.1% dose of PP fibers.
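The paper does not state how the pore shape factor is computed from the cross-section images; a common choice consistent with its convention (exactly 1.0 for a perfect circle, larger for more irregular pores, and values such as 1.218-1.250 for near-round pores) is the circularity measure P²/(4πA). The following is a hedged sketch under that assumption, not the authors' confirmed definition:

```python
import math

def pore_shape_factor(perimeter: float, area: float) -> float:
    """Circularity-style shape factor P^2 / (4*pi*A): exactly 1.0 for
    a circular pore section, larger for more irregular shapes."""
    return perimeter ** 2 / (4 * math.pi * area)

# A circular pore of radius r has a factor of exactly 1.0.
r = 0.5
circle = pore_shape_factor(2 * math.pi * r, math.pi * r ** 2)

# A square pore of side s is slightly less round: 4/pi, about 1.273.
s = 1.0
square = pore_shape_factor(4 * s, s ** 2)

print(round(circle, 3), round(square, 3))  # 1.0 1.273
```

With this definition, the reported minimum of 1.218 at a 0.1% PP fiber dosage falls between a circle and a square in roundness, which matches the "closest to spherical" description.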
With increasing PP fiber dosage, the FAFC pore size distribution remained approximately normal, as shown in Figures 22 and 23, and shifted towards larger pore sizes. With the increase in the PP dosage, the dominant pore size interval of FAFC first decreased and then increased; it was always in the range of 600~900 µm when the PP fiber dosage was 0, 0.2, or 0.4%. At a 0.1% PP fiber dosage, the percentage of tiny pores with a size of less than 900 µm was the largest and the pore size distribution was reasonable; therefore, a PP fiber dosage of 0.1% was selected.

Materials 2022, 15, x FOR PEER REVIEW 20 of 28

Figure 23. Effect of fiber doping on pore size distribution.

The variation trends in the pore shape factor for the five mixture ratios are shown in Figure 24. The FAFC pore shape factor tended to first decrease and then increase with increasing PP fiber dosage. The pore shape factor was smallest, at 1.218, with 0.1% PP fibers, where the pore shape was closest to spherical; at this dosage the pore shape factor was lower than that of FAFC without PP fibers.

The above behavior occurred because the PP fibers built a three-dimensional mesh structure in the FAFC [49], increasing the foam stability and decreasing the porosity. When the PP fiber dosage increased from 0.1 to 0.4%, the PP fibers agglomerated, the homogeneity of the cement paste decreased, the foam stability decreased, the foam fractured and fused, connected pores increased, and the porosity, average pore size, and shape factor all increased. This shows that mixing in an appropriate amount of PP fibers can reduce the porosity and average pore size of FAFC, but an excessive amount of PP fiber has adverse effects.
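The "dominant pore size interval" used throughout this section can be obtained by binning the measured pore diameters into fixed 300 µm intervals and taking the bin with the largest share of pores. A minimal sketch with hypothetical diameters (the paper's raw pore measurements are not given):

```python
from collections import Counter

def dominant_interval(pore_sizes_um, bin_width=300):
    """Bin pore diameters (in micrometers) into fixed-width intervals
    and return the interval holding the largest share of pores."""
    bins = Counter(int(d // bin_width) for d in pore_sizes_um)
    k, count = max(bins.items(), key=lambda kv: kv[1])
    return (k * bin_width, (k + 1) * bin_width), count / len(pore_sizes_um)

# Hypothetical diameters loosely echoing a mix whose dominant
# interval is 900~1200 um (e.g. the 5% hydrogen peroxide mix).
sizes = [650, 880, 950, 1000, 1100, 1150, 1190, 1300, 1450, 1600]
interval, share = dominant_interval(sizes)
print(interval, share)  # (900, 1200) 0.5
```

The same binning, applied per mix ratio, reproduces histograms of the kind plotted in Figures 13, 18, and 23.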
Analysis of the Freeze-Thaw Mass Loss Rate Law
The mass loss rate of the FAFC test blocks is calculated by the formula M_m = (M_0 − M_s)/M_0 × 100%, where M_m is the mass loss rate in the freeze-thaw cycle test, %; M_0 is the dry mass of the test block before the freeze-thaw cycle test, g; and M_s is the dry mass of the test block after the freeze-thaw cycle test, g. As seen in Figure 25, the mass loss rate of the FAFC test blocks without PP fibers increased with the number of freeze-thaw cycles. The mass loss reached 36.9% after 15 freeze-thaw cycles, and all the surface layers of the test blocks fell off, resulting in profound mass loss. The mass loss rate of the FAFC specimens with different amounts of PP fiber also gradually increased with the number of freeze-thaw cycles.

When the dosage of PP fibers was 0.4%, the mass loss rate of the FAFC test block was the smallest, indicating that PP fibers can effectively reduce the mass loss of the test block and improve its freezing resistance. During freezing, a large amount of liquid water absorbed into the pores of the foam concrete freezes into solid ice and generates frost pressure [50,51]. Because FAFC's strength is low, when the resulting tensile stress exceeds the tensile strength of the pore wall, the pore wall develops microcracks and the dense pore wall structure becomes loose. During melting, the solid ice melts back into liquid water, the pore wall shrinks, and water enters the interior of the test block through the cracks in the pore wall, increasing its water content. Therefore, the mass loss rate increases with the number of freeze-thaw cycles.

When the specimens contain no PP fibers, the corners of the FAFC specimens are the first to be damaged, and spalling occurs; with increasing numbers of freeze-thaw cycles, the spalling becomes increasingly severe and the mass loss rate rises. Comparing the test blocks with and without PP fibers: at a 0.1% PP fiber content, the porosity decreased, the compactness of the pore wall structure increased, the compressive strength of the pore wall increased, the moisture content inside the test block decreased, the frost pressure generated on freezing decreased, and the freezing resistance improved. At the same time, the uniformly dispersed PP fibers bonded with the cement paste and carried tensile stress for the FAFC test blocks. For test blocks whose pore walls were damaged by frost heave, broken pieces with large cracks could not peel off completely from the main body of the test blocks; they remained connected to it through the PP fibers [52]. This improvement from the PP fibers becomes more evident as the number of freeze-thaw cycles increases.
As the PP fiber dosage increased from 0.1 to 0.4%, the porosity increased, the pore wall became thinner, the hydration products decreased, and the pore wall compactness decreased as well.
The FAFC pore wall was more easily destroyed in the freeze-thaw cycle test because water entered rapidly through cracks in the pore wall; the water content of the test block increased, the freezing pressure increased, and the pore wall was more easily damaged. However, with an increasing amount of PP fiber, the tensile stress provided by the fibers gradually increased. When the pore wall was damaged in the freeze-thaw cycle test and the cement paste structure around the PP fibers became loose, broken pieces of the test blocks crumbled and fell off, exposing the PP fibers, but their tensile stress kept the central part of the test blocks connected. Hence, the mass loss rate decreased with the increasing amount of PP fiber.
With a certain amount of PP fiber, the mass loss rate increases slowly with the number of freeze-thaw cycles; for a given number of freeze-thaw cycles, the mass loss rate decreases slowly with the addition of PP fibers, and with 0.4% PP fiber, the mass loss rate is the lowest after 5, 10, 15, 20, and 25 freeze-thaw cycles.
Analysis of Freeze-Thaw Compressive Strength Loss Rate Law
The following formula calculates the loss rate of the compressive strength of FAFC test blocks: ∆fc = (fc0 − fcn)/fc0 × 100%, where ∆fc is the compressive strength loss rate of the freeze-thaw cycle test, %; fcn is the compressive strength of the test block after the freeze-thaw cycle test, MPa; and fc0 is the compressive strength of the test block before the freeze-thaw cycle test, MPa. Figure 26 shows that for FAFC without PP fibers, after 15 freeze-thaw cycles the test block was severely damaged, the compressive strength loss rate reached 74.1%, and the freeze-thaw cycles were terminated. With the increase in the number of freeze-thaw cycles, the compressive strength loss rate of FAFC gradually increased. The compressive strength loss rate gradually decreased with the rise in PP fiber dosing.
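The strength loss calculation can be sketched the same way; the function name and the input strengths below are illustrative, not measured values from this study.

```python
def strength_loss_rate(fc0, fcn):
    """Compressive strength loss rate (%) after n freeze-thaw cycles.

    fc0: compressive strength of the test block before the test, MPa
    fcn: compressive strength of the test block after n cycles, MPa
    """
    return (fc0 - fcn) / fc0 * 100.0

# Hypothetical strengths for illustration only
print(round(strength_loss_rate(0.625, 0.500), 1))  # prints 20.0
```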
Since the water in the FAFC pores is liquid, the water molecules draw together due to the hydrogen bonding force, which reduces the volume. As liquid water freezes into solid ice, the water molecules are affected by a molecular force, which reduces the hydrogen bonding force. Water enters the pores through the cracks created in the pore wall during the freeze-thaw cycle test, and the more water that enters the pores, the larger the volume expansion when the water freezes. During freeze-thaw cycles, pore wall cracks gradually develop and expand, the pore wall compactness gradually decreases, internal connected pores form, the compressive strength gradually decreases, and the compressive strength loss rate gradually increases.
As a result of the freeze-thaw cycle test on the FAFC without PP fibers, the internal pores of the test block increased, the pore wall loosened, the internal structure deteriorated, the surface cracks expanded into larger cracks, the broken pieces spalled, the compressive strength decreased rapidly, and the compressive strength loss rate increased gradually. The FAFC was mixed with the PP fibers, and the cement paste was well bonded together. The PP fibers' crack-blocking effect prevents and disperses FAFC cracks, increases the compactness of the test block, prevents water from entering the pores, and decreases the water content in the pores [53,54].
The freezing pressure generated during freezing is slight, which can improve the ability of FAFC to resist freeze-thaw damage. At the same time, the PP fibers inside the FAFC can share part of the internal stress and part of the freezing pressure generated in the specimen under temperature changes, inhibiting the generation and development of cracks in the pore wall and improving the resistance of the pore wall to damage. The PP fibers are distributed three-dimensionally and randomly in the FAFC, and their bonding with the cement paste prevents the surface from spalling and increases the ability of FAFC to withstand freeze-thaw cycles. The greater the admixture of PP fibers, the greater the test block's resistance to spalling.
PP fiber is a good substitute for polyester fiber; when the PP fiber dosage was 0.4%, the mass loss rate was the lowest after 5, 10, 15, 20, and 25 freeze-thaw cycles.
Relationship between Ultrasonic Wave Speed and the Number of Freeze-Thaw Cycles
Under the action of freeze-thaw cycles, the pore walls of the FAFC test blocks produce microcracks. These cracks absorb and dissipate ultrasonic energy, so the propagation speed of ultrasonic waves decreases as defects such as cracks inside the foam concrete test block increase.
As the number of freeze-thaw cycles increases, the cracks inside the test block increase, and the ultrasonic wave velocity decreases [55]. Ultrasonic wave velocity can therefore be used to analyze the ability of FAFC to resist freeze-thaw cycles; this paper uses the damage factor D to evaluate the degree of internal freeze-thaw damage of the FAFC test block. Since the dynamic elastic modulus is proportional to the square of the ultrasonic wave velocity, the damage factor D is calculated as D = 1 − En/E0 = 1 − (Vn/V0)², where En is the elastic modulus of the specimen after the nth freeze-thaw cycle, MPa; E0 is the initial elastic modulus of the specimen, MPa; Vn is the ultrasonic wave velocity after the nth freeze-thaw cycle, km/s; and V0 is the initial ultrasonic wave velocity of the specimen, km/s. The ultrasonic test results of the test blocks before the freeze-thaw cycles are shown in Table 5. With PP fiber doping, the ultrasonic wave velocity first increased and then decreased: the bonding between the cement slurry and the PP fibers gave a more uniform pore size distribution, reduced the porosity and average pore size, and increased the compactness of the FAFC pore walls, so the energy loss was small and the wave velocity large. With an excess of PP fibers in the test block, the foam ruptures, connected pores grow, the porosity and average pore size increase, the internal structure of the test block deteriorates, and the ultrasonic wave speed decreases.
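A minimal sketch of the damage factor, assuming the standard relation that the dynamic elastic modulus scales with the square of the ultrasonic pulse velocity (E ∝ ρV²), so that D = 1 − En/E0 = 1 − (Vn/V0)²; the function name and velocities below are illustrative.

```python
def damage_factor(v0, vn):
    """Freeze-thaw damage factor D from ultrasonic pulse velocities.

    Assumes dynamic elastic modulus E is proportional to rho * V**2,
    so D = 1 - En/E0 = 1 - (Vn/V0)**2.
    v0: initial ultrasonic wave velocity, km/s
    vn: velocity after the nth freeze-thaw cycle, km/s
    """
    return 1.0 - (vn / v0) ** 2

# Hypothetical velocities for illustration only
print(round(damage_factor(3.0, 1.5), 2))  # prints 0.75
```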
Therefore, the initial ultrasonic wave velocity was maximum at a 0.1% PP fiber dosage and fell with the increase in the PP fiber dosage; nonetheless, it remained higher than that of FAFC without PP fibers. The effect of PP fibers on the ultrasonic velocity of FAFC after freeze-thaw cycles is shown in Figure 27. In addition to the increase in cracks in the pore walls caused by the freeze-thaw cycles, the density decreases, and the ultrasonic energy loss increases as the wave passes through the defects. During the freeze-thaw cycles, the tensile stress of the PP fibers resists the damage to the pore wall caused by the freezing pressure. The improvement in anti-freezing performance is more evident with the increase in PP fiber dosage [56]. This is consistent with the effect of PP fibers on the mass loss rate and strength loss rate of FAFC, indicating that ultrasonic measurements can well reflect the effect of PP fibers on the internal pore structure of foam concrete after freeze-thaw cycles.
The effect of PP fibers on the damage factor of FAFC after freeze-thaw cycles is shown in Figure 28. The damage factor increased as the number of freeze-thaw cycles increased, and the damage factor decreased as the PP fiber dosing increased [57].
When the amount of PP fibers was 0.4%, the damage factor was the smallest. After mixing with PP fibers, the PP fibers and cement paste were closely bonded, which increased the ability of the pore wall to resist the freezing pressure and reduced the generation and development of cracks in the pore wall under the freezing pressure. The damage to the pore structure in FAFC was reduced, and the damage factor was diminished. This conclusion is consistent with the effect of PP fiber dosing on the ultrasonic wave velocity of FAFC after the freeze-thaw cycle test, indicating that the damage factor can well characterize the impact of PP fibers on the frost resistance of FAFC.
Conclusions
Through this experimental research on the compressive strength, thermal conductivity, and durability of FAFC, the following conclusions are drawn.
(1) The test results of different hydrogen peroxide dosages on the compressive strength, thermal conductivity, and pore structure parameters of FAFC showed that the compressive strength and thermal conductivity of FAFC decreased with the increase in hydrogen peroxide dosage, and the peak compressive strength was 0.670 MPa when the hydrogen peroxide dosage was 4%. The thermal conductivity was 0.0580 W/(m·K) when the hydrogen peroxide dosage was 5%. The porosity and average pore size of FAFC are positively correlated with the hydrogen peroxide dosage, and the pore size distribution migrates in the direction of large pores. Therefore, it is recommended that the hydrogen peroxide dose is 5%.
(2) In studying the effects of different silica fume dosages on the compressive strength, thermal conductivity, and pore structure parameters of FAFC at 5% hydrogen peroxide, the dry density of FAFC showed a decreasing trend with the increase in silica fume dosage, while the remaining indexes all peaked. The peak compressive strength was 0.625 MPa, and the peak thermal conductivity was 0.0596 W/(m·K) when the silica fume dosing was 4%. The porosity, average pore size, and pore shape factor of FAFC showed a decreasing trend with increased silica fume dosing. The pore size distribution migrates first toward small pores and then toward large pores. Considering all the properties, the silica fume dose is recommended to be 4%.
(3) We also studied the effects of PP fiber dosage on the compressive strength, thermal conductivity, and pore structure parameters of FAFC with 5% hydrogen peroxide and 4% silica fume. The compressive strength of FAFC doped with PP fibers was better than that of undoped FAFC. The peak compressive strength was 0.679 MPa, and the peak thermal conductivity was 0.0610 W/(m·K) when the PP fiber dosage was 0.1%. When the dosage of PP fibers was 0.1%, the porosity, average pore size, and pore shape factor of FAFC were the lowest, at 83.24%, 529.05 μm, and 1.218, respectively, and the pore size distribution first migrated toward small pores and then toward large pores. Therefore, a PP fiber dosage of 0.1% is recommended.
(4) With the growth of the number of freeze-thaw cycles, the damage index (D) of FAFC increased. Nonetheless, with the increase in the PP fiber admixture, its damage index (D) gradually decreased, and the mass loss rate, compressive strength loss rate,
Microbiome as a Target for Cancer Therapy
Recently, the microbiome has been gaining traction as a major player regulating various functions that correlate with many pathological conditions, including cancer. The central gut microbiota population has the capability to regulate normal inflammatory, immune, and metabolic functions, and disturbance in the balance of the normal microbiota population can subsequently induce pathological responses that closely relate with the mechanistic development and progression of cancer in various forms and sites. As a disease with major socioeconomic burden partly due to its current therapeutic options, modulating the imbalanced gut microbiota represents a novel option not only as an adjuvant therapy to relieve cancer treatment–related symptoms but also to influence cancer progression itself. In this review, we will discuss how the microbiome, specifically the gut microbiota, could affect cancer pathogenesis and what the effect of gut microbiota–targeting treatment options have on the many aspects of cancer pathologies based on the knowledge of recent years.
On the other hand, microbiota refers to the specific microorganisms that are located in a specific environment. 7,8 As such, all microorganisms could be called microbiota, including bacteria, viruses, fungi, and parasites. 7,8 Microbiota or microbes can be found in many parts of the human body, primarily the external and internal surfaces of the body, including the gastrointestinal tract, skin, saliva, oral mucosa, vagina, and conjunctiva. 9 In total, the human microbiota is estimated at up to 100 trillion symbiotic microbial cells. 10 Host-microbe interactions occur primarily along mucosal surfaces, and one of the largest interfaces is the human intestinal mucosa. 11 Because of that, it makes sense that the vast majority of commensal bacteria reside in the colon. 12 Of the 1200 different bacterial species that have been identified, it is estimated that an individual carries at least 160 different species in the gut. 13,14 The gut microbial community is composed of 5 phyla: Bacteroidetes, Firmicutes, Actinobacteria, Proteobacteria, and Verrucomicrobia. 15 In normal conditions, Bacteroidetes and Firmicutes are the more dominant microbiota in the human gut. 16 However, an imbalance in the gut microbial community, termed dysbiosis, could occur in the presence of a disease. 17 It has been established recently that there is a close relationship between the host (human) and microbiota, and this forgotten organ plays novel roles in human health. 11 Among the variable microbiomes in specific parts of the body, the gut microbiota has been known to play important roles in modulating immune responses of not only the local gastrointestinal tract but the whole body itself. 14 Indeed, several groundbreaking findings have pointed out the critical role that the gut microbiome has in many pathological conditions. It has been implied in many reports that the gut microbiota mechanistically plays its important role in several ways.
First, the microbiome harbored in the gut can help in the biodegradation of complex sugars and glycans, 13 for example, the degradation of pectin and sorbitol. 18 The long linear chains of α-1,4-glycoside-linked d-galacturonic acid (pectin) are also fermented by the microflora. 19 The major end products are the short-chain fatty acids (SCFAs) acetate, propionate, and butyrate; the gases H2 and CO2; ammonia; amines; and phenols. 20 In fact, the SCFAs have several different functions, including as nutrients for the colonic epithelium and as modulators of colonic and intracellular pH, cell volume, and other functions associated with ion transport. In addition, the SCFAs are also regulators of proliferation, differentiation, and gene expression. 21 The increase of SCFAs in the human body results in decreased pH, which indirectly influences the composition of the colonic microflora (the more acidic the pH, the more the potentially pathogenic clostridia are reduced), decreases the solubility of bile acids, increases the absorption of minerals (indirectly), and reduces ammonia absorption by the protonic dissociation of ammonia and other amines (Figure 1). 22,23 The homeostatic relationship between the gut microbiota and the intestinal mucosal immune system is important in maintaining normal conditions of the body. The disruption of this interaction might link to various diseases. 24,25 This begins with the transmission of gut microbiota signals across the intestinal epithelium. 16 Microbe-associated molecular patterns such as lipopolysaccharide, peptidoglycan, flagellin, or other structural components are recognized by pattern-recognition receptors, such as Toll-like receptors (TLRs), NOD-like receptors, or RIG-1-like receptors, on epithelial and immune cells. 26 Remarkably, lipopolysaccharides derived from different gut microbial species induce TLR4 signaling differently 27 and might also have distinct effects early in life.
28 Only a fraction of microbial signaling can be attributed to general recognition of microbial derivatives through pattern-recognition receptors, 29 and there are probably more specific microbial signals that regulate host transcription.
Moreover, several studies have suggested that the gut microbiota has the ability to induce the production of important cytokines that regulate intestinal mucosal homeostasis and provide resistance to the fungus Candida albicans. In addition, Lactobacilli have been known to catabolize the amino acid tryptophan into the metabolite indole-3-aldehyde, a ligand of the aryl hydrocarbon receptor (AHR). AHR is expressed by group 3 innate lymphoid cells (ILC3s), and its activation induces the expression of the aforementioned cytokine interleukin (IL)-22. In turn, IL-22 mediates a pivotal innate antifungal resistance so that the host can survive "fungus-shift-induced diseases," and it protects the intestinal mucosa from inflammation. [30][31][32] Taken together, all of the aforementioned mechanistic insights provide proof that maintaining a proper gut microbiota population could go a long way toward maintaining proper homeostatic balance of various functions of the body.
The Link Between the Microbiome and Cancer
During the past few years, numerous researchers have analyzed the correlation between cancer and microbiota, due to the connection between cancer and immune responses, particularly the central gut microbiota population. Several groups have tried to link a change in the gut microbiota population with cancer occurrence and progression. Dysbiosis, or disturbance of the gut microbiota, can increase a person's risk of developing inflammatory, autoimmune, and malignant diseases. 33,34 Although one would logically think that gut microbe dysbiosis is associated with gastrointestinal tract malignancies, which has been shown, much evidence also suggests that disturbances in the gut microbiota population could be related to cancers of other organs, such as breast cancer, lung cancer, and adult T-cell leukemia. [35][36][37] As mentioned previously, because a specific set of bacteria normally inhabits the gut mucosal layers, any change that shifts the bacterial population toward "unwanted" bacteria could induce pathogenic reactions, and these reactions could induce different forms of cancer at various sites. Mechanistically, there are several proposed pathways to explain the link between cancer occurrence and gut microbiota dysbiosis, especially the manner in which specific bacteria could induce and modulate cancer occurrence and progression. In general, the mechanisms by which unwanted microbiota could modulate cancer pathophysiology can be divided into 3 classes of action:
• Class A is defined as involving immunologic tissues, in which the bacteria stimulate chronic inflammation. Inflammatory mediators produced in this process cause or facilitate cell proliferation, mutagenesis, oncogene activation, and angiogenesis. 38 With regard to pathogenic bacteria, several strains have been linked to cancer.
The most well-known bacterium associated with the development of cancer in humans is Helicobacter pylori. This class I carcinogen, which is the main cause of chronic gastritis and peptic ulcer, can also induce the further development of gastric adenocarcinoma, gastric mucosa-associated lymphoid tissue (MALT) lymphoma, and intestinal metaplasia. 40 Additionally, this particular bacterium can also be found in the oral cavity.
In other examples, various studies have also identified specific species that remarkably correlate with oral squamous cell carcinoma (OSCC), such as Streptococcus sp, Peptostreptococcus sp, Prevotella sp, Fusobacterium sp, Porphyromonas gingivalis, and Capnocytophaga gingivalis. [40][41][42][43] Remarkably, the discovery of specific bacterial species in OSCC samples from humans has been reported. One study performed immunohistochemical staining to investigate the presence of P gingivalis. The result showed that P gingivalis was significantly positive only in the OSCC samples in comparison to controls. 44 Furthermore, other studies have found that 3 specific species were increased in the saliva of 80% of individuals with OSCC: Capnocytophaga gingivalis, Prevotella melaninogenica, and Streptococcus mitis. With 80% sensitivity and 82% specificity, this might become a diagnostic indicator of OSCC and true proof that a specific set of bacteria is needed to induce OSCC. 42 In addition to microbe-associated OSCC, microbiota have also been linked with esophageal diseases such as Barrett's esophagus (BE), esophageal squamous cell cancer, and esophageal adenocarcinoma. Plenty of research has reported the correlation of microbes with cancerous esophageal diseases. For instance, researchers from the Esophageal and Lung Institute, Canonsburg, PA, found that Escherichia coli was detected in the BE and esophageal adenocarcinoma patient groups but was absent in the tumor-adjacent normal epithelium, dysplasia, and gastroesophageal reflux disease groups, implicating the need for E coli presence for BE development to occur. 45 Moving to another organ, it has already been established that breast cancer pathology is associated with estrogen, and interestingly, systemic estrogens are also modulated by the gut microbiota.
35 The connection of breast cancer with the gut microbiota is bridged by a set of enteric genes whose products are capable of metabolizing estrogen, termed the estrobolome. The estrobolome enteric bacteria possess β-glucuronidases and β-glucosidases, hydrolytic enzymes involved in the deconjugation of estrogens. An estrobolome enriched in enzymes favoring deconjugation would promote the reabsorption of free estrogens, and thus increase the relative total estrogen burden. 35 Because estrogen is widely recognized as a causal factor in the etiology of hormone receptor-positive breast cancer and plays an important role in the initiation and promotion of neoplastic growth, an increase in total estrogen burden would be disadvantageous. 46 Based on an integrated microbial genomes database, there are more than 50 bacteria colonizing the human intestinal tract that encode β-glucuronidases and/or β-glucosidases, including Alistipes, Bacteroides, Bifidobacterium, Citrobacter, Clostridium, Dermabacter, Escherichia, Faecalibacterium, Lactobacillus, Marvinbryantia, Propionibacterium, Roseburia, Tannerella, and many more. 35 Any overabundance in this set of bacteria could induce further imbalance in the estrogen burden and subsequently promote breast cancer.
Moreover, not only the rise of pathogenic bacteria but also the decrease of normal gut inhabitants or probiotics could induce an imbalance in the aforementioned normal inflammatory and immune responses of the body, both of which are strongly related to carcinogenesis. As an example, the correlation between the microbiota and lung cancer has recently been reported as being related to such an imbalance. A study by Zhuang et al 36 found that although there was no difference in gut microbial alpha diversity, microbial composition nevertheless showed significant differences compared with healthy controls. These differences were mainly caused by Actinobacteria (phylum level) and Bifidobacterium and Enterococcus (genus level), which might have significant potential as biomarkers for lung carcinogenesis. 36 Actinobacteria was found to be the strongest marker in healthy controls, being elevated in healthy individuals. Bifidobacteriales showed a greater abundance in healthy controls, whereas the bacteria elevated in the lung cancer groups were Enterococcaceae. 36 The decrease of the phylum Actinobacteria in the human gut may also be involved in the pathogenesis of lung cancer. This notion is supported by a finding by Zhou et al, 47 who found that Actinobacteria produce cancer-killing substances in the human intestine, and their bioactive secondary metabolites have potent cancer-suppressing activity. As such, not only is it important to minimize the growth of pathogenic bacteria in the gut microbiota, but it is also essential that the normal bacterial population be maintained to achieve optimal microbiota function.
As mentioned, differences in gut microbiota composition can also affect the immune response to various pathogens, including those related to cancer pathogenesis. One recently studied aspect is the immune checkpoints, key regulators of the immune responses that are in part responsible for carcinogenesis. In particular, 2 molecules have been well studied to this point, CTLA-4 and PD-1. 48 CTLA-4, a receptor constitutively expressed on regulatory T-cells, is known to dampen T-cell activation and subsequent responses through its ability to act as a CD28 antagonist. 49 One of the main consequences is a decrease of the key cytokine IL-2, which is pivotal in modulating the differentiation of CD4 + regulatory T-cells into T-helper 1 or T-helper 2 cells while inhibiting T-helper 17 differentiation, thereby serving as a so-called "regulator" of Th1- and Th2-regulated immune responses. 50,51 On the other hand, PD-1 is a transmembrane receptor with known ligands PD-L1 and PD-L2 that acts as a regulator in the event of infection. 48,49 PD-1/PD-L1 interactions inhibit the activation and differentiation of effector T-cells and their subsequent functions, rendering them exhausted. The impact of gut microbiota populations on this axis has recently been studied by several groups, especially under blockade of CTLA-4 or PD-1 with therapeutic agents (also known as immune checkpoint inhibitors [ICIs]). 48,49 Several microbiota are known to modulate the efficacy of ICI therapy in cancer through their various functions, as elaborated below.
Modulating Gut Microbiota as Treatment Strategy of Cancer
It has been shown how disturbances in the gut microbiota balance allow unwanted bacteria to prosper and exert their pathological and carcinogenic effects; thus, maintaining an intact and normal gut microbiota is essential to prevent such phenomena. 52 As such, the capability to modulate or reverse an unbalanced gut microbiota population becomes important for achieving, or reacquiring, said normalcy. There are several ways to modulate the gut microbiota population clinically. 53 The most well-known and established method of altering the gut microbiota, the consumption of probiotics and other specific dietary products such as yogurt or fiber-rich food, has previously been explored in several conditions, such as cardiovascular diseases, chronic kidney disease, brain injury, and obesity, among others, with varying degrees of success. [54][55][56][57] Another option is fecal microbiota transplantation (FMT), in which liquefied and filtered stool from a healthy donor is transplanted to recipients during procedures such as colonoscopy or enema administration. 58 FMT is currently considered a treatment option for patients with recurrent Clostridium difficile infection. 58 Together with probiotics administration, FMT is also considered an effective option to alter the gut microbiota and, subsequently, other local microbiota populations (Figure 2).
It is interesting to note that most, if not all, of the treatment options explored in the microbiota field mainly modulate the gut microbiota rather than the local microbiota populations of various target organs/cells. This is mostly due to the function of the gut microbiota as a central regulator of local populations through its ability to centrally modulate the immune response and subsequent cellular gene expression patterns, as previously explained. 34 With an intact gut microbiota, many studies have shown that proper microbiota-driven innate immunity activation, through the regulation of CD4 + and CD8 + T-cell functions, can act as both a sensor of and an inducer of the reactions needed to defend the host, both locally and systemically. During dysbiosis, by contrast, this balance is disturbed, giving rise to self-reactive T-cells that can potentially induce prolonged local and systemic pro-inflammatory and carcinogenic effects. 33,52 Another example of how the gut microbiota population affects cancer treatment involves the aforementioned immune checkpoints and ICIs. Several studies have shown how specific microbiota activities can positively affect ICI efficacy in immunocompromised patients, including those suffering from cancer, for both the CTLA-4 inhibitor and PD-1 inhibitor groups. First, a 2015 study from Vétizou et al 59 revealed that in the presence of Bacteroides thetaiotaomicron and Bacteroides fragilis, the CTLA-4-specific 9D9 antibody had an improved capability of binding and blocking CTLA-4 activity in antibiotic-treated mice with tumors. This effect was attributed to decreased signs of subclinical colitis, increased Th1 immune response activity, and promotion of the maturation of intratumor dendritic cells.
The authors also applied FMT from donor patients to mice and found that mice transplanted with feces from patients with a Bacteroides-rich microbiota responded better to CTLA-4 inhibitor treatment. In a more clinical setting, Gopalakrishnan et al showed that patients with a favorable PD-1 inhibitor response have a distinct microbiota population compared with those with unfavorable responses. 60 Specifically, responders to PD-1 inhibitor therapy showed enrichment in the Faecalibacterium genus, the Ruminococcaceae family, and the Clostridiales order. Enrichment of these microbiota was shown to increase CD4 + and CD8 + effector cells with preserved cytokine responses to anti-PD-1 therapy. Conversely, patients with unfavorable anti-PD-1 responses had an abundant Bacteroidales population with a subsequent increase in regulatory T-cells and blunted cytokine responses. In short, the capability of the gut microbiota to modulate not only local but also systemic immune responses, acting as a kind of central regulator, is what drives current treatment options to focus on this particular population of bacteria. As discussed below, the ways to alter the gut microbiota include probiotics, FMT, and other microbiota-altering agents.
Probiotics
Utilizing the aforementioned gut microbiota-altering agents or therapies in cancer has been, and is being, explored as an adjuvant therapy to directly affect the progression and growth of cancer cells. Among the modalities available to alter the gut microbiota, probiotics have been the most extensively studied, owing to their availability, low cost, and overall safe nature, although other microbiota-altering dietary products such as yogurt or fibers are also available. 34,53,61 One trial conducted in Monza, Italy, analyzed the perioperative administration of a probiotic mixture of Bifidobacterium longum and Lactobacillus johnsonii in colorectal cancer patients undergoing surgery and found, in conjunction with a shift of the colonic mucosal microbiota in the probiotic-treated group, a higher expression of CD3, CD4, CD8, and naïve and memory lymphocyte subsets compared with the placebo-treated group. 62 Moreover, the proliferative capability of ex vivo colonic mucosal cells was dramatically reduced in the probiotic-treated group. Similarly, other groups have shown that treating colorectal cancer patients with a postoperative probiotic mix markedly reduces circulating pro-inflammatory cytokines such as IL-6, tumor necrosis factor-α (TNF-α), IL-17A, IL-17C, and IL-22. 63 Collectively, these results suggest that altering the microbiota can affect cancer progression by shifting inflammatory and immune responses toward an anticarcinogenic phenotype clinically.
In a more basic and translational setting, a study by Li et al 64 showed that administering a probiotic mix in vivo, in this case a novel mix called Prohep consisting of Lactobacillus rhamnosus GG, viable E coli Nissle 1917, and heat-inactivated VSL#3, could slow the growth of hepatocellular carcinoma after subcutaneous tumor inoculation in mice to an extent comparable to cisplatin treatment. This effect is caused by the ability of Prohep to shift T-helper 17 cell distribution and polarization toward an anti-migratory and subsequently anti-inflammatory state, which is important because Th17 cells secrete the pro-inflammatory, pro-angiogenic cytokine IL-17 that contributes to hepatocellular carcinoma and various other cancers. 65 Prohep induces this positive effect through its ability to alter gut microbiota composition, increasing the phylum Bacteroidetes, which is important in producing acetate and propionate from fiber. 64 Moreover, several major anti-inflammatory bacterial genera, including Butyricimonas and Prevotella, were significantly increased in the gut population after Prohep treatment. This study underscores how probiotic treatment can affect the immune, metabolic, and inflammatory responses of the whole body with a concurrent anticarcinogenic effect in vivo.
Probiotics have also been shown to positively affect various pathological conditions related to cancer and/or conventional cancer treatment modalities. One such condition is gastrointestinal disturbance, including nausea, vomiting, diarrhea, and/or constipation, in which changing the microbiota population has demonstrated success. One study from Canada examined the effect of probiotics in pelvic cancer patients undergoing radiation therapy, in whom radiation-induced diarrhea is a common occurrence. 66,67 Remarkably, probiotic administration reduced the incidence of diarrhea with no apparent side effects. Similar studies found that probiotics mitigated side effects in lung cancer patients undergoing chemotherapy, in addition to reducing systemic inflammatory responses as measured by neutrophil and lymphocyte counts. 68 Taken together, probiotics are a promising pathway toward maintaining a healthy gut microbiota with concurrent anticarcinogenic effects. As various trials are currently analyzing the effect of probiotics in cancer, it will be interesting to see future developments of probiotic use in this setting.
Fecal Microbiota Transplantation
FMT is not as commonly explored in the field of cancer therapy as probiotics or other dietary products. 69 This is partly due to the perceived risk of infection from translocating bacteria from a different individual, especially in immunocompromised individuals. Because colonoscopy or endoscopy is needed to infuse the donor feces, studies have also highlighted the possible risk of FMT procedure-related adverse effects. 70,71 In addition, the possibility of incurring other noninfectious diseases by modulating the microbiota has been raised, although studies reporting this phenomenon have been rare. 70 Indeed, due to these issues, several studies have excluded immunocompromised patients from their FMT trials, and recommendations from several health organizations have followed suit with a cautionary approach to FMT treatment in cancer patients. 71 Even so, FMT has recently been utilized to positive effect in a basic translational setting by Riquelme et al 72 in their study on pancreatic cancer. Using FMT from pancreatic cancer patients and donor controls, they confirmed the ability of the gut microbiota to modulate the local tumor microbiota environment and subsequently alter the responses needed for tumor growth, as evidenced by changes in the gene expression patterns of various inflammatory pathways. The group receiving FMT from long-term-survivor pancreatic cancer patients had considerably fewer procarcinogenic features than the group receiving FMT from short-term-survivor patients. 72 Similarly, another study from Li et al 64 found that FMT of fecal samples from colorectal cancer patients enhanced the progression of intestinal adenoma in vivo. Additionally, one other study found that the use of FMT as an adjuvant to chemotherapy with 5-fluorouracil (5-FU) could prevent 5-FU-induced gut dysbiosis. 73 This is another indication that an intact, healthy microbiota population can help halt cancer progression.
Mechanistic insights into the FMT-cancer link show that, among the various inflammatory and immune pathways modulated after FMT, restoring the balance of TLR signaling pathways represents one major advantage of FMT application in cancer. 71 It is well known that TLR4 signaling can cause aberrant immune responses skewed toward pro-inflammatory pathways, whereas other TLRs, such as TLR2, have been linked with anti-inflammatory pathway activation. 27,74 As such, FMT represents one alternative for restoring the so-called anti-inflammatory pathway and preventing further progression of cancer.
Unfortunately, owing to the aforementioned perceived risk of infection, limited clinical evidence currently exists for FMT application beyond its use in recurrent Clostridium difficile infection, highlighted by the fact that most studies conducted to date have been limited to human-to-mouse transplantation. Controversial as it is, several groups are trying to show that the benefits of human-to-human FMT in cancer patients outweigh its risks, and several clinical trials of FMT in cancer patients are ongoing. For example, one group from Israel has reported preliminary findings from 3 anti-PD-1-refractory patients undergoing FMT from anti-PD-1-responsive donors. 75 The investigators' preliminary reports suggested the overall safety of this combined approach, with increased tumor CD68 + and CD8 + T-cell infiltration. 75 Nevertheless, while promising, it is understandable that a cautious approach is being taken toward this gut microbiota-altering therapeutic alternative.
Other Treatments Targeting Microbiota
In addition to probiotics, diet changes, and FMT, many drugs are known to change the microbiota population. One logical example is antibiotics: several classes of antibiotics are known to have the effect, or side effect, of shifting the gut microbiota population. One such class is the macrolides; one study showed a shift toward certain phyla (including those containing E coli and Campylobacter) in the microbiota of infants prescribed azithromycin. 76 This microbiota-altering phenomenon is not exclusive to antibiotics. For example, statins are also known to alter the microbiota, which is hypothesized to be related to statin-induced changes in lipid and glucose metabolism. 77 Unfortunately, although most of the aforementioned drugs have the potential to alter the microbiota population, most of the reported alterations have a negative effect on microbiota balance: rather than shifting the population toward one associated with positive health outcomes, these therapies can instead induce dysbiosis and subsequent pathological consequences. The aforementioned statin treatments, in this case atorvastatin and rosuvastatin, induce a shift in the microbiota toward Bacteroides and Mucispirillum, both of which induce pro-inflammatory cytokine expression and release, such as TGF-β and IL-1β. 77 The subsequent changes in metabolite availability, namely the SCFAs, due to the statin-induced shift in microbiota composition are thought to induce pro-inflammatory responses in the host immune system. 77 In addition, reported effects of antibiotics have highlighted the possibility of dysbiosis, or rather a shift toward so-called "unwanted" bacterial populations, which would not be beneficial. 76 Even so, the promise of microbiota-altering drugs remains high, especially considering the increased efficacy these drugs can potentially have compared with probiotics or dietary changes alone. As such, many researchers are taking interest in the future utilization of antibiotics' effects on the gut microbiota (Table 1).
Altering Gut Microbiota as Cancer Prevention
Altering the gut microbiota population is not only a treatment option but has also recently been shown to be beneficial in preventing several kinds of cancer. A study by Yang et al 83 pooling cohorts from 10 countries examined how dietary patterns of yogurt and fiber consumption, 2 gut microbiota-altering agents, could have a long-term effect on lung cancer occurrence. Relatedly, various studies have found that with increased consumption of microbiota-altering dietary products, such as yogurt and fiber, an intact, healthy gut microbiota composed mainly of the normal bacterial population is maintained. 61 As mentioned, these bacteria are responsible for maintaining a healthy immune response and producing various metabolic products, in addition to suppressing aberrant inflammatory responses.
The positive effect of gut microbiota-altering treatments has been studied not only in lung cancer but also in other forms of cancer, such as colorectal and oral cancers, among others. 84,85 Moreover, in an interesting development, a group from Japan proposed utilizing recombinant Bifidobacterium displaying Wilms' Tumor 1 (WT1) protein, a protein associated with pediatric renal cancer cells, as a vaccine via its gut microbiota function and population-altering capability. 86 These examples, combined with other emerging evidence, highlight the potential of maintaining a normal gut microbiota composition in preventing carcinogenesis at various sites.
Conclusion
Modulating gut microbiota to relieve the burden of cancer is a novel yet important option as a future therapeutic possibility, especially as an additional therapeutic option to increase the efficacy and safety of other cancer treatment modalities through its central immune modulation mechanism. Additionally, treating dysbiosis of the gut microbiota could also be a novel option for cancer prevention.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

Learning an Ensemble of Deep Fingerprint Representations
Deep neural networks (DNNs) have shown incredible promise in learning fixed-length representations from fingerprints. Since the representation learning is often focused on capturing specific prior knowledge (e.g., minutiae), there is no universal representation that comprehensively encapsulates all the discriminatory information available in a fingerprint. While learning an ensemble of representations can mitigate this problem, two critical challenges need to be addressed: (i) How to extract multiple diverse representations from the same fingerprint image? and (ii) How to optimally exploit these representations during the matching process? In this work, we train multiple instances of DeepPrint (a state-of-the-art DNN-based fingerprint encoder) on different transformations of the input image to generate an ensemble of fingerprint embeddings. We also propose a feature fusion technique that distills these multiple representations into a single embedding, which faithfully captures the diversity present in the ensemble without increasing the computational complexity. The proposed approach has been comprehensively evaluated on five databases containing rolled, plain, and latent fingerprints (NIST SD4, NIST SD14, NIST SD27, NIST SD302, and FVC2004 DB2A) and statistically significant improvements in accuracy have been consistently demonstrated across a range of verification as well as closed- and open-set identification settings. The proposed approach serves as a wrapper capable of improving the accuracy of any DNN-based recognition system.
Introduction
The choice of data representation plays a critical role in determining the success of a machine learning model because different representations can highlight and/or suppress different factors of variation underlying the data [1]. In fingerprint recognition, domain-specific prior knowledge has played the dominant role in determining the representation scheme, leading to mostly hand-designed features. Since the late 19th century [2], it has been well known that minutiae are important for identifying fingerprints accurately. Hence, minutiae-based fingerprint representations have become the de facto standard [3], as shown in Figure 1. However, in challenging scenarios such as matching latent fingerprints (see middle column of row 4 in Figure 1), using only a minutiae-based representation is clearly inadequate.
With the advent of deep learning and its tremendous success in various applications including consumer sentiment analysis [5,6], biometric recognition [7], natural language processing (NLP) [8], healthcare, and finance [9], the concept of data-driven representation learning has come to the fore. It is possible to learn multiple representations from the same data by applying different priors, which are usually determined by the architecture (and depth) of the neural network, the ground-truth labels that guide/supervise the learning, and the objective/loss function. It is well known that no single prior can perfectly disentangle all the underlying variations in data and lead to a universally good representation. Consequently, the idea of ensemble learning [10,11], which refers to learning multiple models/representations (as opposed to using a single representation), has been used to improve the diversity of the feature space. This approach has been successfully employed in many computer vision tasks to boost performance compared to a single model [12].
In the field of biometrics, many studies have been conducted over a wide range of modalities (face, fingerprint, gait, lip, etc.) using different implementations of ensemble learning [13,14,15,16,17,18,19]. While some of these methods may not fit into the traditional definition of ensemble learning, they can be considered to be a part of this family since they involve some form of fusion of outputs obtained from different entities. There are two key challenges involved in implementing any ensemble learning approach: (i) generation of multiple representations from the same input that are sufficiently discriminative as well as diverse, and (ii) efficient fusion of these representations during the inference process. Typically, the first problem is solved by learning multiple representations based on different augmentations of the training data, different network architectures, or different data partitions [20]. The latter issue is addressed through a range of early (feature-level) and late (score-or decision-level) fusion techniques [21].
In this work, our objective is to improve the performance of a state-of-the-art (SOTA) deep neural network based fingerprint recognition model called DeepPrint (DP) [4] through ensemble learning. The core advantage of the DP model is its ability to learn a compact fingerprint representation using a combination of domain knowledge (minutiae features) and data-driven (texture patterns) supervision techniques. Figure 2 presents an overview of the DP architecture, and the third column of Figure 1 shows heatmap visualizations of the DP representation. Despite its strong ability to extract discriminative information from fingerprint images, the performance of DP models still falls short of commercial-off-the-shelf (COTS) fingerprint recognition systems (which use both minutiae and other proprietary features) in challenging scenarios. We posit that this is primarily due to the reliance on a single representation, which fails to capture all useful information. To overcome this limitation, we make the following contributions in this work: • Generating an ensemble of five fingerprint representations from a single image using the DeepPrint architecture. This is achieved by augmenting the original training data with four types of image manipulations (two generic and two domain-specific transformations).
• Training a single DeepPrint model on unperturbed images, which is capable of learning the diversity present in the ensemble of fingerprint representations. This distilled model can be considered as a feature fusion strategy that learns a single embedding through external supervision at the feature level from the component representations in the ensemble.
• Comprehensive experiments in the verification and identification (both closed and open-set) modes to demonstrate that the proposed feature fusion approach can consistently improve the accuracy without compromising on retrieval or feature extraction times.
Related Work
Ensemble learning has proven to be an effective tool for improving the generalization ability of deep learning models [28,12]. Moreover, many classical machine learning algorithms, such as AdaBoost [29], Bagging [30], and Random Forest [31,32], also use ensemble learning at their core. Broadly, ensemble learning can be divided into two stages.
Generation of Ensemble
There are several methods of generating an ensemble of features/models that have varying degrees of complexity: • Altering the training data: This method manipulates the training dataset in various ways and trains one instance of the network architecture for each altered training set independently. Manipulation methods may range from common data augmentation techniques like contrast adjustment and rotation to more complex manipulations like binarization and gradient images [12,33]. This is the approach used in this study for generating the ensemble of fingerprint embeddings.
• Altering the architecture of the network: In this method, the network architecture is changed before training each model in the ensemble. These changes may involve the hyperparameter configuration of the training process or the overall design of the network itself. This method is considered more complex, since it is difficult to ensure that the changes made to the architecture do not dramatically degrade the individual accuracy of each model in the ensemble. Another variation of this concept is using entirely different networks as part of the ensemble [34,35,10].
• Data partitioning: This method splits the training dataset into smaller subsets and trains a given network architecture on each subset, generating an ensemble of models that are trained on subsets of the original training set. The partitioning of the training dataset can be done at the sample level or at the feature level [36,30,37]. Sample-level partitioning reduces the size of the training dataset for each model, and the resulting models have limited diversity. In contrast, feature-level partitioning can generate diverse representations, but it requires careful feature selection techniques to ensure that the resulting models have good accuracy.
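Of these three strategies, the first (altering the training data) is the one used in this study. A minimal sketch of how the ensemble of training sets could be built is given below; the helper name and the transform dictionary are illustrative rather than taken from any released code, and one encoder instance would then be trained independently on each manipulated copy.

```python
import numpy as np

def build_ensemble_datasets(images, transforms):
    """Produce one manipulated copy of the training set per transform.

    `images` is a list of 2-D grayscale arrays; `transforms` maps a
    dataset name to a function applied to every image.
    """
    return {name: [t(im) for im in images] for name, t in transforms.items()}

# Two generic geometric manipulations; domain-specific ones (e.g., ridge
# binarization or minutiae soft-gating) would slot in the same way.
transforms = {
    "flip_y": lambda im: im[:, ::-1],  # mirror about the vertical axis
    "flip_x": lambda im: im[::-1, :],  # mirror about the horizontal axis
}
```

Training one model instance per entry in the returned dictionary yields the ensemble.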
Information Fusion for Ensemble Learning
The second step in ensemble learning is the fusion of outputs generated by individual models in the ensemble [38]. Some of the common fusion techniques include: • Feature fusion: This method aims to combine the multiple feature representations into a single embedding. The most simple approach is feature concatenation, where the embeddings generated by the ensemble for a given sample are concatenated to yield a single, large embedding. An alternative approach that maintains the original feature dimensionality is knowledge distillation [39], where a 'student network' is learned through external supervision from the 'teacher networks' that constitute the ensemble.
• Score fusion: This technique combines the prediction confidences of the individual models into a single value, which is considered as the output of the ensemble. This is typically achieved by computing the sum, mean, median, weighted sum, or weighted mean of the individual outputs [28].
• Decision fusion: In this method, the final prediction of the ensemble is determined by combining the predictions of the individual models. The simplest case is a majority voting scheme, where the decision favored by a majority of the models in an ensemble is considered as the final output [28]. In the case of identification, it is also possible to combine ranks output by models in the ensemble leading to rank fusion schemes [40].
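As a concrete illustration of the feature-fusion option above, a minimal numpy sketch of both variants follows. The function names are illustrative, and the mean-squared matching term is only one plausible choice of distillation target, not necessarily the loss used by any specific system.

```python
import numpy as np

def concat_fusion(embeddings):
    """Feature concatenation: stack the per-model embeddings into one
    long vector and re-normalize to unit length for cosine matching."""
    v = np.concatenate(embeddings)
    return v / np.linalg.norm(v)

def distillation_loss(student_emb, teacher_embs):
    """Feature-level distillation target: pull the student embedding
    toward every teacher embedding via mean squared error."""
    return float(np.mean([np.mean((student_emb - t) ** 2)
                          for t in teacher_embs]))
```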
Fingerprint Ensemble Generation
The keys to the success of ensemble learning are "accuracy and diversity" of the feature representations included in the ensemble [41]. Diversity is essential to incorporate as much complementary information as possible within the ensemble, which ensures that the failure modes of the different representations do not overlap significantly. On the other hand, it is important that each representation in the ensemble is accurate and comparable to one another. Lack of either diversity or accuracy can degrade the performance of the ensemble instead of enhancing the accuracy achieved using the best individual model.
Given the complexity of designing a DNN architecture that works well for fingerprints and the limited size of the training datasets, we generate the ensemble using the input manipulation approach. Let the original training set be denoted as D_o. Each fingerprint image in the original training set is perturbed using four manipulation techniques (denoted as Flip_y, Flip_x, Ridge, and Minu) to obtain four manipulated datasets (denoted as D_y, D_x, D_r, and D_m, respectively) with the same size as D_o. Figure 3 shows the manipulations selected in this work, which include two generic geometric transformations and two transformations specific to the fingerprint domain. The two geometric transformations are Flip_y and Flip_x, where the original images are flipped along the y and x axes, respectively. Since convolutional neural network (CNN) architectures such as DeepPrint are not rotation-invariant, Flip_y and Flip_x provide a simple way of constructing additional datasets with geometric operations, while ensuring diversity of the generated representations.
Next, we apply the Ridge transformation, which produces the binarized ridge image extracted using the Verifinger SDK. Since the binarized ridge images emphasize level-1 (global) features in a fingerprint, including better clarity of the core and delta points, the representation learned from these images is expected to focus more on the global features. Finally, we create a soft-gated minutiae image, where the Minu transformation de-emphasizes the regions of the fingerprint where no minutiae points are detected by the Verifinger SDK. This is achieved by retaining the 64×64 pixel patches centered at the location of each detected minutia point and applying a Gaussian blur (k = 11) to the regions of the fingerprint image that are not included in any minutia patch. The representation learned from these soft-gated minutiae images can be expected to further emphasize the level-2 (more local, keypoint) features of a fingerprint.
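The Minu soft-gating step can be sketched as below. This is a minimal reimplementation for illustration only: it substitutes an integral-image box blur for the Gaussian blur (k = 11) used here, and the minutiae list is assumed to come from an external detector such as the Verifinger SDK.

```python
import numpy as np

def box_blur(img, k=11):
    """Cheap k x k mean filter via an integral image (a stand-in for
    the Gaussian blur applied in the actual pipeline)."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(0).cumsum(1)
    h, w = img.shape
    return (s[k:k + h, k:k + w] - s[:h, k:k + w]
            - s[k:k + h, :w] + s[:h, :w]) / (k * k)

def soft_gate_minutiae(img, minutiae, patch=64, k=11):
    """Keep patch x patch regions centered on each minutia unchanged;
    blur everything outside those patches."""
    blurred = box_blur(img, k)
    mask = np.zeros(img.shape, dtype=bool)
    half = patch // 2
    h, w = img.shape
    for x, y in minutiae:            # (column, row) minutia locations
        mask[max(0, y - half):min(h, y + half),
             max(0, x - half):min(w, x + half)] = True
    return np.where(mask, img.astype(np.float64), blurred)
```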
An instance of the DeepPrint (DP) architecture is trained on each of the above five datasets (D o , D y , D x , D r , and D m ), resulting in an ensemble of five models denoted as DP o , DP y , DP x , DP r , and DP m , respectively. Our baseline model (DP o ) actually performs better than the model originally proposed in [4] due to some hyperparameter tuning on our part (lowering the learning rate multiplier of the STN). Therefore, in all of our models in the ensemble, we employ this new set of hyperparameters.
Ensemble Fusion
Both early (feature level) and late (score and decision level) fusion schemes are considered in this work. The advantage of late fusion schemes is that no additional training is usually required and it is possible to make full use of the available representations in the ensemble, leading to higher accuracy. The drawback of late fusion techniques is that when comparing two fingerprint images, the ensemble of representations must be generated for both the fingerprint images. In the case of identification, the ensemble of representations has to be generated for the entire gallery. This can be expected to increase the feature extraction and matching times, thereby reducing the system throughput.
In this study, we use the OR rule for decision-level fusion, which accepts a pair of fingerprint images as a match if at least one of the models in the ensemble outputs a match decision. Note that similarity between a fingerprint pair is computed based on representations of the two images generated using the same model only. This is because each model has its own unique threshold for a given FMR during the training stage and cross-representation similarity scores cannot be interpreted fairly using multiple thresholds. Furthermore, it was observed that the genuine score distribution is disproportionately affected by cross-representation comparisons, while the impostor score distribution remains relatively unaffected, thereby drastically reducing the TAR at a given FMR. In the subsequent discussion, the results from decision fusion are denoted as DP EN −DF . Mean and median fusion rules are employed for score level fusion. In mean (median) score fusion, the mean (median) of similarity scores obtained from each model in the ensemble for a given pair of fingerprints is computed. We only report the results for median score fusion, since it consistently outperformed mean score fusion in all our experiments. Henceforth, score fusion results are referred to as DP EN −SF .
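The two late-fusion rules described above can be sketched as follows. This is a simplified illustration: per-model similarity scores and thresholds are hypothetical inputs, with scores and thresholds aligned by model order.

```python
from statistics import median

def decision_fusion_or(scores, thresholds):
    """OR rule: declare a match if at least one model's similarity
    score meets that model's own threshold (each model keeps its own
    threshold for the target FMR)."""
    return any(s >= t for s, t in zip(scores, thresholds))

def score_fusion_median(scores):
    """Median score fusion over the per-model similarity scores."""
    return median(scores)
```

Note that each score in `scores` compares representations produced by the same model, mirroring the restriction that cross-representation scores are never mixed.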
Finally, we implement a feature-fusion technique to generate a single embedding (from a single model) that attempts to encapsulate the diverse information contained within the multiple representations. Using every model in the ensemble (i.e., DP o , DP y , DP x , DP r , and DP m ), we extract the features from each image in the corresponding training set (D o , D y , D x , D r , and D m , respectively). We then use all of these five representations (or a subset of them) as external supervisors to train a new DeepPrint model. The resulting model, denoted as DP EN −SL , is trained in the same way as DP o using images from D o , but utilizes the aforementioned supervisors to minimize an additional objective function L = Σ_{c ∈ M} σ c ||F e − F c ||², where F e is the new representation that is generated by the supervised model DP EN −SL , F c is the pre-extracted feature representation from model DP c , σ c is a scalar weight assigned to model DP c in proportion to the accuracy of DP c relative to the other models in the ensemble, and M ⊆ {o, y, x, r, m}. Note that if we ignore the weights, the loss in the above equation is minimized when F e is the "centroid" of the multiple feature representations, which is known to be quite effective in image retrieval tasks [42]. However, in contrast to existing techniques that require extraction of multiple embeddings and computation of the centroid at inference time, the proposed DP EN −SL model directly learns to extract the centroid representation from the original image during training. This generates a representation that is more discriminative compared to any of the individual models DP c , thereby improving recognition accuracy. Since there is no need to perturb the given image pair during inference, it has the same throughput as the vanilla DP model. Additionally, the size of the gallery remains unchanged as opposed to other late fusion schemes. 
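The supervision objective can be illustrated with a small sketch. This is an interpretation of the description above, not the authors' training code: the loss is taken as a weighted sum of squared Euclidean distances between the new embedding and each supervisor embedding, which (with equal weights) is minimized at the centroid of the supervisor embeddings.

```python
def supervision_loss(f_e, supervisors, weights):
    """Weighted sum of squared Euclidean distances between the new
    embedding f_e and each pre-extracted supervisor embedding F_c."""
    loss = 0.0
    for f_c, w in zip(supervisors, weights):
        loss += w * sum((a - b) ** 2 for a, b in zip(f_e, f_c))
    return loss

def centroid(supervisors):
    """With equal weights, the loss above is minimized at the
    element-wise mean (centroid) of the supervisor embeddings."""
    n = len(supervisors)
    return [sum(vals) / n for vals in zip(*supervisors)]
```

In the actual model this loss term is added to the usual training objective, so the network learns to emit the centroid-like embedding directly from the input image.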
Thus, the proposed feature fusion method enhances accuracy without increasing computational or memory requirements during recognition, albeit at a higher training cost. Unless specified otherwise, the models used to supervise DP EN −SL are DP r and DP m , since supervision based on these two models yielded the highest accuracy. Moreover, since the baseline accuracy of DP r is generally higher than that of DP m , we assign σ r > σ m (0.08 and 0.05, respectively). Figure 4 shows the genuine and impostor distributions on NIST SD4 obtained using DP o and DP EN −SL . We can see that the genuine scores generated by the latter are higher than those of the former.
Databases
We consider five fingerprint databases consisting of rolled, plain and latent prints -NIST SD4 (Rolled), NIST SD14 (Rolled), NIST SD27 (Latent-Rolled), N2N (Rolled, Plain) and FVC 2004 DB2A (Plain) for evaluating the proposed methods. Table 1 reports the key information about each database. Though NIST SD4 and SD14 have been widely used to evaluate SOTA algorithms in the past, they are no longer available in the public domain. NIST SD14 contains 27,000 fingerprint pairs, but we restrict our evaluation only to the last 2,700 pairs in order to ensure comparability in accuracy with existing studies in the literature. Table 2 shows the verification accuracy for the various models on the five evaluation databases used in this study. Some of the key observations from this table are as follows.
Verification
Despite being a SOTA DNN method, there is a gap between the accuracy of the original DeepPrint model (DP ) [4] and the COTS Verifinger matcher, especially when matching the more challenging latent fingerprints (NIST SD27 and N2N-RL datasets). Bridging this gap is the primary motivation for this study. Hyperparameter tuning improves the performance of the original DP model, which explains the difference between the DP [4] and DP o models.
All the five individual models in the ensemble (DP o , DP x , DP y , DP r , and DP m ) achieve comparable accuracy to each other. However, none of the individual models can match the accuracy of Verifinger, which underlines the limitations of relying on a single representation. The ensemble learning models based on decision fusion (DP EN −DF ) and feature fusion through external supervision (DP EN −SL ) consistently outperform all the individual models and significantly close the gap to Verifinger accuracy. While decision fusion usually leads to marginally higher gains in performance compared to feature fusion, this improvement comes at the cost of increased computational requirements. The performance of the externally supervised model DP EN −SL is almost comparable with DP EN −DF while being much faster -in this case, almost five times faster since only one inference per image is required as opposed to five in decision fusion. Additionally, memory consumption is five times lower in the case of DP EN −SL since no perturbations of the gallery are required, as opposed to decision level fusion. Figure 5 shows a few examples from NIST SD4, where the DP o model produces non-match errors (failure to match two fingerprints from the same finger). Of these 30 failure cases, 15 of them can be rectified using the DP EN −SL model and this explains the better accuracy for the ensemble model. In addition to these 15 cases, the decision fusion approach DP EN −DF is able to further correct two more errors. In comparison, Verifinger fails in only three of these 30 cases. This provides clear evidence that the single embedding generated by DP EN −SL is more diverse than the embedding generated by DP o , thereby validating the proposed ensemble learning approach. However, the results also indicate that there is still some way to go to reach the accuracy levels of the COTS matcher.
Identification
A large gallery of 1.2 million unique fingerprints is used in our evaluation [43]. For open-set identification, we use 1,000 non-mated and 1,000 mated fingerprint images from NIST SD4 as probes. For closed-set identification, we use 2,000 mated fingerprint images from NIST SD4 as probes. We repeat this procedure for NIST SD14 with the last 2,700 pairs. Additionally, we report identification results for two latent databases. While conducting open-set identification on the latent databases, half of the total number of rolled mates are included in the gallery. Table 5 and Figure 6 show the results for closed-set identification. For decision fusion in closed-set identification, a search is deemed to result in a correct rank-1 retrieval if the probe's correct mate in the gallery is included in the rank-1 result of at least one of the component models in the ensemble. Clearly, both the decision (DP EN −DF ) and feature (DP EN −SL ) fusion ensemble models significantly outperform the best individual model in the ensemble (DP o ) on all the datasets. DP EN −SL performs on par with DP EN −DF at rank-1, while being five times faster than DP EN −DF . Table 3 shows the results for open-set identification. Note that decision fusion schemes cannot be applied in the open-set scenario because of the presence of non-mated probes. This is a drawback of the decision fusion approach, which can be overcome using the DP EN −SL model that provides a significant reduction in the false positive identification rate (FPIR) compared to the baseline model DP o .
Computational requirements
Using an ensemble of representations can have a significant impact on the retrieval times for search operations (see Table 4). This is where DP EN −SL demonstrates a critical advantage. Since there is no requirement to perturb either the probe or gallery image, the computational and memory requirements for DP EN −SL are the same as for DP o . The search speed for DP EN −SL is almost five times faster than the search speed for DP EN −DF and DP EN −SF . However, if memory and search speed are not major concerns, decision level fusion can be adopted while still being orders of magnitude faster than Verifinger. In practice, one also has to consider the memory required to store the training data for each model (may be needed in the future for updating/finetuning) in the ensemble, which linearly increases with the number of models in the ensemble.
Ablation Study
We evaluate the contribution of each representation in the ensemble by performing 1:1 verification experiments on NIST SD4 using various subsets of representations from the ensemble. Table 6 summarizes the results of this study. While the two geometric transformations Flip y and Flip x are important for the decision fusion approach (DP EN −DF ), Ridge and Minu are critical for the success of feature fusion through external supervision. Simple feature concatenation did not lead to any accuracy improvement. Finally, to verify the stability of the results, we train the same network on the same input data starting with five different initializations and evaluate them on NIST SD4. Based on a t-test between the TAR values of DP o and DP EN −SL over the five different initializations, the difference in the mean TAR was found to be statistically significant at the 0.05 level.
Summary
In this work, we improve a SOTA deep-learning based fingerprint matcher (DeepPrint) by retraining it on manipulations of the original training dataset. We generated an ensemble of five fingerprint embeddings and proposed a feature fusion method that relies on external supervision of the individual representations to produce a more discriminative representation. This boosts the performance of the individual models without increasing computational requirements. We also considered decision and score fusion, which leads to further marginal improvement, albeit with higher computational complexity. These methods can serve as a wrapper that can be applied to any deep recognition system to boost overall performance.
"year": 2022,
"sha1": "5cc83e68566399cb7efd415ca966eaef909e4ef5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5cc83e68566399cb7efd415ca966eaef909e4ef5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Brief Negative Symptom Scale (BNSS): a systematic review of measurement properties
Background Negative symptoms of schizophrenia are linked with poor functioning and quality of life. Therefore, appropriate measurement tools to assess negative symptoms are needed. The NIMH-MATRICS Consensus defined five domains for negative symptoms, which The Brief Negative Symptom Scale (BNSS) covers. Methods We used the COSMIN guidelines for systematic reviews to evaluate the quality of psychometric data of the BNSS scale as a Clinician-Rated Outcome Measure (ClinROM). Results The search strategy resulted in the inclusion of 17 articles. When using the risk of bias checklist, there was generally good reporting quality for structural validity and hypothesis testing. Internal consistency, reliability and cross-cultural validity were of poorer quality. ClinROM development and content validity showed inadequate results. According to the updated criteria of good measurement properties, structural validity, internal consistency and interrater reliability showed good results, while hypothesis testing showed poorer results. Cross-cultural validity and test-retest reliability were indeterminate. The updated GRADE approach resulted in a moderate grade. Conclusions We can potentially recommend the use of the BNSS as a concise tool to rate negative symptoms. Due to weaknesses in certain domains, further validations are warranted.
INTRODUCTION
Schizophrenia consists of several symptom constructs like general psychopathology, positive and negative symptoms. Positive symptoms, e.g. hallucinations or delusions, are mandatory for the diagnosis and respond well to treatment with antipsychotics while negative symptoms are much harder to treat and are linked with poor functioning and quality of life [1][2][3][4][5] . Therefore, they are of great relevance for treatment of patients with schizophrenia.
For a long time, there was no standardized definition of negative symptoms, which however is needed to be able to assess them and develop treatment options. In January 2005 the NIMH-MATRICS Consensus 6 took place to review the understanding of negative symptoms and find a more homogeneous definition. The experts involved in the Consensus conference agreed on five domains of the negative symptoms: blunted affect (reduction in emotional expression), alogia (reduction in spoken words and spontaneous elaboration), asociality (decrease in social interaction due to reduction in the drive to engage in relationships), anhedonia (reduction in experience of pleasure for current events or for future anticipated activities) and avolition (reduction in the ability to initiate and persist in goal-directed activities, due to a lack of motivation) 5 .
Different exploratory factor analytic studies, using different tools, supported the two-dimensional model of negative symptoms in subjects with schizophrenia. According to this model, avolition, anhedonia, and asociality constitute the Motivational Deficit domain (MAP), while blunted affect and alogia the Expressive Deficit domain (EXP) 5 . This model is supported by the evidence that the two domains are related to different behavioral and neurobiological features, as well as different clinical and social outcomes 7 . However, more recently, multicenter confirmatory factor analyses have questioned the validity of the two-factor solution and suggested that a five-factor model or a hierarchical model (five negative symptoms as first-order factors and the two domains, MAP and EXP, as second-order factors) better fit the data, irrespective of the assessment scale, sample nationality/ language or stage of illness 8,9 . There are many scales in schizophrenia that try to assess negative symptoms; however, they do not cover the 5 domains defined by the NIMH 6 , as most of them were developed years before the Consensus. Therefore, the experts involved envisaged the need to develop new assessment tools. The "Clinical Assessment Interview for Negative Symptoms (CAINS)" [10][11][12] was initially developed to be a quite long scale, covering the 5 domains in extensive detail but requiring more time for the assessment. For the other scale the experts concentrated on creating a more concise instrument which would be suitable for widespread use in clinical trials, and proposed "The Brief Negative Symptom Scale (BNSS)" 13 . The BNSS consists of 13 items, which are divided into 6 subscales: 1. Anhedonia, 2. Distress, 3. Asociality, 4. Avolition, 5. Blunted affect, 6. Alogia. It is based on a semi-structured interview and rated on a 7-point scale from 0 (absent) to 6 (severe). The administration takes about 15 minutes. 
A total score is calculated by summing all 13 items, possible scores can range from 0 to 78 points.
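The scoring rule is simple enough to state as code. This is a trivial sketch of the total-score arithmetic only; the item-to-subscale assignments are not reproduced here.

```python
def bnss_total(item_ratings):
    """Total BNSS score: the sum of the 13 item ratings,
    each rated 0 (absent) to 6 (severe); range 0-78."""
    assert len(item_ratings) == 13, "BNSS has 13 items"
    assert all(0 <= r <= 6 for r in item_ratings), "ratings are 0-6"
    return sum(item_ratings)
```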
As there has not been an attempt to systematically review the psychometric properties of existing negative symptom scales, our aim was to evaluate the quality of the BNSS by applying the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) [14][15][16] guidelines for systematic reviews of patient-reported outcome measures.
METHODS
The methods used in this systematic review follow the guidelines described by Prinsen et al., 2018: COSMIN guideline for systematic review of patient-reported outcome measures [14][15][16] . They were developed to objectively evaluate rating scales in a standardized way and include several steps: evaluate the methodological quality of the included studies by using the COSMIN Risk of Bias checklist, apply criteria for good measurement properties and grade the quality of the evidence by using the modified GRADE approach according to COSMIN.
The COSMIN methodology was primarily created for patientrated outcome measures (PROMs), however the methodology can be adapted and used on clinician-reported outcome measures (ClinROMs) which is the category the Brief Negative Symptom Scale falls into [14][15][16][17] .
Literature search strategy for validation studies
Two reviewers (LW and SW) independently performed a literature search by searching the databases PubMed and Web of Science for journal articles published in English between January 2010 and June 2022 inclusive; disagreements were resolved by finding consensus, if needed by a third reviewer (SL). The search terms used were "BNSS" OR "Brief Negative Symptom Scale".
Evaluation of measurement properties
The evaluation of the measurement properties was independently performed by two reviewers (LW and SW) for all the following steps. If any disagreements became apparent, a consensus was reached by consulting a third, professor-level reviewer (SL).
Assessing the risk of bias
The Risk of Bias Checklist [14][15][16] was developed to rate the reporting quality of studies for specific criteria.
The standards for good methodological quality are sorted by criteria in 10 boxes: ClinROM development, content validity, structural validity, internal consistency, cross-cultural validity/ measurement invariance, reliability, measurement error, criterion validity, hypothesis testing for construct validity, responsiveness.
Each measurement property is scored on a four-point scale using the descriptors "very good", "adequate", "doubtful", and "inadequate". A "not applicable" option is also included for each property. An overall score for the methodological quality of each measurement property is determined by taking the lowest rating of any of the items in a box, which is called "worst score counts" principle.
The first two boxes of the Risk of Bias checklist, "outcome measure tool development" and "content validity" which relate to content validity, were deemed to be applicable to only the original publication which describes the development of the scale.
Criterion validity and responsiveness were excluded from this systematic review because there is no true gold standard for negative symptom assessment scales. Even the most frequently used scale in schizophrenia, the Positive and Negative Syndrome Scale (PANSS) 18 , has not undergone all steps required by the COSMIN criteria including the evaluation of content validity. Therefore, it can't serve as a true gold standard.
Assessing the updated criteria for good measurement properties
The quality of the instrument itself was assessed by using the updated criteria for good measurement properties [14][15][16] , which comprise eight criteria: structural validity (i.e., the scale validity assessed by using Rasch analysis/Item Response Theory or Classical Test Theory), internal consistency (measured by the Cronbach's alpha when at least low evidence of structural validity is available), reliability (inter-rater or test-retest reliability, measured by intraclass correlation coefficient), measurement error (determining the limits of agreement and smallest detectable change against a measure of the minimal important change), hypotheses testing for construct validity (assessing whether a clear hypothesis was defined and tested), cross-cultural validity/ measurement invariance (i.e., measurement invariance across groups defined by ethnicity or age/ gender), criterion validity and responsiveness (measured as correlation with gold standard or area under the curve ≥ 0.70). Criterion validity and responsiveness could not be evaluated due to the lack of gold standards, as mentioned above.
Grading the quality of evidence
The GRADE approach was used to grade the quality of evidence, which refers to the confidence that the result is trustworthy. It is based on the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach for systematic reviews of clinical trials, modified by the COSMIN group [14][15][16] and uses four factors to determine the quality of the evidence: risk of bias (quality of the studies), inconsistency (of the results of the studies), imprecision (total sample size of all included studies) and indirectness (evidence comes from different populations, interventions or outcomes than the population of interest in the review). The quality of the evidence is graded as high, moderate, low or very low. The starting point is always the assumption that the evidence is of high quality and is subsequently downgraded by one, two or three levels per factor if the criteria are not sufficient (see Table 1).
Risk of bias.
To use the risk of bias assessment for the GRADE approach, each risk of bias item/box was evaluated by applying the criteria from Table 2. Following the worst-case approach, if one Risk of Bias item/box has an extremely serious risk of bias it can be downgraded by three points. The confidence in the evidence of an item was only considered for downgrading if the item had a determinate result in Step 2, "updated criteria of good measurement" (i.e., it received a "+" or "-" rating and not a "?").
Inconsistency. As we did not quantitatively pool (meta-analyze) the results, our criteria for downgrading were as follows: if no inconsistency was found, the scale was not downgraded; if little inconsistency was found with a valid explanation, the scale was not downgraded; if little inconsistency was found with no explanation, or moderate to high inconsistency was found with a valid explanation for these results, we downgraded by −1 (serious); if moderate to high inconsistency was found with no satisfactory explanation, we downgraded by −2 (very serious).
Imprecision. This evaluates the total sample size of all included studies. If the sample size was n = 50-100, we downgraded by −1; if the sample size was n < 50, we downgraded by −2.
Indirectness. There was a downgrading for indirectness if the patients included in the studies were not part of the population of interest. For this review, the sample groups must consist of patients with schizophrenia or schizoaffective disorder.
If there was a comparator group of patients with a different disease or a healthy control group, no downgrade was given.
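The downgrading logic described above can be sketched as a small function. This is only an illustration of the rules as stated in this review, not COSMIN tooling: the risk of bias, inconsistency and indirectness downgrades are passed in directly as level counts, while the imprecision downgrade is derived from the pooled sample size n.

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_evidence(risk_of_bias=0, inconsistency=0,
                   imprecision_n=None, indirectness=0):
    """Start from 'high' quality of evidence and downgrade per factor,
    as in the modified GRADE approach described above."""
    downgrade = risk_of_bias + inconsistency + indirectness
    if imprecision_n is not None:
        if imprecision_n < 50:
            downgrade += 2       # n < 50: very serious imprecision
        elif imprecision_n <= 100:
            downgrade += 1       # n = 50-100: serious imprecision
    return LEVELS[max(0, len(LEVELS) - 1 - downgrade)]
```

With no downgrades the result stays "high"; each additional level of concern moves the grade one step down, bottoming out at "very low".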
Retrospective re-validation
The authors of two of our included validation studies 19,20 , AM and SG, who also participated as co-authors in this systematic review, re-validated structural validity for one and internal consistency for both studies (see supplement).
Literature search strategy for validation studies
A total of sixty-seven articles (n = 67) were found on PubMed, twenty articles (n = 20) were chosen by title/abstract and thirteen of these articles (n = 13) were included in the systematic review. A total of one thousand ninety-nine articles (n = 1099) were found on Web of Science, twenty-four (n = 24) were chosen by title/ abstract and four (n = 4) were included in the systematic review. The literature search is shown in the Flowchart in Fig. 1. The general characteristics of the included studies are portrayed in Table 3.
Assessing the risk of bias
Content validity
ClinROM development: ClinROM development is per definition not a measurement property, it is however considered when evaluating content validity. It asks about the general design requirements and if the assessment of comprehensibility and comprehensiveness during pilot testing was performed.
One study 13 was evaluated for the ClinROM development and received an "inadequate" rating because it is not clear if the patients were asked about comprehensibility or comprehensiveness of the scale (see Table 4).
Content validity:
A content validity study refers to a study asking patients and professionals about the relevance, comprehensiveness, or comprehensibility of an existing ClinROM. Such a study can be performed by the developers or by researchers who were not included in the initial development.
No information was given if testing on content validity was performed, therefore it could not be considered in this systematic review.
Internal structure
Structural Validity: Structural validity measures the degree to which the scores of the scale are an adequate reflection of the construct to be measured. Therefore, it is only relevant if the scale is based on a reflective model, where it is assumed that all items in a scale or subscale are manifestations of one underlying construct and are expected to be correlated. This means that each item and subscale of the BNSS measures the same underlying construct, which is negative symptoms in patients with schizophrenia or schizoaffective disorder.
Structural validity is measured by performing factor analysis. Confirmatory factor analysis is preferred, which results in a "very good" rating while studies with exploratory factor analysis only receive an "adequate" rating.
Of the overall seventeen included studies, ten performed a factor analysis. Five 19-23 performed a confirmatory factor analysis, which resulted in a "very good" rating; two 24,25 received "adequate" ratings for only performing exploratory factor analysis; one 26 received a "doubtful" rating for exploratory factor analysis combined with a sample size < 100; and two 13,27 received "inadequate" ratings, also due to an inadequate sample size (see Table 4).
Cross-cultural validity/ Measurement invariance: One study 31 reported on cross-cultural validity by comparing patients with schizophrenia, bipolar patients and a healthy control group with each other. The reporting quality of the validation received a "doubtful" rating (see Table 4).
Remaining measurement properties: Reliability: Eleven papers reported on interrater reliability. Three papers 23,32,33 were rated "adequate" and the remaining eight 13,19,22,[25][26][27]30,34 received a "doubtful" rating due to an inappropriate time interval or missing information on the rating conditions and the similarity of instructions, administrations, environment etc. Five papers 13,23,27,29,30 also tested for test-retest reliability. None of them, however, calculated ICCs for the test-retest reliability, but only Pearson's correlations. The use of Pearson's or Spearman's correlations is considered doubtful according to the COSMIN methodology and therefore leads to an indeterminate result later on (see Table 4).
Hypotheses testing for construct validity: Convergent validity: Hypotheses testing for convergent validity assumes that the investigated scale is valid for the construct it is supposed to measure. Ideally, the comparator tool has very good measurement properties and measures an identical construct. However, this turned out to be difficult to evaluate, as we are simultaneously rating the measurement properties of other existing negative symptom scales 35 and there is as yet no available data on their overall measurement properties. Additionally, because the construct of negative symptoms has gone through many changes over the past decades, only similar constructs, not identical ones, could be found for comparison.
Discriminant validity: Hypotheses testing for discriminant validity assumes that the investigated scale is valid for the construct it wants to measure and compares it to another scale that measures a different construct. Mostly positive symptom scales were used as a discriminant construct as well as depression scales as it is of great importance to differentiate between symptoms of depression and negative symptoms.
Assessing the updated criteria for good measurement properties
Internal structure
Structural validity: Although ten studies performed a factor analysis, five 13,24-27 are indeterminate and received a "?" due to missing calculations. This is inconvenient as all five validated the two-factor structure of the BNSS with a MAP and EXP subscale.
The remaining five studies 19-23 all had sufficient results and therefore received "+" ratings (see Table 4).
In both their validation studies, Mucci et al. 19,20 found sufficient results for the five-factor model and the hierarchical model, with CFI > 0.95. It needs to be stated that they excluded the Distress item in their analyses, as it is not an original domain named by the NIMH-MATRICS Consensus 5 . Jeakal et al. 22 favored the five-factor model, with TLI and CFI values > 0.95 for the five-factor as well as the 2nd-order five-factor hierarchical model. Sun et al. 23 also favored the five-factor model, with a CFI of 0.996 and a TLI of 0.999, but had results of > 0.97 for CFI and TLI for all their tested models.
Ang et al. 21 had sufficient results for all their tested factor structures with TLI and CFI > 0.95. The second-order model, where the Distress item was excluded, had the highest results with a CFI = 0.999. They named the five domains as first-order factors and Emotional Expressivity and Motivation/Pleasure as second-order factors.
Overall, it can be said that the hierarchical model and the five-factor model show the best results in the included studies and no clear recommendation can be given on which model should be used.
Internal consistency: Four studies 19-21,23 calculated Cronbach's alpha for the individual subscales and received a "+" rating, with Cronbach's alpha ranging from 0.8 to 0.97 for their subscales. One study 22 only calculated Cronbach's alpha-if-item-deleted and no subscale scores. Therefore, it received a "?", as these results are indeterminate. For the remaining ten 13,25-33 studies that calculated Cronbach's alpha, the criterion of "at least low evidence for sufficient structural validity" was not met. Therefore, they all received "?" as their rating. As five studies nevertheless have determinate results with Cronbach's alpha > 0.7 for all subscales, sufficient internal consistency can be assumed (see Table 4).
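For reference, Cronbach's alpha as used in these internal-consistency analyses follows the standard Classical Test Theory formula: alpha = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch, assuming complete item-level data and using population variances:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for `items`, a list of per-item score lists,
    all measured over the same respondents."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]   # per-respondent total
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Values above 0.7, as reported for the BNSS subscales, are conventionally taken as acceptable internal consistency.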
Cross-cultural validity/measurement invariance: One study 31 tested measurement invariance, comparing patients with schizophrenia, patients with bipolar disorder, and a healthy control group. No statement can be made, as the results are indeterminate "?" (see Table 4).
Remaining measurement properties
Reliability: Eight 13,19,22,23,25,27,30,32 of the eleven studies evaluating the scale's interrater reliability were sufficient and received a "+" rating; one 26 was indeterminate "?"; and two were insufficient "−": one 34 due to the Distress item, with an ICC of 0.46, and another 33 due to an ICC of 0.55 for Blunted affect, which is not readily explicable (see Table 4). All other subscales in these two studies had an ICC > 0.80. The range of the intraclass correlation without the Distress item is 0.77-0.98, while the range for the Distress item is 0.46-0.94; the particularly poor Distress result in the study by Gehr et al. 34 (ICC = 0.46) remains unexplained.
Hypotheses testing for construct validity:
The three hypotheses to be tested according to COSMIN are:
1. Correlations with instruments measuring similar constructs should be ≥ 0.50.
2. Correlations with instruments measuring unrelated constructs should be < 0.30.
3. Correlations defined under 1 and 2 should differ by a minimum of 0.10.
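These three rules are mechanical enough to be expressed as a simple check. The function name and example correlations below are illustrative, not drawn from the included studies:

```python
def cosmin_hypotheses_met(r_similar, r_unrelated):
    """Apply the three COSMIN construct-validity hypotheses to a pair of
    correlations: one with a similar construct, one with an unrelated one."""
    h1 = abs(r_similar) >= 0.50                       # hypothesis 1
    h2 = abs(r_unrelated) < 0.30                      # hypothesis 2
    h3 = abs(r_similar) - abs(r_unrelated) >= 0.10    # hypothesis 3
    return h1 and h2 and h3

ok = cosmin_hypotheses_met(0.85, -0.07)    # all three criteria met
bad = cosmin_hypotheses_met(0.44, 0.35)    # fails criteria 1 and 2
```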
Convergent validity: Sixteen studies tested for convergent validity; ten 13,19,20,23,[26][27][28][29]31,32 received a "+" and six 21,22,25,30,33,34 a "-" (see Table 4). Convergent validity was calculated using multiple different scales. With the "Scale for the Assessment of Negative Symptoms (SANS)" 36,37 , correlations ranged from 0.44 to 0.95. We decided to exclude the Distress item from this range, as it had a correlation as low as −0.11 with the SANS total. The "Positive and Negative Syndrome Scale (PANSS)" 18 negative subscale has correlations ranging from 0.31 to 0.90, and the "Brief Psychiatric Rating Scale (BPRS)" 38 negative subscale yielded correlations ranging from 0.10 to 0.87. These three (sub)scales were the most used comparator tools. As the sixteen studies were performed in a wide range of cultures, and often in different languages, a certain inconsistency was expected. The range throughout these studies was, however, larger than anticipated, with results spanning both the sufficient and insufficient ranges. One study 22 measured convergent validity as the total-scale correlation between the BNSS and the CAINS, resulting in a correlation of 0.90.
Discriminant validity: Fifteen studies tested for discriminant validity; five 19,20,23,29,33 received a "+" and ten 13,21,22,[25][26][27][30][31][32]34 a "-" (see Table 4). For discriminant validity, an even greater number of different comparator tools was used, which is why only the most used (sub)scales are mentioned here. The PANSS positive subscale had correlations with the BNSS from −0.13 to 0.49, the PANSS general psychopathology subscale's correlations ranged from −0.21 to 0.58, and the Hamilton Depression Rating Scale (HDRS) correlations ranged from −0.13 to 0.31. Other (sub)scales, however, had only results below the hypothesis-testing limit of 0.30: for example, the Calgary Depression Scale (CDSS), with correlations ranging from −0.38 to 0.28; the BPRS positive subscale, with correlations ranging from −0.31 to 0.08; and the Young Mania Rating Scale (YMS), with correlations ranging from −0.10 to −0.07. The results for discriminant validity are similar to those for convergent validity in terms of consistency, which can likewise be explained by the cultural differences and multiple languages of the study groups.
Grading the quality of evidence
(1) Structural validity, internal consistency, interrater reliability, convergent validity and discriminant validity all had either multiple studies of adequate quality or at least one of very good quality. There was only one study of doubtful quality for cross-cultural validity; however, the result was indeterminate and will therefore not be considered as a criterion for downgrading. The same applies to test-retest reliability, where there were only studies of doubtful quality but with indeterminable results. The BNSS scale will therefore not be downgraded for Risk of Bias. (2) Inconsistency was found in convergent and discriminant validity, which is explained at length under "Updated criteria of good measurement", and therefore a downgrade of −1 was proposed.
The proposals for downgrading were discussed between the two independent raters, and consensus was reached with a third, professor-level rater to apply an overall downgrade of −1 for the scale's inconsistency, as sufficient explanation was found. This changes the "high" grade to a "moderate" grade. (3) The total included sample size of all studies is n = 2554, so there will not be a downgrade for imprecision. The grade for the quality of evidence will therefore stay "moderate". (4) The tested population consisted only of in-/outpatients with schizophrenia or schizoaffective disorder for all included studies. There is no need to downgrade for indirectness, which results in a "moderate" rating for the BNSS scale.
The overall quality of the evidence is now considered "moderate" for the BNSS scale, which leads to the conclusion that there is moderate quality evidence that the measurement properties of interest are sufficient.
DISCUSSION
Even though the BNSS 13 is a relatively new scale, it has been used in many different countries and cultures. As it is a short measurement tool, it is attractive for clinical studies. However, to the authors' knowledge, this is the first systematic review to examine the measurement properties of the scale. The evaluation was undertaken using the COSMIN guidelines and the COSMIN Risk of Bias checklist [14][15][16] . Seventeen studies were identified as relevant by a systematic literature search and included in this study.
The original publication 13 failed to test for or report on ClinROM development, which includes the general design requirements as well as conducting a cognitive interview study asking patients/professionals about the relevance/comprehensibility/comprehensiveness of the included items. This must be considered a weakness of the BNSS. However, the content validity of the BNSS is based on the 2005 NIMH Consensus 6 ; thus, it would be possible to test the content validity retrospectively. It is of great importance to report or perform the evaluation of ClinROM development and content validity using the COSMIN Risk of Bias checklist to make the overall results of the validation of the scale more reliable and provide well-reported psychometric data. One possibility would be to retrospectively validate the content validity by forming focus groups, which could potentially improve the recommendability of the scale.
The BNSS demonstrates good psychometric properties for structural validity, internal consistency, reliability and hypothesis testing. However, the quality of evidence for cross-cultural validity is somewhat poorer. Nonetheless, it is of great importance that a rating scale is culturally adaptable, produces comparable results and is an adequate reflection of the original version in different populations, countries and languages. Therefore, cross-cultural validity needs to be properly validated. As the BNSS scale is available in multiple translations, further validation studies should be relatively easy to conduct.
We recommend validating internal consistency according to the COSMIN guideline, as currently most studies only calculated internal consistency for the total scale instead of for each individual subscale. Such a retrospective re-validation is possible according to COSMIN criteria, and for two of the included studies 19,20 it improved our rating. It is equally important to mention that internal consistency can only receive a positive rating if the criterion of "at least low evidence for sufficient structural validity" is met. Therefore, we recommend performing confirmatory factor analysis for the BNSS scale, as it would help determine its structural validity as well as its internal consistency. Indeed, performing further confirmatory analyses would make it possible to overcome the limits of the exploratory factor analyses and to replicate more recent findings of a five-factor or a hierarchical model of negative symptoms 8,9 , which were also supported by our post-hoc analysis of the study conducted by Mucci et al. Defining the correct characterization of negative symptom structure could have important implications, since the two-factor structure might have foreclosed the identification of neurobiological bases or therapeutic effects that are specific to one of the five domains. Therefore, considering current findings, future versions of the DSM-5 should consider each of the five domains separately, as described by the NIMH-MATRICS Consensus 6 .
The additional Distress item turned out to be a weakness of the BNSS scale as it repeatedly showed poorer results and was already excluded by some of the authors in their validation studies. We therefore recommend revising the scale in this regard and in the future exclude the item from the scale, as it was not part of the original five domains established by the NIMH Consensus 6 .
Based on the results of the evaluation, an overall judgement of the recommendability of the BNSS scale is the final product of the evaluation. According to the COSMIN guidelines [14][15][16] , ClinROMs are categorized into three categories:
(A) ClinROMs with evidence for sufficient content validity (any level) AND at least low-quality evidence for sufficient internal consistency
(B) ClinROMs categorized in neither A nor C
(C) ClinROMs with high-quality evidence for an insufficient measurement property
ClinROMs categorized as "A" can be recommended for use, and results obtained with these ClinROMs can be trusted. ClinROMs categorized as "B" have potential to be recommended for use, but they require further research to assess their quality. ClinROMs categorized as "C" should not be recommended for use.
No testing for sufficient content validity was performed. Due to this reason the BNSS scale is categorized as (B).
However, content validity is defined as the degree to which the content is an adequate reflection of the construct to be measured. The BNSS is based on the NIMH Consensus, which aimed at a standardized definition of the negative symptom construct; it thereby provides adequate content validity for the scales based on it. Still, as mentioned above, ClinROM development and content validity need to be evaluated in the future to increase confidence in the scale.
It needs to be mentioned that this systematic review evaluated the BNSS scale only according to the COSMIN guidelines for systematic reviews. This tool is relatively new and follows rather strict criteria, while other methodologies might reach different conclusions. Most scales used to rate patients with schizophrenia would probably receive similar or even worse results. In the future, the COSMIN guidelines could be used prospectively to create new rating scales or conduct validation studies so that all required criteria are included.
Our study has potential limitations. We were not able to perform a meta-analysis on this topic, as the data were presented in many different ways and quantitatively summarizing the results was therefore not possible. Furthermore, no protocol was written during the process.
The BNSS is still recommendable, compared to the older negative symptom scales such as the SANS 36,37 , the BPRS 38 , the "Krawiecka-Manchester-Scale" (KMS) 39 , the "A Negative Symptom Rating Scale" (NSRS) 40 , the PANSS 18 , the "Schedule for the Deficit Syndrome (SDS)" 41 , the "High Royds of Evaluation of Negativity Scale (HEN)" 42 and the "Negative Symptom Assessment of Chronic Schizophrenia Patients (NSA-16)" 43 . Several of them (BPRS, KMS, NSRS, PANSS) do not cover the five negative symptom domains established by the NIMH Consensus. The remaining scales (SANS, SDS, HEN, NSA-16) showed poorer results for the psychometric properties as evaluated in "Clinician-reported negative symptom scales in schizophrenia: a systematic review of measurement properties." (LW, SW (joined first authors), SG, AM, JD, SL; manuscript in preparation). The only "competitor" of the BNSS scale is the CAINS scale 10-12 which we examined in a different paper: "Clinical Assessment Interview for Negative Symptoms (CAINS): a systematic review of measurement properties." (SW, LW, JD, AM, SG, SL; manuscript under review). The CAINS also received a "moderate" rating (manuscript under review), which is why no clear recommendation can be given on which scale is of better quality than the other. As the BNSS however needs a shorter administration time as compared to the CAINS (15 minutes vs. 30 minutes), we would recommend the use of the BNSS over the CAINS if there is a need of a quicker evaluation of negative symptoms. The confidence in both rating scales could still be improved by conducting further validation studies. Moreover, a comparison of the BNSS and the CAINS would be of great interest as they were both developed based on the NIMH Consensus around the same time. So far only one study 22 has compared the two scales which was restricted to convergent validity.
To conclude, the BNSS performed well regarding structural validity, internal consistency, reliability and hypothesis testing for convergent validity; however, the measure did not attain satisfying results regarding hypothesis testing for discriminant validity, and only one study reported on cross-cultural validity. Considering the overall result of this systematic review, we classify the BNSS as a potentially recommendable tool to rate negative symptoms, especially if a quick administration time is needed. Further validation studies including the specific requirements made by COSMIN should, however, be conducted in order to address the weaknesses of the BNSS pointed out in this systematic review and to further improve the confidence in this scale.
Analysis of multiexponential decay has remained a topic of active research for over 200 years. This attests to the widespread importance of this problem and to the profound difficulties in characterizing the underlying monoexponential decays. Here, we demonstrate the fundamental improvement in stability and conditioning of this classic problem through extension to a second dimension; we present statistical analysis, Monte-Carlo simulations, and experimental magnetic resonance relaxometry data to support this remarkable fact. Our results are readily generalizable to higher dimensions and provide a potential means of circumventing conventional limits on multiexponential parameter estimation.
Multiexponential analysis is a longstanding problem in mathematics and physics, with applications in biomedicine 1-3 , engineering 4 , food sciences 5 , the petrochemical industry 6 , and many other settings 7,8 . The goal of many of these analyses, and the problem we will address, is to extract parameter estimates from a real-valued multiexponential decay function of the form

y(t) = Σ_{i=1}^{n} c_i e^{−t/τ_i} ,   (1)

where n is the number of underlying monoexponential components, and τ i and c i the decay time constant and amplitude of the i-th component. This is a special case of the Laplace transform, itself a special case of the Fredholm equation of the first kind, in which the integrand is the product of an exponential kernel and a sum of delta functions, with all quantities real.
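In code, Eq. (1) is direct to evaluate. A minimal sketch, with illustrative parameter values in the spirit of the biexponential examples used later in the paper:

```python
import numpy as np

def multiexp(t, c, tau):
    """Multiexponential decay y(t) = sum_i c_i * exp(-t / tau_i), Eq. (1)."""
    t = np.asarray(t, dtype=float)
    return sum(ci * np.exp(-t / ti) for ci, ti in zip(c, tau))

# Biexponential (n = 2) example: c = (0.3, 0.7), tau = (60, 80) ms,
# sampled at evenly spaced echo times as in the paper's 1D simulations.
t = np.arange(8.0, 520.0, 8.0)            # 8, 16, ..., 512 ms (64 points)
y = multiexp(t, [0.3, 0.7], [60.0, 80.0])
```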
Fitting discrete decay data to Eq. (1), in principle, permits estimation of relative component sizes and decay constants for all components. It is well-known, however, that this process can be severely ill-posed 8,9 ; for closely spaced exponential time constants {τ i } , especially with disparate relative component sizes {c i } , there are many sets of distinct decay times and component amplitudes which closely fit the data. A consequence of this is instability in the values of the set of derived parameters in the presence of noise. This can be illustrated through a modification of an example provided by Lanczos 10 .
In Fig. 1, we see that there is a near-perfect superposition of two biexponential functions with very different pairs of underlying monoexponentials. Clearly, it is impossible to claim that one of these is more suitable than the other to describe an underlying noisy data set. The ill-posedness of this special case of the inverse Laplace transform (ILT) 9,11-14 stands in stark contrast to the well-posed Fourier or, equivalently, inverse Fourier, transform.
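This near-degeneracy is easy to reproduce numerically: fixing a deliberately wrong pair of decay constants and refitting only the amplitudes still reproduces a biexponential curve closely. A sketch with illustrative parameter values (not those of Lanczos or Fig. 1):

```python
import numpy as np

t = np.arange(8.0, 520.0, 8.0)                         # sample times, ms
y_true = 0.3 * np.exp(-t / 60.0) + 0.7 * np.exp(-t / 80.0)

# Basis with substantially different time constants (50 and 90 ms)
A = np.column_stack([np.exp(-t / 50.0), np.exp(-t / 90.0)])
coef, *_ = np.linalg.lstsq(A, y_true, rcond=None)
rel_resid = np.linalg.norm(A @ coef - y_true) / np.linalg.norm(y_true)
# rel_resid is of order a percent, comparable to modest experimental noise,
# even though both decay constants and amplitudes differ appreciably --
# so noisy data cannot distinguish the two parameter sets.
```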
Many methods 8,[15][16][17][18][19] have been developed for multiexponential analysis, and are effective in particular settings. However, they do not address the fundamental problem of ill-conditioning. In contrast, we show that we can markedly improve the conditioning through two- and higher-dimensional extension of multiexponential decay. We develop this in the context of magnetic resonance relaxometry (MRR), with which, perhaps uniquely among experimental methods, multiexponential data can be generated in one, two, or higher dimensions 2,[20][21][22][23][24][25] . In fact, Celik, et al. 26 performed a direct experimental confirmation of the increased stability of parameter estimation in 2D MRR, for both nonlinear least squares (NLLS) and non-negative least squares (NNLS) analyses, through a preliminary set of simulations and phantom experiments. However, no analysis was presented to support the empirical results. Thus, the purpose of the present work is to extend the preliminary findings of Celik et al. 26 by providing a statistical theory along with a much more comprehensive set of simulations and experimental data supporting the increased stability of multiexponential analysis in 2D as compared to in 1D. We focus on the archetypical case of the biexponential model, Eqs. (2) and (3). Our main result is that stability improves progressively and markedly for an increasing ratio between the relaxation times, T 1,1 and T 1,2 , of the two components in the indirect dimension, providing increasing discriminatory information content.
Experimental results are obtained from MRR experiments on a two-component homogeneous gel. For all analyses, we incorporate the consideration that the 2D experiment is of longer duration and provides a greater number of data points than 1D, so that improved stability at a given signal-to-noise ratio (SNR) would be expected. We compensate for this by increasing the signal-to-noise ratio used for 1D Monte-Carlo (MC) simulations and experiments by the square root of the number of indirect dimension measurement points used for the corresponding 2D analysis, √ n indirect , as was also done in Celik et al. 26 . We also note that Kim et al. 25 , as part of a comprehensive work on applications, report an improved Cramér-Rao lower bound (CRLB) for the 2D inverse Laplace transform of decaying exponentials in a numerical example, without, however, exploring parameter dependencies, and without simulation or experimental results addressing this concept.
For clarity, we note that in contrast to the problem outlined above, multi-dimensional Fourier transform magnetic resonance spectroscopy is a mature field 29 . A fundamental concept in this area is that spectral lines that overlap in a 1D spectrum may be resolved in two- or higher-dimensional spectra. However, the Fourier transform is a well-conditioned numerical problem, with condition number of one, so that this is quite distinct from the improvement in condition number from extension of the inverse Laplace transform into higher dimensions.
The indicated derivatives are the elements of the Jacobian matrix of G evaluated at p 0 , which we will denote by B , with elements B jl = ∂G j /∂p l evaluated at p = p 0 . Eq. (8) then becomes a linear least squares problem, which is of the form of Eq. (5); G(p 0 ) and Bp 0 are constants, so Cov(G(p 0 ) − Bp 0 − d) = Cov(d) . Then, as above, the covariance of the parameter estimates follows, with the diagonal elements again defining the variances of the derived parameters. This indicates that the condition number for this analysis is defined by B . The complexities of finding a global rather than a local minimum solution to Eq. (8) are a substantial but separate topic; if the linearization is performed about a local minimum, the derived variances will be appropriate for the parameter estimates recovered at that local minimum.
Cramér-Rao lower bound (CRLB) theory is an alternative approach to obtaining these results [31][32][33] ; it is also a local analysis. The matrices G T G and B T B in Eqs. (6) and (13) are Fisher information matrices. www.nature.com/scientificreports/
Methods
One-dimensional analysis. A spin-echo experiment with sampling at echo peaks 1,34 for a two-component system leads to the signal model of Eq. (2), with a signal vector S defined by S j (p) = S(TE j , p) , and p = (c 1 , c 2 , T 2,1 , T 2,2 ) , with least squares parameter estimation following from

p* = arg min_p Σ_j [S j (p) − d j ]^2 ,

where the data vector d is defined according to d j = echo amplitude at time TE j . There is no requirement for sampling at equally spaced echo times, though this is conventional and convenient. The j-th row of the Jacobian of S , which we denote by B , corresponds to TE j , while column l corresponds to the derivatives of S j (p) with respect to the l-th element of p :

B jl = ∂S j (p)/∂p l .

A certain degree of prior knowledge of T 2,1 and T 2,2 is required to select the vector of measurement times to ensure that the short-time and long-time behavior of the signal is well-sampled; the details of this choice can have a significant effect on fit quality in the presence of noise 35,36 but are not the subject of the present analysis. The derivatives B jl appearing in the Jacobian are calculated analytically from Eq. (2); for more complicated signal models, they can be computed numerically. The calculation of Cov(p*) for different assumed values of p follows from Eq. (13).
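Assuming i.i.d. Gaussian noise, so that Cov(d) = sigma^2 I and the linearized covariance reduces to sigma^2 (B^T B)^(-1), the 1D calculation can be sketched as follows (parameter values illustrative; the analytic derivatives follow from the biexponential model of Eq. (2)):

```python
import numpy as np

def jacobian_1d(t, c1, c2, T21, T22):
    """Analytic Jacobian B_jl = dS_j/dp_l for the biexponential model,
    with p = (c1, c2, T2,1, T2,2)."""
    e1, e2 = np.exp(-t / T21), np.exp(-t / T22)
    return np.column_stack([
        e1,                        # dS/dc1
        e2,                        # dS/dc2
        c1 * t / T21**2 * e1,      # dS/dT2,1
        c2 * t / T22**2 * e2,      # dS/dT2,2
    ])

t = np.arange(8.0, 520.0, 8.0)                 # TE_j, ms
B = jacobian_1d(t, 0.3, 0.7, 60.0, 100.0)
sigma = 1.0 / 800.0                            # illustrative noise SD (SNR ~ 800)
cov = sigma**2 * np.linalg.inv(B.T @ B)        # linearized Cov(p*) for Cov(d) = sigma^2 I
sd = np.sqrt(np.diag(cov))                     # SDs of (c1, c2, T2,1, T2,2)
cond = np.linalg.cond(B)                       # worsens as T2,2 approaches T2,1
```

Recomputing `cond` as T 2,2 moves toward T 2,1 reproduces the expected deterioration of conditioning.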
One-dimensional T 1 relaxometry experiments may also be implemented; in brief, an initial inversion pulse is followed by a readout pulse after a variable inversion recovery time TI. A sequence of n TI inversion recovery times indexed by k, TI k , is used to obtain a full data set. The appropriate choice of {TI k } is dependent on the values of T 1,1 and T 1,2 . The corresponding signal model for a two-component system is given by the terms in parentheses in Eq. (3), with the B matrix determined from this. Similar comments apply to one-dimensional diffusion experiments designed to obtain ADC values of underlying components.
Two- and higher-dimensional analysis. There are two independent measurement variables in 2D relaxometry and related experiments 24 . For the version of T 1 − T 2 experiments we have described, each inversion time TI k is followed by a number n TE of spin echoes indexed by j, denoted by TE j . Individual measurement points are now defined by a pair of times TE j , TI k , with corresponding data points d j,k . For two components, this results in a signal described by the two-component model Eq. (3). A full dataset is obtained by stepping through a pre-defined number, n TI , of inversion recovery times, acquiring spin-echo data for each.
The least squares minimization problem is defined by the Frobenius norm

p* = arg min_p Σ_j Σ_k [S(TE j , TI k , p) − d j,k ]^2 ,

a finite double sum over the two-dimensional array of measurement points; this is obviously independent of the ordering of the measurement time pairs {TE j , TI k } and their corresponding data points d j,k . The vector of model parameters is p = (c 1 , c 2 , T 1,1 , T 1,2 , T 2,1 , T 2,2 ) , so that the B matrix has 6 columns. Each of these columns corresponds to a vectorization of the full 2D grid ρ of measurement points, where ρ j,k = {TE j , TI k } ; this is a convenient way to ensure that each column of B contains elements corresponding to each measurement point.
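The vectorized construction of B over the TE × TI grid can be sketched as follows, using central finite differences in place of analytic derivatives and assuming the usual inversion-recovery weighting (1 − 2 exp(−TI/T 1,i )) for the indirect dimension (the explicit form of Eq. (3) is not reproduced in this excerpt):

```python
import numpy as np

def signal_2d(TE, TI, p):
    """Two-component T1-T2 model over the grid rho_{j,k} = {TE_j, TI_k},
    assuming inversion-recovery weighting (1 - 2 exp(-TI/T1_i))."""
    c1, c2, T11, T12, T21, T22 = p
    TEg, TIg = np.meshgrid(TE, TI, indexing="ij")
    return (c1 * (1 - 2 * np.exp(-TIg / T11)) * np.exp(-TEg / T21)
            + c2 * (1 - 2 * np.exp(-TIg / T12)) * np.exp(-TEg / T22))

def jacobian_2d(TE, TI, p, rel_step=1e-6):
    """Each column of B is the vectorized central-difference derivative of the
    signal over the full 2D grid with respect to one element of p."""
    p = np.asarray(p, dtype=float)
    cols = []
    for l in range(p.size):
        dp = np.zeros_like(p)
        dp[l] = rel_step * max(abs(p[l]), 1.0)
        deriv = (signal_2d(TE, TI, p + dp) - signal_2d(TE, TI, p - dp)) / (2 * dp[l])
        cols.append(deriv.ravel())
    return np.column_stack(cols)                # shape (n_TE * n_TI, 6)

TE = np.arange(8.0, 520.0, 8.0)                 # 64 echo times, ms
TI = np.arange(50.0, 4851.0, 200.0)             # 25 inversion times, ms
p0 = (0.3, 0.7, 1000.0, 2000.0, 60.0, 100.0)    # (c1, c2, T1,1, T1,2, T2,1, T2,2)
B2 = jacobian_2d(TE, TI, p0)                    # 1600 x 6 Jacobian
```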
Having defined B , the calculation of Cov(p * ) for different underlying parameter sets and measurement points follows as for the 1D case. Experiments comparable to the T 1 − T 2 experiment, such as ADC − T 2 or ADC − T 1 , may be analyzed analogously. Finally, three and higher-dimensional experiments may also be undertaken at the expense of additional acquisition time and more complex data analysis 20 . Simulations. For MC simulations, we in general fix all but one of the underlying parameters and show results for a range of this variable parameter. The smaller of the NLLS-derived time constants was assigned the label T 2,1 and the larger assigned to T 2,2 , with corresponding fractions c 1 and c 2 . In 2D, T 1,1 and T 1,2 were similarly assigned to these two components, respectively.
One-dimensional simulations. The standard deviations of the parameters of Eq. (2) were plotted as a function of T 2,2 , based on Eqs. (8), (10) and (13). This linear treatment yields results that are simply proportional to the assigned SD of the noise, and hence essentially arbitrarily scaled. We expect a worsening of the condition number of B and a corresponding increase in the SD of parameter estimates as T 2,2 approaches T 2,1 .
The linearized treatment outlined above, as well as the equivalent CRLB theory, are local theories and do not directly reflect the global properties of Eq. (8). One potential problem with this is that the evaluation of the Jacobian matrix is taken at the underlying parameter values, which may reflect MC simulations with finite SNR. MC results do not use linearization at the underlying parameter values, but are obtained iteratively from initial guesses near the ground truth, so that while they yield less direct theoretical insight, these results are, in that sense, considerably more general. These simulations are again displayed as a function of T 2,2 . The MC simulations were performed by adding N noise noise realizations of Gaussian noise to each decay curve for a given set of parameters and performing an NLLS fit for each. SNR was defined as the ratio of the maximum signal amplitude to the noise SD. The mean and SD's of recovered parameters were then calculated over the set of noise realizations. Quantitative agreement between the analytic linearized solution and the MC results is not expected due to the linearization used to derive Eq. (9), and the evaluation of the Jacobian matrix B at the true underlying model parameters rather than at the parameters recovered by NLLS, and dependence on the details of the numerical NLLS algorithm. This effect can be minimized through use of very high SNR values in directly comparing the MC with the analytic results; we have selected SNR=10,000 which, in fact, is close to our experimental SNR (see below). For both the analytic linearized covariance analysis and the MC analyses, we assumed evenly-spaced echo times TE j ranging from 8 ms to 512 ms in 8 ms increments; the number of echo times was therefore n TE = 64 . For all MC simulations, 1000 noise realizations were used. The initial guesses used for NLLS were random numbers within a specified range of the underlying parameter values.
Parameter estimation was implemented using the MATLAB function lsqcurvefit , an unconstrained Levenberg-Marquardt algorithm.
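A Python analogue of this procedure, using scipy.optimize.curve_fit with the Levenberg-Marquardt method in place of MATLAB's lsqcurvefit, might look as follows (fewer noise realizations than the 1000 used here, for brevity; parameter values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, c1, c2, T21, T22):
    return c1 * np.exp(-t / T21) + c2 * np.exp(-t / T22)

rng = np.random.default_rng(0)
t = np.arange(8.0, 520.0, 8.0)                  # echo times, ms
truth = np.array([0.3, 0.7, 60.0, 100.0])       # (c1, c2, T2,1, T2,2)
clean = biexp(t, *truth)
snr, n_noise = 10_000.0, 200
sigma = clean.max() / snr                       # SNR = max signal amplitude / noise SD

fits = []
for _ in range(n_noise):
    d = clean + rng.normal(0.0, sigma, t.size)
    p0 = truth * rng.uniform(0.9, 1.1, 4)       # initial guess near ground truth
    p, _ = curve_fit(biexp, t, d, p0=p0, method="lm", maxfev=5000)
    if p[2] > p[3]:                             # enforce the T2,1 <= T2,2 labeling
        p = p[[1, 0, 3, 2]]
    fits.append(p)
fits = np.asarray(fits)
sd = fits.std(axis=0)                           # MC SDs of (c1, c2, T2,1, T2,2)
```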
Two-dimensional simulations. For the 2D simulations defined by Eq. (3), additional parameters T 1,1 , T 1,2 are introduced and need to be determined. The analytical and MC simulations were designed analogously to the 1D versions described above. Results are presented by fixing T 1,1 = 1000 ms and varying T 1,2 . The parameter covariance matrix and the condition number of the Jacobian matrix for the linearized analytical calculations were determined from Eqs. (10) and (13). For MC simulations, the full set of 6 estimated parameters was again determined for N noise = 1000 realizations of noise-corrupted data for a given set of model parameters. The set of n TI = 25 inversion recovery times {TI k } ranged from 50 ms to 4850 ms in increments of 200 ms. The echo times TE j were the same as for the 1D simulations. The total number of measurements was n TI × n TE = 1600 . SNR was defined as for the 1D case.
1D versus 2D simulations. The 2D experiment collects an n TI -fold greater number of data points than 1D, leading to increased acquisition time by approximately the same factor. In practice, an equal-time comparison is of greatest interest; an equal-time 1D experiment would exhibit SNR which is a factor ∼ √ n TI greater than the corresponding 2D experiment. In other words, given the same experimental time, one can perform either a single 2D acquisition or n TI 1D acquisitions, and we seek to compare the stability of these two approaches. This has been incorporated into all of our MC simulations and experimental analyses, as was also done in Celik et al. 26 .
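The equal-time SNR compensation amounts to a single line; with illustrative numbers:

```python
import math

n_TI = 25            # number of inversion-recovery times in the 2D acquisition
snr_2d = 800.0       # SNR of a single 2D acquisition (illustrative value)

# n_TI averaged 1D acquisitions fit in the same total time,
# improving the 1D SNR by a factor of sqrt(n_TI)
snr_1d_equal_time = snr_2d * math.sqrt(n_TI)   # 800 * 5 = 4000
```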
Otherwise, our comparisons of 1D and 2D experiments used the same set of echo times TE j and underlying common parameter values. For the 2D simulations, the effect of the difference between or ratio of T 1,1 and T 1,2 on the precision of parameter estimation was of greatest interest.
In contrast to the ILT, the inverse Fourier transform (FT) is analytically well-posed and, in the discrete form, well-conditioned 9 . It therefore serves as a type of negative control on our results for the ILT, to demonstrate the fact that the results we obtain in the latter case are due to improvement in conditioning rather than simply to expanding dimensionality. See SI (Supplementary Information), Fig. 5 for details of this analysis.
Three-dimensional simulations. The extension to higher dimensions is straightforward. For example, Eq. (4) represents a signal model incorporating T 1 , T 2 and diffusion effects. Each of the two components is now characterized by a triplet {T 1,i , T 2,i , ADC i } , with i indexing the two components. The number of experimental data points is n TE × n TI × n b , where n b is the number of discrete diffusion-sensitizing measurements. The linearization of this problem, following the formalism above for linearization of 1D and 2D models, is straightforward, including the construction of the Jacobian and the covariance matrix.
Experimental methods. One-dimensional T 2 and 2D T 1 − T 2 experiments were performed on a 5% agarose gel consisting of two cylindrical plugs doped respectively with 0.05% and 0.15% w/v CuSO 4 . Each component was weighed to estimate expected relative signal fractions. To facilitate shimming, the plugs were immersed in perfluorocarbon liquid (3M Fluorinert FC-770, Sigma-Aldrich, St. Louis, MO) and positioned between two home-built polyetherimide (Ultem) plastic plugs. The plugs were separated by a 1 mm thick poly(tetrafluoroethylene) spacer to prevent diffusion of copper ions between them. After insertion into a 10 mm NMR tube, the two-component gel was placed in a 10 mm transmit-receive SAW resonator (m2m Imaging, Australia) within the magnet and scanned using an Avance III 400 MHz widebore microimaging spectrometer (Bruker Biospin, Rheinstetten, Germany). Sample temperature was maintained at 4.0 ± 0.1 °C using cold air from a vortex tube (Exair, Cincinnati, OH).
Non-localized spectroscopic data were acquired using a CPMG multi-spin echo sequence consisting of rectangular RF pulses of duration 20 µs (90°) and 40 µs (180°), yielding 2048 echo peak intensities at TE = 0.4, 0.8, · · · , 819.2 ms for each spin excitation, followed by a recovery delay of 2 s. For two-dimensional T 1 − T 2 experiments, each CPMG pulse train was preceded by a 40 µs rectangular inversion pulse and an inversion recovery delay TI, which was incremented nonlinearly from 15 ms to 2 s in 24 steps. 1D T 2 experiments, without the inversion recovery preparation module, were performed with 30 averages to yield similar total scan time to that of the 2D experiments. Each 1D or 2D experiment was repeated 100 times to produce data with different noise realizations. Total scan time for each experiment was 1 hour 38 minutes.
Spin-echo imaging experiments were performed to measure T 1 and T 2 relaxation times in each gel independently. In each experiment, a 3 mm axial slice was positioned through a single gel, with in-plane field-of-view 10 mm × 10 mm and matrix size 64 × 64. Excitation and refocusing were performed using 1 ms hermite 90° and bandwidth-matched hermite 180° pulses, respectively. All imaging experiments were performed without signal averaging.
Monoexponential T 2 's were measured using a CPMG sequence in which the read pre-phase gradient directly preceded the readout gradient to minimize diffusion effects. Sequence parameters included: acquisition bandwidth = 81.5 kHz, interpulse delay TR = 2 s, and TE = 4.7, 9.4, · · · , 601.6 ms. Mean magnitude intensities for all gel pixels were fit to a three-parameter exponential decay function A + M 0 * exp(−TE/T 2 ) in Bruker ParaVision 5.1 software; the offset term was incorporated to account roughly for the Rician noise floor. Monoexponential T 1 values were measured similarly, using a progressive saturation experiment in which TR was varied nonlinearly from 15 ms to 3 s in 8 steps. Acquisition bandwidth = 50 kHz and TE = 6.0 ms were employed. Data were fit to the function A + M 0 * (1 − exp(−TR/T 1 )).
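The three-parameter decay fit can be reproduced outside ParaVision with any standard NLLS routine. The sketch below uses SciPy's curve_fit with heuristic starting values; it is a generic stand-in of our own, not the Bruker implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_monoexp_t2(te, y):
    """Fit A + M0*exp(-TE/T2); the offset A roughly absorbs the Rician
    noise floor, as in the text. Returns (A, M0, T2)."""
    model = lambda t, a, m0, t2: a + m0 * np.exp(-t / t2)
    # Heuristic initial guesses: baseline, amplitude, mid-range time constant.
    p0 = [float(y.min()), float(y.max() - y.min()), float(te[len(te) // 2])]
    popt, _ = curve_fit(model, te, y, p0=p0, maxfev=10000)
    return popt
```

The saturation-recovery fit A + M 0 * (1 − exp(−TR/T 1 )) follows by swapping the model function.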
The relaxometry experiments were performed in spectroscopic mode, that is, with no spatial-encoding gradients. The acquired data therefore took the form of a single complex value for each acquisition time. Each data point was phased individually along the real axis to maintain full amplitude without any magnitude operation, as is standard experimental practice in MR relaxometry. Thus, the noise in our experiments was Gaussian, as required for the strict validity of Eqs. (6) and (12).
Results
One-dimensional analyses. Analytic calculation. Fig. 2 shows the linearized results for the standard deviation of T 2,2 derived from the model described by Eq. (2), as a function of T 2,2 , with c 1 = 0.3 , c 2 = 0.7 and T 2,1 = 60 ms. SNR was set to 800, though this value appears only as a multiplicative constant and does not otherwise enter the calculation.
We see that the variance increases greatly, and asymptotically approaches infinity, as T 2,2 approaches T 2,1 . Similar results are seen for the SD of all other derived parameters (see Figure 1 in the SI). Correspondingly, the condition number of the Jacobian matrix B approaches infinity as T 2,2 approaches T 2,1 , as shown in Fig. 2 of the SI. In fact it may readily be confirmed by inspection that B is singular in this limit; B T B is then also singular, so that the right-hand side of Eq. (13) is undefined.
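This behavior is easy to verify numerically. The sketch below (our own illustration, not the authors' code) builds the Jacobian of the biexponential model with respect to (c 1 , c 2 , T 2,1 , T 2,2 ), evaluated at the true parameters as in the linearized treatment, and compares its condition number for well-separated versus nearly coalescent T 2 values.

```python
import numpy as np

def biexp_jacobian(t, c1, c2, t21, t22):
    """Jacobian B of c1*exp(-t/T21) + c2*exp(-t/T22) with respect to
    (c1, c2, T21, T22), one row per echo time."""
    e1, e2 = np.exp(-t / t21), np.exp(-t / t22)
    return np.column_stack([e1, e2,
                            c1 * t / t21**2 * e1,   # d/dT21
                            c2 * t / t22**2 * e2])  # d/dT22

t = np.arange(0.4, 819.3, 0.4)  # CPMG echo times in ms, as in the experiments
cond_far = np.linalg.cond(biexp_jacobian(t, 0.3, 0.7, 60.0, 120.0))
cond_near = np.linalg.cond(biexp_jacobian(t, 0.3, 0.7, 60.0, 61.0))
# cond(B) grows sharply as T2,2 approaches T2,1 and diverges at equality,
# where the first two columns (and the last two, up to scale) coincide.
```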
This limit represents the coalescence of the two exponential terms in Eq. (2). The time constant of the resulting monoexponential expression follows easily from NLLS analysis, but of course it is impossible to separate the two underlying components. We also see that cond(B) becomes very large even before this limit is attained, so that the calculation of Cov(p * ) becomes effectively meaningless and is therefore excluded from Fig. 2.
The theoretical calculations in Eq. (13) in effect assume infinite SNR through the fact that the Jacobian is always calculated at the correct underlying values, with finite SNR incorporated into the formalism only through multiplication by the noise variance.
Monte-Carlo simulations. Fig. 3 shows MC results indicating the improvement in stability as T 2 values become increasingly different. Results are shown for SNR = 10000 over 1000 noise realizations.
These results can be extended by plotting histograms of recovered parameter values for a range of T 2,2 values. Fig. 4 shows this for recovered T 2,2 values over 1000 noise realizations with SNR = 10000; high SNR was selected in order to minimize the potential effect of local minima, so that the MC results could be more directly compared to the linearized treatment. Parameter SD's were calculated for each value of T 2,2 . Note that SD indicates the standard deviation of the distribution obtained from MC simulations and should be distinguished from the σ defining the standard deviation of a Gaussian distribution. As seen, accuracy increases and the SD of estimates decreases as the ratio of T 2,1 to T 2,2 increasingly differs from unity, in agreement with Fig. 3. Results for the other parameters are similar.
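A minimal version of this MC procedure can be sketched as follows. This is our own illustration, with SciPy's curve_fit standing in as the NLLS solver and a reduced trial count; the authors' exact implementation may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def biexp(t, c1, c2, t21, t22):
    return c1 * np.exp(-t / t21) + c2 * np.exp(-t / t22)

def mc_sd_t22(t22_true, n_trials=100, snr=10000.0):
    """SD of the recovered T2,2 over noise realizations: add Gaussian
    noise of amplitude 1/SNR to the noiseless decay and refit by NLLS,
    assigning the larger recovered time constant to T2,2."""
    t = np.arange(0.4, 819.3, 0.4)
    clean = biexp(t, 0.3, 0.7, 60.0, t22_true)
    estimates = []
    for _ in range(n_trials):
        y = clean + rng.normal(0.0, 1.0 / snr, t.size)
        p, _ = curve_fit(biexp, t, y, p0=[0.3, 0.7, 60.0, t22_true],
                         maxfev=20000)
        estimates.append(max(p[2], p[3]))
    return float(np.std(estimates))
```

With T 2,1 fixed at 60 ms, the SD of the recovered T 2,2 drops steeply as T 2,2 moves away from 60 ms, matching the trend of Figs. 3 and 4.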
In this extremely high SNR case, the pattern of the calculated SD's is very similar to that found from the linearized theory. This is to be expected, since the latter in effect assumes infinite SNR in that the Jacobian is always calculated at the correct underlying values. In contrast, MC results are independent of linearization and are not reliant on a high-SNR approximation. The MC results we show in the remainder of this paper, for more realistic SNR, are expected to show the same trends as in the linearized theory, but not to agree in detail. In particular, for moderate SNR, we expect large parameter SD across a much larger range of the independent variable as compared to the linearized result. Nevertheless, for both linearized analytic and MC calculations, we expect maximal SD in the regime T 2,1 ≈ T 2,2 . A more exact correspondence between these methods is not to be expected.

Two-dimensional analyses. Analytic calculation. Fig. 5 shows the linearized SD of parameter estimates for the 2D model as a function of T 1,2 (cf. Fig. 3). Additive Gaussian noise was again assumed and entered only as an overall multiplicative constant. As expected, the SD for each parameter attains its maximum for T 1,2 = T 1,1 = 1000 ms, and decreases as T 1,2 deviates from this value. This result is the major finding of this work, indicating the statistical basis for our previous empirical results 26 . In particular, this supports the stabilization of parameter estimation for the biexponential model through introduction of a second dimension with disparate values for T 1 's.
We expect that the behavior of the SD should correspond to the condition number of the Jacobian matrix B , and therefore of B T B for the linearized problem, as described in the theory section. Fig. 6 shows that the condition number of the Jacobian matrix for 2D experiments with differing T 1,1 and T 1,2 is smaller than for corresponding 1D experiments. When T 1,1 = T 1,2 , the two condition numbers are approximately equal. This indicates that the stability of parameter estimation from 2D experiments is greater than for 1D except in the regime of this special case. The equivalence of the 1D experiment to the 2D experiment for equal T 1 's can only be approximate, since the condition number of the latter has a dependence on the set of T 1 -sensitizing TI values; this variable does not exist in 1D. Fig. 8 in the SI is a three-dimensional view corresponding to Fig. 5 above.

Figure 4. Recovered T 2,2 values for underlying (c 1 , c 2 , T 2,1 ) = (0.3, 0.7, 60 ms). The smaller value of the derived time constants is assigned to T 2,1 , and the larger to T 2,2 . SNR = 10000. In the upper panel, correct underlying values of T 2,2 are indicated with the red line, with corresponding values of the recovered T 2,2 shown as asterisks. As T 2,2 increasingly differs from T 2,1 , accuracy and precision both improve greatly. The SD's for the recovered values of T 2,2 are shown in the lower panel and are seen to be largest in the regime T 2,2 ≈ T 2,1 , and to decrease as these values progressively differ.
In comparing the results of Fig. 5 with Fig. 2, and from Fig. 6, we see that the T 1,1 = T 1,2 behavior in 2D mimics the T 2,1 = T 2,2 behavior in 1D. Stability improves in 1D as the ratio T 2,1 /T 2,2 departs from unity, while it improves in 2D as the ratio T 1,1 /T 1,2 similarly departs from unity. However, even with T 1,1 = T 1,2 , the condition number will remain finite as long as T 2,1 ≠ T 2,2 .
Monte-Carlo simulation results. Fig. 7 compares MC results for the stability of the 1D and 2D biexponential analyses. Parameter recovery was performed over 1000 noise realizations. Underlying parameter values were c 1 = 0.3 , c 2 = 0.7 , T 2,1 = 45 ms, and T 2,2 = 60 ms, with, in addition, T 1,1 = 1000 ms. Results are shown for three values of T 1,2 . SNR was set to SNR 2D = 400 for the 2D analysis, and SNR 1D = 400 × √ n TI = 2000 in 1D. The histograms show recovered values of the indicated parameters.
As seen, the histograms for the 1D analysis and for the 2D analysis with equal T 1 's are essentially indistinguishable. Precision increases in 2D as the T 1 's progressively differ, demonstrating the potential for improved stability of 2D T 1 − T 2 experiments as compared to 1D, even on the equal-time basis in which the SNR of the 1D experiment is, in this case, √n TI = 5-fold greater than that of the 2D experiment. The results of Fig. 7 can be extended as shown in Fig. 8. Variation in NLLS parameter estimation is shown for 2D analysis as a function of T 1,2 . Recovered values are correctly obtained for T 1,2 increasingly different from T 1,1 . The SD's were calculated over 1000 noise realizations for each parameter set. As seen, the SD of the estimates decreases as the ratio of T 1,2 to T 1,1 deviates from unity, in agreement with the results shown in Fig. 3 in the SI. In addition, the stability of the 2D experiment depends on the ratio of the T 1 's rather than on their absolute separation; see Fig. 4 in the SI. A three-dimensional version of Fig. 8, showing a MC calculation of SD as a function of T 1,1 and T 2,1 , is provided in Fig. 9 in the SI.
Three-dimensional analyses. We provide a more condensed treatment of the further extension of the ILT of multiexponential decay to three dimensions (3D). As described in the three-dimensional simulations section, expressions for the SD of parameter estimates can be derived through linearization analogously to the 2D case. We illustrate this for the T 1 − T 2 − ADC signal model, with fixed values of c 1 , c 2 , T 2,1 , T 2,2 , T 1,1 , ADC 1 = 0.7, 0.3, 45 ms, 60 ms, 1000 ms, 1.5 mm 2 /ms, and varying T 1,2 and ADC 2 . We used nine evenly-spaced diffusion-sensitizing values {b l }, ranging from 0 ms/mm 2 to 2 ms/mm 2 in increments of 0.25 ms/mm 2 . The dimensions of b are inverse to those of ADC. The values of TE j and {TI k } were taken to be the same as in the one- and two-dimensional simulations sections. The number of measurement points is n TE × n TI × n b = 14,400. However, we reiterate that typically, the 64 values of TE that provide T 2 -sensitization are acquired at no additional time cost through a multi-echo acquisition, so that the duration of the experiment is largely proportional to n TI × n b .

The standard deviation of T 2,2 is illustrated in Fig. 9 as a function of the indirect dimension values of T 1,2 and ADC 2 . Similar results are seen for the SD of all other derived parameters; see Fig. 6 in the SI. Figure 9 shows that the maximum SD for T 2,2 estimation, that is, the greatest degree of instability, occurs for T 1,2 = T 1,1 and ADC 2 = ADC 1 . This is exactly analogous to the previous results in 1D and 2D. The basis for this can be seen from calculating the condition number of the Jacobian matrix. This is shown in Fig. 10, calculated using the same set of parameters as in Fig. 9. The uppermost plot in Fig. 10 shows that the condition numbers are largest in 1D, and for the 2D and 3D analyses when the indirect dimension parameter values are equal.
The condition number is then seen to decrease when the indirect dimension parameter values become disparate. Thus, for equal indirect dimension parameter values, no additional information is provided by that dimension, and the condition numbers become essentially equal to those for the next-lower dimension. The lower left plot shows the rough equality of condition numbers for all three dimensions when the indirect dimension parameter values are equal, with the condition number decreasing as ADC 2 becomes increasingly different from ADC 1 . The lower right plot shows the marked increase in condition number as T 1,2 approaches T 1,1 , and the decrease as these two values become increasingly disparate. For 1D, the condition number is plotted as a constant across the T 1,2 and ADC 2 axes, while the condition number for the two-dimensional problem is plotted as constant along the ADC 2 axis. These results indicate that the condition number for the 1D model is effectively an upper bound for the 2D model, which is itself an approximate upper bound for the 3D model.

The 1D NLLS analysis of the signal obtained from the double gel sample yields SD c * 1 ≈ 8 × 10 −3 , SD c * 2 ≈ 8 × 10 −3 , SD T * 2,1 ≈ 1.37 × 10 −1 ms, and SD T * 2,2 ≈ 1.34 × 10 −1 ms; the 2D T 1 − T 2 NLLS analysis with T 1,2 /T 1,1 ≈ 2.9 yields substantially smaller standard deviations for all four parameters, with SD c * 1 ≈ 2.7 × 10 −3 , SD c * 2 ≈ 3.7 × 10 −3 , SD T * 2,1 ≈ 3.4 × 10 −2 ms, and SD T * 2,2 ≈ 3.5 × 10 −2 ms. Additional simulations confirm that the stability of the 2D reconstruction shows substantial improvement as compared to the 1D results, where the same TI, TE and SNR values as in the experiment are used. See Fig. 7 in the SI for more details.
Quite separately from stability issues, we note that there are unmodeled effects in these experimental data that likely contribute to the differences in derived mean values between the two methods, although such bias is not the topic of the current work. Among these effects are the non-i.i.d. Gaussian noise encountered in actual experimental practice, along with the fact that the effect of noise on the bias in 1D and 2D NLLS is a complicated function of noise characteristics and SNR. In addition, the underlying T 2 values of the two components of the gel sample will not be delta-function monoexponentials, but rather distributions, albeit narrow. An analysis of the bias in noisy 2D NLLS would represent a significant undertaking beyond the scope of the present paper.
Discussion
Parameter estimation for multi-component exponential decay has been studied for over 200 years, dating back at least to Prony 15 , and has remained an active area of research through the present day 7,8,18,37 . Many algebraic and numerical approaches for this have been established and reviewed 8,18 , including Prony's 15,38,39 and the Laplace-Padé methods 40 , several implementations of nonlinear least squares analysis (itself a topic of longstanding study, likely originating with Gauss 41 ) 42-44 , and others 7,8,18,45 . More recently, machine learning methods have been applied to this problem 19,46 .
Bromage 38 made important comments regarding the conditioning of the 1D biexponential model as a function of the separation of the c and the T 2 values. In the one-dimensional analyses section of the present work, we present comparable results, but also provide condition numbers derived from a linearized analytic treatment, and parameter variances for both the MC and analytic approaches. Varah 14 analyzed the uniqueness and stability of the biexponential 1D problem in the context of both discrete (NLLS) and continuous ( L 2 norm of the misfit) analyses. Least squares surfaces as defined by the objective function in Eq. (14) were presented for a range of parameters to illustrate the ill-conditioning of the biexponential problem. Shrager 17 provided an extensive perspective on the difficulties of multiexponential model fitting to experimental data, especially in the context of ill-conditioning. An alternative Bayesian approach to deriving parameter uncertainties in this setting has also been presented 16,47 in terms of relaxation rates 1/T 2 . Nevertheless, in spite of the range of techniques presented in the literature, the fundamental difficulty in deriving the amplitudes and time constants for multiexponential, and even for biexponential, decay remains; it is intrinsic to the redundancy in the family of exponential functions, with the possibility that very different exponential models may provide nearly identical values [8][9][10] . Especially in the presence of unavoidable experimental noise, parameter extraction in such cases is an intractable problem.

Figure 11. Histograms of the fitted values for the indicated parameters derived using NLLS from 100 sets of experimental data. The comparisons in the upper two rows are for 1D versus 2D. Note that the mirror image appearance of c 1 and c 2 arises from the constraint that c 1 + c 2 = 1.
In practical terms, this means that derived parameters are extremely unstable with respect to noise 48,49 .
Previous analyses have been developed within established fundamental limitations 50 . However, perhaps uniquely among experimental sciences, magnetic resonance studies permit the experimental implementation of two-and higher-dimensional experiments yielding data exhibiting multiexponential decay. Evaluation of the increased stability of these models is the central idea of the present work. This is a fundamental departure from the vast body of previous work on 1D biexponential or multiexponential analysis. Celik et al. 26 provided an initial report of this improved conditioning; however, they presented no underlying theory, and minimal simulation and experimental results. In the present manuscript, we provide a statistical foundation for this result, as well as a much more extensive theoretical and numerical analysis, along with substantially expanded experimental results. Overall, our results provide a potential means of circumventing conventional limits on multiexponential parameter estimation.
Although the approximate analytic and MC results presented here are fully supportive of each other, there are some obvious differences. First and foremost, the MC calculation is exact in the sense that it involves no explicit assumptions. In contrast, the analytic approach is a linearization. In addition, the results of the MC simulations depend to a certain extent on initial guesses and details of the implementation of the nonlinear optimization. We used commercial, highly developed code for this, but these dependencies remain. Further, the computations involved in the approximate analytic analysis become less reliable as the condition number of the B matrix increases. Likewise, the calculation of variances is based on a first-order Taylor expansion around the true parameter values, so that the minimizer p * must be near the defined underlying parameter values, that is, there is an implicit assumption of limited bias. Similarly, the Jacobian matrix in the linearized treatment is always evaluated at the correct underlying parameters, which does not realistically correspond to the MC results. Nevertheless, both approaches, linearized theoretical modelling and MC simulations, reflect the main result of this paper, which is the increased stability of parameter estimation for the ILT of biexponential decay in 2D with distinct indirect dimension parameters.
A significant issue is that the condition number of the Jacobian matrix in the linearization of Eq. (8) is unitdependent. This follows from the fact that the parameters to be estimated, component sizes and relaxation times, are of different dimensions. There appears to be no fully satisfactory resolution of this issue other than to choose reasonable and conventional units that are internally consistent within the given analysis.
Extensions of the present work would include dimensional considerations for stability as a function of discretization 50,51 , and within the framework of NNLS, for which the stabilizing effect of increased dimensionality was also demonstrated empirically by Celik et al. 26 . Such further analyses would be of particular interest given the recent advances in accelerating data acquisition for higher dimensional MRR 6,28,52,53 .
In conclusion, we have demonstrated a fundamental improvement in the stability of biexponential decay analysis through extension into higher dimensions.
Mouse alloantibodies capable of blocking cytotoxic T-cell function. I. Relationship between the antigen reactive with blocking antibodies and the Lyt-2 locus.
In an attempt to produce alloantibodies to cytotoxic T-cell receptors, hyperimmune anti-lymphocyte antisera have been raised in mice of various strain combinations, and have been tested for their ability to block allogeneic cell-mediated lymphocytotoxicity (CML) in the absence of complement at the T killer cell level. Most of the sera failed to show any significant and reproducible inhibitory effects. However, among C3H anti-B10.BR antisera, some sera were found to be capable of significantly inhibiting CML. This effect was attributable to antibodies reacting with the killer population rather than the target cells, because the sera inhibited B10 anti-C3H CML but not C3H anti-B10 CML. Among mouse strains tested, A/J, BALB/c, B10, and B6 strains were sensitive to the inhibitory effect of the sera whereas AKR, CBA, C3H, and DBA/2 strains were insensitive. The sensitivity of killer cells to the inhibitory effect correlated well with the strain distribution of the Lyt-2.2 antigen. In the presence of complement, these same sera were toxic to 100% of spleen cells of AKR, BALB/c, B10, and DBA/2 strains, with comparable cytotoxic titers. Thus, the inhibitory activity of the sera could not be explained by nonspecific effects of high-titered antibodies. To study the relationship between the antigen(s) responsible for the blocking effect and Lyt-2-linked genes, killer cells from Lyt-2 congenic strains were tested and conventional anti-Lyt-2.2 antisera were raised in an appropriate congenic strain combination. Killer cells from B6, but not from B6.Ly2.1 animals, were significantly sensitive to the blocking effects of the inhibitory C3H anti-B10.BR sera. The conventional anti-Lyt-2.2 sera did produce CML blocking, although there was no apparent correlation between such blocking and the anti-Lyt-2.2 cytotoxic titer.
These results thus indicate that the target molecules responsible for blocking of killer cells are encoded or regulated by genes that are closely linked to or identical with Lyt-2.
Antibodies to constant region determinants of immunoglobulins are known to block binding of antigen by B cells (5). Thus, if T-cell receptors also have constant regions, antibodies to such determinants might interfere with achievement of T-cell effector functions by blocking receptor-antigen interactions. Of course, antibodies to other cell surface molecules might also exert similar inhibitory effects on effector cell functions.
The effector function of cytotoxic T cells in allogeneic cell-mediated lymphocytotoxicity (CML) reactions has previously been shown to be insensitive to treatment with a variety of antibodies in the absence of complement (6). Attempts to block allogeneic CML by alloantibodies reactive with killer cells have been uniformly unsuccessful, indicating that coating killer cells with antibodies is generally insufficient to affect the effector function. There have been two positive reports indicating inhibition of CML by xenogeneic antisera at the killer cell level in the absence of complement (7,8), although the target molecules responsible for such inhibition or gene(s) encoding such molecules were not characterized.
If alloantibodies with such inhibitory activity could be raised, they would undoubtedly provide a more useful tool than xenoantibodies for analysis of the nature and genetics of molecules involved in cytotoxic T-cell effector function. Therefore, attempts were made to raise mouse alloantisera capable of inhibiting allogeneic CML in the absence of complement. Hyperimmune anti-lymphocyte sera raised in various strain combinations of mice were tested for their ability to inhibit allogeneic CML using killer-target combinations chosen to assess the effect on killer cells. Although most of our attempts were unsuccessful, we succeeded in raising mouse alloantisera in one strain combination which had the requisite properties. In this paper we describe the properties of these sera and an analysis of the linkage relationships of genes coding for the antigens on the cytotoxic T cells responsible for the observed inhibitory effect.

* The nomenclatures of immunoglobulin genes used in this paper conform to those proposed by Green, M. G., et al., Immunogenetics, in press. Abbreviations used in this paper: CML, cell-mediated lymphocytotoxicity; B6, C57BL/6; B10, C57BL/10; CWB, C3H.SW/Hz[GIB; MLR, mixed leukocyte reaction.

THE JOURNAL OF EXPERIMENTAL MEDICINE • VOLUME 150, 1979
Materials and Methods
Mice. Adult mice of the strains indicated were used.

Cytotoxicity assay. Cells and sera were mixed and incubated for 15 min at 37°C. Wells were washed and rabbit complement was added, followed by another 30-min incubation at 37°C. Cell death was determined microscopically by trypan blue uptake.
Results
Hyperimmune mouse alloantisera were raised in various strain combinations including H-2 and non-H-2 incompatible pairs. Some animals were immunized with cells which had been sensitized to allogeneic cells three times in vitro. These sera were tested for their ability to inhibit allogeneic CML when added to CML cultures in the absence of complement, using strain combinations chosen to assess effects on the killer cells. Over 150 sera raised in 20 different combinations were screened and results are summarized in Table I. Most of these sera failed to show any significant or reproducible inhibitory effect, although many of them proved to have fairly high-titered antibodies reactive with 100% of donor spleen cells in complement-dependent cytotoxicity assays. These results suggested that coating of killer cells with antibodies was not sufficient to interfere with their killing function. However, among the C3H anti-B10.BR sera tested, some were found to have significant inhibitory effects on CML. Reproducibility of the inhibitory effect of one of these sera (N18-1) on B6 and B10 killers is shown in Table II. In experiments 2 and 3, C3H spleen cells were used as targets so that antibodies should not react with target cells. In experiments 1, 4, and 5, the target cells used (K46, a BALB/c tumor) were reactive to this non-H-2 antiserum. However, the antibodies in this serum reactive with the target cells proved not to be responsible for the inhibition observed, as will be shown in the following experiments.
To study whether the observed inhibitory effect of this serum was attributable to alloantibodies reactive with the killer population or was of a nonspecific nature independent of their antibody activity, the effects of the serum on killer cells of the donor and recipient strains were examined. Because this serum (C3H anti-B10.BR) should not contain anti-MHC antibodies, all H-2 congenic strains on the B10 background would be expected to show reactivity to this serum similar to that observed for B10.BR. Therefore, C3H and B10 cells were sensitized to each other in vitro and the inhibitory effect of this serum was assessed on CML of the two reciprocal combinations of killers and targets. As shown in Table III, presence of this serum in the CML culture caused significant reduction of specific lysis of target cells in the combination of B10 killers and C3H targets, whereas it did not have a significant effect on C3H anti-B10 CML. In sharp contrast to this serum, the C3H anti-C3H.SW (H-2k anti-H-2b) serum inhibited CML only when target cells were reactive to this serum. This pattern of inhibition is characteristic of anti-H-2 antibodies, as has been well documented by many investigators (11,12). This anti-H-2b serum had unusually high-titered cytotoxic anti-H-2 antibodies (1:5,000 ~ 1:10,000) and also fairly high-titered anti-Ia antibodies. When B10 and C3H killers, directed to a third strain (BALB/c, H-2d), were tested on the same target cell preparation, again, killing by B10 killers was significantly reduced by the C3H anti-B10.BR serum and C3H killers were insensitive. These results indicated that the inhibitory activity of this serum on CML was not of a nonspecific nature and this effect was achieved by antibodies reacting with the killer population rather than with target cells.
It should be also noted that specificity of killer cells (i.e., H-2 type of target cells recognized by the killers) did not seem to affect sensitivity to the inhibitory effect of this serum.
In an attempt to obtain an insight into the genetic basis of the inhibitory antibodies, sensitivity of CML effector cells of various mouse strains to the inhibitory effect of this antiserum was examined (Table IV). In experiment 1, all killer cells were sensitized to H-2b cells and tested on C3H.SW spleen cells. In experiment 2, all cells were sensitized to H-2a cells and tested on K46 cells. Killer cells of A/J, A.BY, BALB/c, BALB.B, B10, B10.BR, B10.D2, and B6 strains were significantly inhibitable by these sera whereas those of AKR, CBA/J, C3H/HeJ, C3H/HeN, C3H.SW, and DBA/2 strains were insensitive, and the distinction between the sensitive and insensitive strains was unambiguous. Complement-dependent cytotoxicity of this serum on spleen cells of representative strains is depicted in Fig. 1. As anticipated from the complexity of the genetic differences between the donor and the recipient, this serum (C3H anti-B10.BR) was toxic to cells of all strains except for those on the C3H background. In particular, it should be noted that the serum was toxic to 100% of spleen cells from AKR and DBA/2 strains with titers comparable to those on B10.BR and BALB/c cells, respectively. This finding again corroborates the notion that simply coating killer cells with antibodies is insufficient to inhibit their killing function. The results of scoring various strains for sensitivity to the CML-inhibiting effect of these sera were compared with the strain distribution of other known genetic markers. A salient correlation was found between this marker and Lyt-2 phenotypes (Table IV).
To study further the possible relationship between the antigen(s) responsible for the inhibitory effect and Lyt-2-linked genes as well as Igh-C-linked genes, killer cells from Lyt-2 congenic and Ig congenic strains were tested. Among B6 (Lyt-2.2) and B6.Ly2.1 (Lyt-2.1) killer cells, only the former were significantly sensitive to the blocking effect of the sera (Table V). Complement-dependent cytotoxicity of serum Ld#3, tested on the Ficoll-Hypaque-separated killer populations used in experiment 4, is shown in Fig. 2.

After the initial discovery of the inhibitory serum (N18-1), sera from individual mice were screened for CML-inhibiting activity, and only strongly positive sera were pooled (N18-4, N23-5). The immune sera from mouse Ld#3 showed extraordinarily high specific inhibitory activity; therefore the sera from this single mouse were pooled separately. Serum Ld#3 was used at 1:8 dilution in this experiment.
‖ Percent specific release in the presence of the indicated serum ± SD. ¶ This batch of normal C3H serum showed a nonspecific inhibitory effect on CML. It also inhibited C3H killer cell function. However, the specific inhibition by serum N18-4 was clearly distinguishable, as the data show.

With the multispecific C3H anti-B10.BR sera it was not possible to study the correlation between anti-Lyt-2 antibody activity and CML-blocking activity of these sera. Therefore, an attempt was made to raise CML-blocking sera in an Ly2 congenic combination (i.e., conventional anti-Lyt-2.2 antisera). Four (C3H × B6.Ly2.1)F1 mice were immunized with a mixture of thymocytes, spleen cells, and lymph node cells from normal B6 mice. After 10 immunizations, two animals started to produce detectable amounts of CML-inhibiting antibodies, whereas the other two did not. Individual sera obtained after 12 immunizations were examined for their CML-blocking activities and their anti-Lyt-2 antibody activity was determined by complement-mediated cytotoxicity on B6 thymocytes (Table VI). Sera from animals 596 and 597 did not show appreciable CML-blocking activities. The other two sera (particularly 599) caused significant specific reduction of target cell killing by B6 killer cells when added to CML culture. All four sera killed 80-90% of B6 thymocytes in the presence of rabbit complement with different titers. However, there was no apparent correlation between cytotoxicity on thymocytes and CML-blocking activity of the sera. For example, serum 597 showed poor CML-blocking activity, whereas it showed relatively high cytotoxic activity on B6 thymocytes. Serum 599, on the other hand, produced potent CML-blocking, although its cytotoxic activity was not as high as that of serum 597.
This discrepancy could be a result of differences in the immunoglobulin class of the dominating antibodies in individual sera, or could indicate that CML-blocking activity is not a result of antibodies specific for conventional Lyt-2 molecules, but rather a result of antibodies directed to as yet undefined molecules encoded by Lyt-2-linked genes.
Discussion
Allogeneic killer cells are generally resistant to treatment with allo-and xenoantisera in the absence of complement (6). Among the few positive reports of such treatments is that of Kimura (7), who reported that a rabbit xenoantiserum raised against in vivo sensitized mouse alloreactive cells inhibited the allogeneic CML of the same strain combination as used for immunization. The inhibitory activity of this antiserum was attributed to possible anti-idiotypic antibodies reactive with the antigen combining site of the relevant T-cell receptors (7), but the nature or genetics of the putative receptor were not elucidated. Recently, Redelman and Trefts obtained goat antirabbit xenoantibodies capable of inhibiting rabbit anti-mouse CML (8). In this laboratory, several xenogeneic anti-mouse lymphocyte sera were found to contain antibodies which inhibited mouse allogeneic CML in the absence of complement, and at least one component of such inhibitory activity was attributable to antibodies reacting with killer cells (N. Shinohara, unpublished observations). However, the use of xenogeneic antisera introduced numerous complexities making an analysis of molecules and mechanisms responsible for the observed inhibitory effect extremely difficult.
There have previously been no successful reports on attempts to block the effector function of allogeneic killer cells with alloantibodies in the absence of complement. Similarly, most of our attempts to raise such alloantibodies failed (Table I). Even antisera reactive with T cells with extraordinarily high titers did not affect allogeneic killing. Thus, because the coating of killer cells with antibodies to most cell surface alloantigens does not affect their function, our observations strongly suggest a functional significance of the molecules reactive with CML-blocking alloantibodies.
Among our attempts at alloimmunization, successful results have so far been obtained only in the combination C3H anti-B10.BR. Even in this combination, not all immune animals produced CML-inhibiting antibodies. Although two-thirds of the immunized animals started to produce detectable amounts of inhibitory antibodies after >10 immunizations, very few mice (10-15%) produced a sufficient amount of inhibitory antibodies to allow analytical experiments. The earliest production of inhibitory antibodies was observed after four to six immunizations in exceptional animals which became good producers later. Considering this experience, it seems likely that CML-inhibiting activities may have been lost by pooling immune sera before screening in some cases. Thus, it is possible that in combinations other than C3H anti-B10.BR, CML-inhibiting antibodies could be raised if individual animals were screened.
The inhibitory effect we have studied is attributable to the reactivity of antibodies with a killer cell population rather than with target cells, because killer cells of the donor strain background are sensitive to the inhibitory activity, but those of recipient background are not. Killer cells of the sensitive strains are inhibited by the antisera irrespective of H-2 types of the targets they recognize. In addition, these non-H-2 antisera do not interfere with target cell lysis even when they react with target cells (Tables III-V).
The strain distribution of sensitivity of killer cells to the inhibitory effect of the C3H anti-B10.BR antisera is very well correlated with that of the Lyt-2b allele.
Furthermore, the B6.Ly2.1 strain, which differs from the B6 strain at a chromosomal segment including the Lyt-2 locus, is insensitive to the inhibitory effect of these complex sera. The genetic specificity of this antibody activity is reinforced by the fact that CML-blocking antibodies could be raised in an Lyt-2 congenic pair (Table VI).
No apparent correlation was seen between complement-dependent cytotoxicity of antisera on spleen cells or on in vitro sensitized killer cell populations and inhibitory activity of the sera on killer cells of the same strain. These data indicate that the CML-inhibiting activity of these antisera is dependent only on their reaction with products of genes linked to or identical with the Lyt-2 locus.
The identity of the molecules responsible for the antibody-mediated inhibition of CML still remains to be studied. Earlier reports by other investigators have suggested that anti-Lyt-2 antibodies do not inhibit CML (6, 13). The discrepancy between the present observation and those observations might imply that the molecules reactive with the inhibitory antibodies are not Lyt-2 molecules but rather distinct molecules encoded by Lyt-2-linked genes. The discrepancy between anti-Lyt-2 cytotoxic antibody activity and CML-blocking activity in the sera raised in the Lyt-2 congenic combination we have studied also casts doubt on the identity of the molecules reactive with CML-blocking antibodies. However, this discrepancy may be explained by differences in the immunoglobulin class of the dominating antibodies in individual sera, a possibility we are presently investigating. It is important to note that if antibodies to products of a gene closely linked to Lyt-2 were responsible for the blocking observed, it is quite likely that such antibodies would be included as contaminants in conventional anti-Lyt-2 (and perhaps anti-Lyt-3) sera. Studies using monoclonal anti-Lyt-2 hybridoma antibody are also in progress to try to answer this question (N. Shinohara, U. Hammerling, and D. H. Sachs, manuscript in preparation).
At present, we can only speculate on the possible mechanisms of the inhibition of CML we have observed. The most trivial possibility might be agglutination of killer cells by antibodies. Massive agglutination could perhaps prevent killer cells from contacting target cells. Although this possibility has not been formally ruled out, it does not seem likely, because exposing killer cells to high-titered antibodies of other specificities did not inhibit their killing function (Tables I and III; Figs. 1 and 2).
A second possibility is alteration of the cell surface by antibodies which somehow results in the inability of killer cells to achieve normal function. Because many other antibodies reactive with T cells failed to cause inhibition, this model requires a certain peculiar nature of the molecules reactive with the inhibitory antibodies. Such a relationship could be a close physical association of the Lyt-2-related determinants with some functional molecule necessary for killing. Even without physical association of the two kinds of molecules, such specific interference might occur, as has been exemplified in blocking of B-cell Fc receptors by anti-Ia antibodies (14).
A third possibility is that the inhibitory antibodies react with functional molecules other than antigen receptors. After interacting with the antigen on target cells, killer cells may deliver a killing message to the target cells through mediators either expressed on their surfaces or released locally. If the antibodies reacted with such molecules, killing might not take place. The Lyt-2 antigen has been shown to be a marker for a subpopulation of T cells with certain functions including killer and suppressor cells (15-17). If Lyt-2 molecules were the mediators of killing, this might explain the correlation between this surface marker and the function of cells bearing the molecules. However, this explanation would not account for the fact that many immature peripheral T cells and thymocytes also bear Lyt-1, 2, and 3 (18, 19).
Perhaps the most attractive possibility is that the observed inhibition reflects interaction of anti-receptor antibodies and antigen-recognition structures of killer T cells. In this model, the anti-receptor antibodies would be directed to a constant portion of the receptor, because they inhibit the function of killer cells with various different specificities, i.e., anti-H-2d, anti-H-2k, and anti-H-2b. Of course, this apparent lack of killer specificity could be explained by the presence of multiple anti-idiotypic antibodies in the serum, and absorption studies would be required to distinguish between these possibilities. If Lyt-2 molecules were the antigen-receptor molecules, this would provide a possible explanation for the presence of Lyt-2 molecules on immature T cells and thymocytes. Recent investigations on the ontogeny of the T-cell repertoire indicate that precursor T cells differentiate and are selected in the thymus so that the repertoire of mature T cells is restricted in terms of self-reactivity (20). If this is true, precursor T cells should express their clonal marker, i.e., receptors, on their surface so as to be subjected to such selection before full differentiation.
One of the difficulties with this model is the fact that certain populations of T cells lack the Lyt-2 marker (14-16). These include helper cells and I-region-specific mixed leukocyte reaction (MLR)-reactive cells. One would thus have to postulate the existence of two separate sets of genes coding for T-cell receptors, one for Lyt-23- cells and another for Lyt-23+ cells. Idiotypes of T-cell receptors have been intensively studied recently (reviewed in references 2 and 3), and these studies have suggested that at least a part of the antigen-combining portions of T-cell receptors is encoded by genes linked to the immunoglobulin heavy chain allotype locus. The T cells involved in these studies were helper cells and MLR-reactive cells, both of which functions are predominantly attributed to Lyt-23- cells in the mouse (15, 16). So far, no genetic relationships of receptors of Lyt-1-23+ cells to Ig heavy chain genes have been demonstrated. It is tempting to speculate that different subsets of T cells could use different sets of genes for their receptors, Ig heavy chain-linked genes for the Lyt-1+23- population and Lyt-2-linked genes, perhaps κ-light chain-linked genes (21, 22), for Lyt-23+ cells. In this regard, we intend to examine the effects of our alloantisera on other functional T-cell subsets. If similar blocking activities are found, it will be of interest to determine the genetic relationships of the relevant targets.
Summary
In an attempt to produce alloantibodies to cytotoxic T-cell receptors, hyperimmune anti-lymphocyte antisera have been raised in mice of various strain combinations, and have been tested for their ability to block allogeneic cell-mediated lymphocytotoxicity (CML) in the absence of complement at the T killer cell level. Most of the sera failed to show any significant and reproducible inhibitory effects. However, among C3H anti-B10.BR antisera, some sera were found to be capable of significantly inhibiting CML. This effect was attributable to antibodies reacting with the killer population rather than the target cells, because the sera inhibited B10 anti-C3H CML but not C3H anti-B10 CML. Among mouse strains tested, A/J, BALB/c, B10, and B6 strains were sensitive to the inhibitory effect of the sera, whereas AKR, CBA, C3H, and DBA/2 strains were insensitive. This sensitivity of killer cells to the inhibitory effect correlated well with the strain distribution of the Lyt-2.2 antigen. In the presence of complement, these same sera were toxic to 100% of spleen cells of AKR, BALB/c, B10, and DBA/2 strains, with comparable cytotoxic titers. Thus, the inhibitory activity of the sera could not be explained by nonspecific effects of high-titered antibodies. To study the relationship between the antigen(s) responsible for the blocking effect and Lyt-2-linked genes, killer cells from Lyt-2 congenic strains were tested and conventional anti-Lyt-2.2 antisera were raised in an appropriate congenic strain combination. Killer cells from B6, but not from B6.Ly2.1 animals, were significantly sensitive to the blocking effects of the inhibitory C3H anti-B10.BR sera. The conventional anti-Lyt-2.2 sera did produce CML blocking, although there was no apparent correlation between such blocking and the anti-Lyt-2.2 cytotoxic titer.
These results thus indicate that the target molecules responsible for blocking of killer cells are encoded or regulated by genes that are closely linked to or identical with Lyt-2.
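The genetic argument above rests on a perfect concordance between killer-cell sensitivity to the blocking sera and the Lyt-2 allele carried by each strain. The tabulation below sketches that concordance check using the strain lists given in the Summary; the allele assignments are an assumption inferred from the stated correlation with the Lyt-2.2 antigen, not data reported in this excerpt.

```python
# Strains whose killer cells were blocked by the C3H anti-B10.BR sera (Summary).
sensitive = {"A/J", "BALB/c", "B10", "B6"}
# Strains whose killer cells were not blocked.
insensitive = {"AKR", "CBA", "C3H", "DBA/2"}

# Assumed Lyt-2 allele of each strain, inferred from the correlation the
# paper reports (sensitive strains carry Lyt-2.2, insensitive carry Lyt-2.1).
lyt2_allele = {
    "A/J": "2.2", "BALB/c": "2.2", "B10": "2.2", "B6": "2.2",
    "AKR": "2.1", "CBA": "2.1", "C3H": "2.1", "DBA/2": "2.1",
}

# Concordance check: sensitivity segregates exactly with the Lyt-2.2 allele.
assert all(lyt2_allele[s] == "2.2" for s in sensitive)
assert all(lyt2_allele[s] == "2.1" for s in insensitive)
```

The congenic comparison (B6 sensitive, B6.Ly2.1 insensitive) is the decisive case, because those two strains differ only at a chromosomal segment including the Lyt-2 locus.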
Received for publication 12 March 1979.