Antimicrobial resistance and its relationship with biofilm production and virulence-related factors in Yersinia enterocolitica biotype 1A
The aim of the present study was to determine antimicrobial susceptibilities and biofilm production, and to discern a relationship between antimicrobial resistance, biofilm potential and virulence-related genes in strains of Yersinia enterocolitica biotype 1A. Thirty strains of Y. enterocolitica biotype 1A, including clinical and non-clinical strains, were investigated. Antimicrobial susceptibility to 15 antibiotics (representing different classes) was determined by disk-diffusion assay. Biofilm potential was determined in two different culture media using the crystal violet assay. A correlation was also sought between antimicrobial susceptibilities, biofilm production and virulence-related genes. All strains of biotype 1A produced biofilms and exhibited varied levels of susceptibility to different antibiotics. More than 60% of the strains were strong to moderate biofilm producers and were exclusively associated with REP/ERIC clonal group B. Moderate and strong biofilm producers exhibited both sensitive and resistant phenotypes towards different antibiotics. Interestingly, weak biofilm producers were resistant to amoxicillin, amoxicillin-clavulanate and cefazolin. Analysis of antimicrobial susceptibilities, biofilm potential and virulence-related genes did not reveal any unequivocal relationships. The differential biofilm potential of Indian strains of Y. enterocolitica biotype 1A suggests that biotype 1A strains are heterogeneous in nature.
Introduction
Bacteria which produce biofilms exhibit significantly higher antimicrobial resistance and virulence than their planktonic forms [1,2]. This suggests that antimicrobial resistance, biofilms and enhanced virulence might be related to each other. Although multiple mechanisms underlie the biofilm-associated enhancement of bacterial virulence and antimicrobial resistance, the exact mechanisms are not well understood [3].
Yersinia enterocolitica is an enteric pathogen which causes a variety of diseases in humans [4]. It can be classified into more than fifty serotypes and six biotypes, of which five (1B, 2, 3, 4, 5) are considered pathogenic [5]. Due to the absence of pYV (plasmid for Yersinia virulence) and major chromosomal virulence genes, strains of biotype 1A were initially considered non-pathogenic [6]. However, strains of biotype 1A have been reported from clinical samples across the globe, which indicates the pathogenic nature of these strains [7,8]. Though several studies have reported antimicrobial susceptibilities, virulence-related genes and biofilm potential of many bacterial species, only a few have tried to discern the relationship between them in Y. enterocolitica. Further, information about a probable relationship between antimicrobial susceptibilities, virulence-related genes and biofilm potential in Y. enterocolitica biotype 1A is scarce. Hence, in-depth analysis of antimicrobial susceptibilities, virulence-related genes and biofilm potential of Y. enterocolitica biotype 1A strains should be performed to understand the true pathogenic potential of these strains. The present study therefore examined the antimicrobial susceptibilities, virulence-related genes and biofilm potential of thirty Y. enterocolitica biotype 1A strains and attempted to discern a relationship between them. The strains were isolated from various sources, such as clinical samples, wastewater, pigs and pork. Though the relationship between antimicrobials and biofilm formation has been studied in many members of the family Enterobacteriaceae, to the best of our knowledge, the relationship between antimicrobials, biofilms and virulence is reported here in Y. enterocolitica biotype 1A strains for the first time.
Bacterial strains
In the present study, 30 well-characterized strains of Y. enterocolitica biotype 1A were examined. These strains were authenticated and serotyped by the Yersinia National Reference Laboratory and WHO Collaborating Center, Pasteur Institute, Paris (France). These strains have also been deposited at the Pasteur Institute, Paris (France) and at the national repository, i.e. the Microbial Type Culture Collection (MTCC) and Gene Bank located at the Institute of Microbial Technology, Chandigarh, India. The strains were maintained in our laboratory at the University of Delhi South Campus, New Delhi, India, on tryptone soy agar at 4 °C. The strains were isolated from various sources and genotyped using repetitive extragenic palindromic sequence (REP) and enterobacterial repetitive intergenic consensus sequence (ERIC) typing, which revealed that they belonged to one of two clonal groups, A or B [9]. The details of these strains, viz. laboratory accession numbers, serotypes, sources of isolation and clonal groups, are given in Table 2. Y. enterocolitica subsp. enterocolitica strain ATCC® 23715™ was included as a reference strain.
Antimicrobial susceptibility testing
Antimicrobial susceptibilities of Y. enterocolitica biotype 1A strains were determined by the disk diffusion test following the guidelines of the Clinical and Laboratory Standards Institute (CLSI) [10]. Briefly, the bacteria were spread-plated on MacConkey agar plates and paper discs impregnated with different antibiotics were placed on the agar surface. The plates were incubated at 37 °C overnight and observed for a zone of inhibition (no bacterial growth) around the antibiotic disks. The diameter of the zone of inhibition around each antibiotic disk was measured and compared with a database of zone standards [10], and the bacterial strains were accordingly designated as susceptible, intermediate or resistant (Fig. 1). The antibiotic disks (HiMedia, India) used in the present study represented the major antibiotic classes: a β-lactam antibiotic, amoxicillin (AMX); a β-lactam plus β-lactamase inhibitor combination, amoxicillin-clavulanate (AMC); a first generation cephalosporin, cefazolin (CZ); a second generation cephalosporin, cefuroxime (CXM); third generation cephalosporins, cefoperazone (CPZ) and cefixime (CFM); a fourth generation cephalosporin, cefepime (CPM); and a carbapenem, imipenem (IPM). The β-lactams and cephalosporins kill bacteria by inhibiting the synthesis of the bacterial cell wall. Quinolones (which inhibit bacterial DNA replication) were represented by ciprofloxacin (CIP) and aminoglycosides (which inhibit bacterial protein synthesis) by tobramycin (TOB), gentamicin (GEN) and kanamycin (K). Erythromycin (E) represented the macrolides (which inhibit bacterial protein synthesis), and furazolidone (FR) represented the nitrofurans (which inhibit many bacterial enzyme systems). The results were interpreted as per the CLSI guidelines [10]. The antibiotic susceptibility breakpoints suggested by CLSI in 2017 are the same as in previous years for most of the antibiotics, except CPM, IPM and CZ [10].
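As an illustration of how measured zone diameters translate into categorical calls, the sketch below encodes a breakpoint lookup of the kind described above; the millimetre values shown are hypothetical placeholders, not actual CLSI breakpoints.

```python
# Hedged sketch of disk-diffusion interpretation against zone-diameter
# breakpoints. The numbers below are hypothetical placeholders, NOT
# actual CLSI M100 breakpoints.
BREAKPOINTS_MM = {
    # antibiotic code: (resistant_max, susceptible_min) in mm
    "AMX": (13, 17),
    "CIP": (15, 21),
}

def interpret_zone(antibiotic: str, zone_mm: float) -> str:
    """Classify a strain as resistant/intermediate/susceptible."""
    resistant_max, susceptible_min = BREAKPOINTS_MM[antibiotic]
    if zone_mm <= resistant_max:
        return "resistant"
    if zone_mm >= susceptible_min:
        return "susceptible"
    return "intermediate"

print(interpret_zone("AMX", 19))  # larger zones mean greater susceptibility
```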
Assessment of biofilm formation
Assessment of biofilm formation by Y. enterocolitica biotype 1A strains was performed in two different broth media, viz. Mueller-Hinton broth (MHB) and Tryptone Soya broth (TSB), following published protocols with slight modifications [11]. Briefly, 50 μl of cultures grown overnight at 28 °C, with the cell density adjusted to 1 × 10⁹ cells/ml, were inoculated into 1.5 ml polypropylene microcentrifuge tubes (Tarsons, USA) containing 1 ml of MHB or TSB, and incubated further at 28 °C for 24 h and 48 h, without shaking. The medium was removed after 24 h and 48 h, and the microcentrifuge tubes were dried at 55 °C for 30 min. One ml of 0.1% crystal violet (prepared in isopropanol:methanol:phosphate-buffered saline in the ratio 1:1:18, v/v) was added to all microcentrifuge tubes and incubated at room temperature for 30 min. The crystal violet was then removed, followed by two washes with 1 ml of sterile distilled water. The microcentrifuge tubes were further dried at 55 °C for 30 min. The dye bound to the biofilm was dissolved in 200 μl of an ethanol-acetone mixture (4:1 v/v), and 100 μl of this mixture was transferred to a 96-well microtiter plate. Optical density (OD) was measured at 540 nm using an ELISA plate reader (Thermo Scientific, USA). The strains were classified as non-adherent, weakly, moderately or strongly adherent based upon the ODs of the bacterial biofilms. The cut-off OD (ODc) was defined as three standard deviations above the mean OD of the negative control (0.20 ± 0.00 for MHB and 0.09 ± 0.00 for TSB). Strains were classified using the following criteria: OD ≤ ODc or ODc < OD ≤ 2× ODc, non/weak biofilm producer; 2× ODc < OD ≤ 4× ODc, moderate biofilm producer; OD > 4× ODc, strong biofilm producer. Y. enterocolitica strain ATCC® 23715™ was included as a positive control. The assay was performed for each isolate in biological and technical triplicates and the average result was reported. Statistical significance was calculated using the Mann-Whitney U-test with the R statistical package. A p-value < 0.05 was considered significant. A representative picture of the crystal violet assay is presented as Fig. 2.
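A minimal sketch of the OD-based classification above follows, assuming a mean OD₅₄₀ reading per strain and triplicate negative-control readings; the numeric readings used here are hypothetical.

```python
# Sketch of the crystal-violet classification criteria described above.
import statistics

def biofilm_category(od: float, odc: float) -> str:
    """Classify a strain from its mean biofilm OD540 against the cut-off ODc."""
    if od <= 2 * odc:
        return "non/weak biofilm producer"
    if od <= 4 * odc:
        return "moderate biofilm producer"
    return "strong biofilm producer"

# ODc = mean of negative control + 3 standard deviations (readings hypothetical)
negative_control = [0.19, 0.20, 0.21]
odc = statistics.mean(negative_control) + 3 * statistics.pstdev(negative_control)
print(biofilm_category(0.85, odc))
```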
Results and discussion
Results of antimicrobial susceptibility testing indicated that all the strains of biotype 1A were resistant to CZ, a first generation cephalosporin, but sensitive to the carbapenem IPM, the fluoroquinolone CIP and the aminoglycoside GEN (Table 1). The strains showed varied susceptibilities to the other antibiotics. Among β-lactams, though only 8 (30%) strains were sensitive to AMX, a greater sensitivity (12 strains; 40%) was observed for the β-lactam plus β-lactamase inhibitor combination AMC. Y. enterocolitica biotype 1A strains showed a good level of susceptibility to the second generation cephalosporin CXM, with 23 (76%) strains sensitive to it. Biotype 1A strains showed different susceptibilities to the third generation cephalosporins CPZ and CFM.

Results of the crystal violet assay indicated that all biotype 1A strains were capable of producing biofilms, even when cells were not exposed to any external stress such as depletion of iron, carbon or nitrogen, or a low concentration of oxygen. In MHB medium, after incubation for 24 h, 11 (36.7%) strains were classified as strong biofilm producers, 12 (40%) as moderate biofilm producers and 7 (23.3%) as weak biofilm producers, while after 48 h of incubation in MHB medium, 14 (46.7%) strains were classified as strong, 13 (43.3%) as moderate and 3 (10%) as weak biofilm producers (Table 2). In TSB medium, after incubation for 24 h, only 1 (3.3%) strain was a strong biofilm producer, 20 (66.7%) strains were moderate and 9 (30%) were weak biofilm producers. After 48 h of incubation in TSB medium, 9 (30%) strains were found to be strong, 19 (63.3%) moderate and 2 (6.7%) weak biofilm producers. A significant improvement in the biofilm-forming capability of biotype 1A strains was observed after 48 h of incubation in either culture medium (p < 0.05). The reference strain ATCC® 23715™ was found to be a moderate biofilm producer after 24 h of incubation but showed a strong biofilm-forming potential after 48 h, in both MHB and TSB. Thus, our results are in concordance with earlier studies which reported that all strains of Y. enterocolitica produced biofilms [13,14,15]. A recent study, however, reported that strains of Y. enterocolitica biotype 1A isolated from meat samples either did not produce biofilms or were weak producers [8].
It was observed that the biotype 1A strains classified as weak biofilm producers were resistant to the β-lactam antibiotic AMX, the β-lactam plus β-lactamase inhibitor combination AMC, and the first generation cephalosporin CZ. These strains, however, displayed varied levels of susceptibility to the other antibiotics. The strains classified as moderate or strong biofilm producers showed varied levels of susceptibility, exhibiting both sensitive and resistant phenotypes towards different antibiotics. A previous study also reported that biofilm-forming Y. enterocolitica biotype 4 isolates were more resistant to antibiotics than the planktonic forms [15]. However, in the present study, biotype 1A strains showed a range of susceptibilities to different antibiotics. Such associations between antimicrobial susceptibility and biofilm formation in Y. enterocolitica biotype 1A strains reinforce an earlier suggestion that Y. enterocolitica biotype 1A represents a heterogeneous population of more than one subspecies [16].
Analysis of the biofilm-forming potential of biotype 1A strains alongside genotyping using repetitive extragenic palindromic sequence (REP) and enterobacterial repetitive intergenic consensus sequence (ERIC) typing revealed an interesting observation: while REP/ERIC clonal group A was associated with weak, moderate and strong biofilm producers, strains of REP/ERIC clonal group B were exclusively strong or moderate biofilm producers (Table 2). In an earlier study, it was reported that four virulence-associated genes, viz. subtilisin/kexin-like protease (hreP), fimbriae (myfA), Yersinia stable toxin B (ystB) and streptogramin acetyltransferase (sat), were exclusively associated with strains of clonal group A [12]. Thus, clonal group A appears to encompass strains exhibiting a more diverse biofilm potential and a greater number of virulence-associated genes. No correlation was observed between the biofilm-forming potential of biotype 1A strains and the source of isolation (clinical versus non-clinical). Thus, it might be inferred that the biofilm-forming potential of biotype 1A strains is related to the clonal groups rather than the source from which they were isolated. Various studies have shown that biotype 1A is genetically the most heterogeneous biotype of Y. enterocolitica, encompassing strains of numerous serotypes [17,18]. Thus, it is reasonable to assume that biofilm formation in biotype 1A might be a strain-specific character which cannot be extrapolated to all strains of the same biotype, serotype or source of isolation.
Despite the immense heterogeneity in the O-antigens, the strains clustered into a limited number of clonal groups, which suggests that there might be limited genotypic diversity in the biotype 1A strains studied [9]. Our study failed to reveal an unequivocal relationship between antimicrobial susceptibilities, biofilm production and virulence-related genes. However, our results indicated a relationship between clonal groups and the biofilm-forming potential of Y. enterocolitica biotype 1A strains, with REP/ERIC clonal group B associated with strains exhibiting strong and moderate biofilm-forming potential and REP/ERIC clonal group A with weak, moderate and strong biofilm producers. It was also observed that the biofilm potential of the biotype 1A strains investigated in this study differed from that of biotype 1A strains isolated from other parts of the world [8]. These differences might be attributed to the heterogeneous nature of biotype 1A strains isolated from different parts of the world. However, further studies using serotypes of biotype 1A not represented in the present study are required to corroborate these findings and unravel the true virulence potential of biotype 1A strains.
Declarations
Author contribution statement
Comparative Analysis of Gradient-Boosting Ensembles for Estimation of Compressive Strength of Quaternary Blend Concrete
Concrete compressive strength is usually determined 28 days after casting via crushing of samples. However, the design strength may not be achieved after this time-consuming and tedious process. While the use of machine learning (ML) and other computational intelligence methods has become increasingly common in recent years, findings from the pertinent literature show that gradient-boosting ensemble models mostly outperform comparative methods while also allowing interpretable models. In contrast to the comparison with other model types that has dominated existing studies, this study centres on a comprehensive comparative analysis of the performance of four widely used gradient-boosting ensemble implementations [namely, gradient-boosting regressor, light gradient-boosting model (LightGBM), extreme gradient boosting (XGBoost), and CatBoost] for estimation of the compressive strength of quaternary blend concrete. Given the proportions of cement, ground granulated blast furnace slag (GGBS), fly ash, water, superplasticizer, coarse aggregate and fine aggregate, in addition to the age of each concrete mixture, as input features, the performance of each model based on R², RMSE, MAPE and MAE across varying training-test ratios generally shows a decreasing trend in model performance as the test partition increases. Overall, the test results showed that CatBoost outperformed the other models with R², RMSE, MAE and MAPE values of 0.9838, 2.0709, 1.5966 and 0.0629, respectively, with further statistical analysis confirming the significance of these results. Although the age of each concrete mixture was found to be the most important input feature for all four boosting models, sensitivity analysis of each model shows that the compressive strength of the mixtures does not increase significantly after 100 days. Finally, a comparison of the performance with results from different ML-based methods in the pertinent literature further shows the superiority of CatBoost over the reported methods.
Introduction
Climate change and global warming have accelerated due to increasing emissions of greenhouse gases (GHG). This has led to serious environmental problems, such as droughts, floods and heat waves (Pandey & Kumar, 2022). The production of concrete used in the construction industry remains one of the largest sources of GHG and accounts for about 50% of global emissions (Allujami et al., 2022a, 2022b; Di Filippo et al., 2019). GHG from concrete production is expected to increase as demand for concrete keeps surging due to human development. The production of Portland cement (PC) produces vast amounts of CO₂ through a process called calcination of calcium oxide (CaO). This calcination accounts for around 7% of global CO₂ emissions to the atmosphere (Benhelal et al., 2019). This emission is expected to increase as annual consumption of cement rises from its present 4000 million tonnes to about 6000 million tonnes by the year 2060 (Moreira & Arrieta, 2019). These figures show the need for sustainable and more environmentally friendly materials to replace cement partially or fully, not only to meet the growing demand but also to reduce emissions of CO₂ (Ebid et al., 2022; Mikulčić et al., 2016).
In view of the abovementioned problems, industrial wastes have been used in the production of concrete. This approach results in a drastic decrease in the PC used in construction and prevents the environmental degradation caused by disposal of these hazardous industrial wastes (Agrawal et al., 2021; Hashim & Tantray, 2021). The use of industrial wastes can reduce the GHG emissions of normal concrete by about 80%. The commonly used industrial wastes that act as supplementary cementitious materials in concrete include fly ash (FA), ground granulated blast furnace slag (GGBS) and silica fume (SF) (Hammad et al., 2021; Hashmi et al., 2021; Okashah et al., 2020). They have been used as partial replacements for cement when producing improved and more sustainable concrete. This practice is favoured by the availability of large quantities of these industrial wastes: about 300 million tonnes of FA is produced annually, with only 25% of this production being used for concrete production (Dan et al., 2021). Similarly, annual global production of GGBS is around 280 million tonnes, with less than 10% of this production being utilised in concrete production (Kamath et al., 2021).
In the production of concrete for structural usage, an in-depth and accurate knowledge of its properties is required (Ebid & Deifalla, 2022; Salem & Deifalla, 2022; Song et al., 2021). Compressive strength, being the most important property, can be improved by partial replacement of cement with these cementitious industrial wastes in accurate proportions. The compressive strength is generally ascertained by testing (crushing) concrete specimens (cubes or cylinders), usually 28 days after casting (Allujami et al., 2022a, 2022b; Ebid & Deifalla, 2021). However, this method of obtaining the compressive strength of concrete is time consuming, tedious and expensive (Badra et al., 2022; Silva et al., 2020). In addition, the desired strengths are often not attained, making it less effective (Deifalla & Salem, 2022; Salami et al., 2022). This has led researchers to use machine learning (ML) and artificial intelligence (AI) algorithms to obtain the mechanical properties of concrete. The use of AI and ML techniques, such as decision trees (DT), artificial neural networks (ANN), support vector machines (SVM) and extreme learning machines (ELM), in estimating (predicting) concrete properties takes into account certain parameters of the concrete (such as mix proportions and age) and its constituents to achieve reliable estimations (Gupta et al., 2006; Mustapha et al., 2022).
Several ML approaches have been proposed over the years for accurate estimation of the compressive strength of concrete. For example, Cook et al. (2019) presented a hybrid ML model that combined the firefly algorithm (FFA) with random forests (RF) to predict the compressive strength of concrete. A correlation between the input variables and output was developed by training the hybrid (RF-FFA) model with two different categories of data sets. They concluded that the hybrid RF-FFA model performed better than standalone ML models, such as SVM, RF, the M5Prime model-tree algorithm and the multilayer perceptron ANN (MLP-ANN). Shariati et al. (2020) presented a novel hybrid ML approach using the grey wolf optimizer to predict the compressive strength of concrete with partial replacement of cement. The results were compared to those obtained via an adaptive neuro-fuzzy inference system (ANFIS), extreme learning machine (ELM), ANN, support vector regression with a radial basis function kernel (SVR-RBF), and another SVR with a polynomial function kernel (SVR-Poly). Dao et al. (2020a, 2020b) applied an optimized conventional ANN to predict the compressive strength of foamed concrete. Dry density was included as an input parameter, while the volume of foam was ignored in their study. The results showed a high correlation (R² of 0.97) for the models. The authors referred to the ANN as a black-box model, since it provides no practical information about the predicted model, citing the vast number of hidden neurons as a major impediment to developing an empirical relation between input and output parameters. Abellán-García (2020) presented an ANN model with four layers to predict the compressive strength of ultra-high-performance concrete (UHPC). A total of 927 data samples and 18 mixture design variables were used as input. While impressive results were similarly reported, the proposed approach shares a common shortcoming with the other aforementioned approaches in that knowledge of the contribution of each input feature to the model predictions of the concrete mixtures is lacking. Besides, the results reported in most of these studies are still open to further improvement.
The quest for more accurate estimation of the compressive strength of HPC has inspired the use of nature-inspired methods, such as gene expression programming (GEP). For instance, Ullah et al. (2022) applied a database of 191 data points to develop a relationship between the mix design parameters and compressive strength of foamed concrete using GEP. The input variables were cement content, sand content, water to cement ratio and foam volume, while the output parameters were the dry density and compressive strength. The results showed that 95% of the predicted compressive strengths had error values of less than 2%. Recently, Shah et al. (2022) presented a comparative analysis using different ML techniques to predict the compressive strength of sugarcane bagasse ash (SCBA) concrete. The ML techniques included random forest regression (RFR), GEP and SVM. The results were compared to experimental testing. The input variables were water-cement ratio, cement content, SCBA dosage (SCBA%), and the quantities of fine aggregate and coarse aggregate. The results showed that the R² values of all the ML techniques were above 0.85, and the RRMSE and performance index (PI) were less than 10% and 0.2%, respectively, with GEP producing the most accurate results across the compared methods. While GEP allows the generation of simple mathematical equations for built models, it can be computationally expensive. Besides, its performance has long been shown to be similar to or lower than other existing genetic programming methods (Oltean & Grosan, 2003). In fact, recent studies on compressive strength estimation such as (Fakharian et al., 2023; Salami et al., 2022; Song et al., 2021) have shown via empirical results that ML methods such as ANN and classifier ensembles outperform GEP across several evaluation metrics.
Boosting methods are a class of ensemble machine learning methods that have found wide application in many real-life domains with impressive results (Babajide Mustapha & Saeed, 2016). They generally enhance learning by merging the predictions of several simple base learners into a composite whole (Tanha et al., 2020). Different implementations of boosting ensembles have been employed by several researchers for compressive strength estimation. For example, Kaloop et al. (2020) investigated the use of a multivariate adaptive regression splines (MARS) model to extract the optimum inputs for compressive strength design of HPC. The extracted features were fed to a gradient-tree-boosting machine (GBM). While improved results over comparative methods were reported, the authors also found concrete age to be the most influential input parameter. Feng et al. (2020) applied an adaptive boosting algorithm (Adaboost) to predict the compressive strength of concrete, given curing time and mixture contents as input variables. Using tenfold cross validation for model validation, the authors reported notable improvement in performance over classical methods, such as ANN and SVM. Nguyen-Sy et al. (2020) demonstrated accurate prediction of the compressive strength of concrete using an extreme gradient-boosting (XGBoost) model. Sensitivity analysis was carried out to optimize the number of estimators by varying it from 100 to 1000 while keeping the default values of the other hyperparameters constant. An increase in the number of estimators was found to generally lead to increased model accuracy.
In another related study, Cui et al. (2021) proposed a novel XGBoost prediction model based on grey relation analysis (GRA) for the estimation of the compressive strength of concrete containing slag and metakaolin. Empirical findings showed that XGBoost outperformed ANN and its genetic algorithm hybridized variant (GA-ANN). A similar study by Nguyen et al. (2021) concluded that XGBoost and gradient-boosting regressor (GBR) models outperformed the likes of SVM and MLP for prediction of the compressive strength and tensile strength of HPC.
Apart from XGBoost, other gradient-boosting implementations have found application in concrete property estimation. For instance, Alabdullah et al. (2022) investigated the use of LightGBM in the estimation of the compressive strength of UHPC with similarly high prediction accuracy. In another pertinent study, de-Prado-Gil et al. (2022) applied a CatBoost (CBT) model to predict the compressive strength of a self-compacting concrete. The study was conducted using 381 data samples. Experimental findings showed that the cement content had the highest influence on model output.
There has also been notable growth in the application of deep learning methods for compressive strength estimation in recent years. Jang et al. (2019) proposed image-based compressive strength estimation of concrete using three deep neural network (DNN) architectures, namely, ResNet, GoogLeNet and AlexNet. Images of the surfaces of specially produced specimens were captured with a portable digital microscope and used to train each model for compressive strength estimation. Empirical results show that the DNN models outperformed fully connected ANNs, with ResNet showing the best performance. In addition, deep learning-based estimation of the compressive strength of fiber-reinforced concrete at elevated temperatures was proposed in (Chen et al., 2021). Using the concrete mix, heating profile and fiber properties as model inputs, three variations of convolutional neural network (CNN) models were shown to outperform several models including SVR, ANN and Adaboost. Deep learning models such as CNNs have also been hybridized with evolutionary algorithms, such as GA, for improved performance (Ranjbar et al., 2022). More recently, Hoang (2023) proposed a deep learning-based estimation of the compressive strength of rice husk ash-blended concrete using an asymmetric loss function. Results from this study showed better performance than ANN and multivariate adaptive regression splines.
The pursuit of accurate estimation of the compressive strength of concrete has inspired a myriad of research studies over the years, each seeking to achieve this goal via some machine learning method. However, findings from the foregoing show that gradient-boosting ensembles and DNN-based approaches stand out, mostly performing better than popular methods such as SVR, classical ANN, GEP, KNN and their hybrid variants, amongst others. The gradient-boosting ensemble methods are particularly the focus of this study, given their high accuracy and interpretability. Besides, a comprehensive comparative study on gradient-boosting algorithms for prediction of the compressive strength of quaternary blend concrete remains lacking. Such a study has the potential of guiding field engineers on the choice of computational tools for accurate and reliable estimation of properties when designing concrete.
Thus, this study aims to compare the performance of four gradient-boosting algorithms in estimating the compressive strength of quaternary blend concrete. The algorithms are gradient-boosting regressor (GBR), light gradient-boosting model (LGBM), eXtreme gradient boosting (XGB), and CatBoost (CBT). In the training phase, hyperparameter optimization of each algorithm is first carried out using fivefold cross validation to ensure optimal model performance. Twenty optimal models were built, five for each gradient-boosting algorithm, using different training-test splits to obtain the best performing model in terms of mean squared error. The input variables are the proportions of cement, ground granulated blast furnace slag (GGBS), fly ash (FA), water, superplasticizer, coarse aggregate, fine aggregate, and concrete age. The performance of each final model is evaluated using four popular statistical measures, namely, root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and coefficient of determination (R²). A sensitivity analysis is carried out to understand the importance and contribution of the input/predictor variables. Finally, the obtained results are compared with results reported in previous studies (other methods).
The key contributions of this study are highlighted as follows:
• Prediction of the compressive strength of quaternary blend concrete using CBT.
• A comprehensive comparative analysis of gradient-boosting algorithms (GBR, CBT, XGB and LGBM) for the estimation of quaternary blend concrete.
• An intuitive insight into the importance and contribution of input features for the estimation of quaternary blend concrete.
• Comparison of the performance of gradient-boosting algorithms with results from previous studies.
Computational Methods
The gradient-boosting ensembles considered in this research are gradient-boosting regressor (GBR), light gradient-boosting model (LGBM), eXtreme gradient boosting (XGB), and CatBoost (CBT). These models have been selected based on their performance in pertinent studies relating to estimation of the mechanical properties of concrete. The advantage that model interpretability offers makes them especially useful for field engineers, allowing them to understand the impact of input parameters without undergoing tedious and time-consuming laboratory experiments. Each of the selected methods is detailed in what follows.
Gradient-Boosting Regressor
Gradient-boosted decision trees (GBDT) have been widely used in machine learning. The gradient-boosting regressor (GBR) (Friedman, 2002) is arguably the earliest well-known implementation of the idea of gradient descent boosting of decision trees, optimizing an arbitrary differentiable loss function via a stagewise additive approach to model building. Every iteration of the model building process involves fitting a classification and regression tree (CART) on the negative gradient (i.e., the residual error between the estimated and the target output) of an arbitrary loss function (Friedman, 2002). Gradient boosting of decision trees has been shown to be robust to overfitting while producing highly competitive results, especially when modelling noisy data. In addition, it is also interpretable, as it offers the relative importance of the input features used in model building. The two main hyperparameters for optimal gradient boosting are the number of boosting stages and the shrinkage parameter, also known as the learning rate (Friedman, 2001).
In general, in GBR, the model is initialized with a constant value γ (a tree with just one leaf node) that minimizes the loss over all the samples, as in the following equation:

$$F_0(x) = \arg\min_{\gamma} \sum_{i=1}^{n} L(y_i, \gamma) \quad (1)$$

This is followed by several iterations of negative gradient computation of the loss function L, its subsequent usage to fit a decision tree $h_m$, and the addition of the new model to the ensemble, as in the following equation:

$$F_m(x) = F_{m-1}(x) + \nu \, h_m(x) \quad (2)$$

where ν is the shrinkage parameter used to control overfitting. Although GBR is used for a regression problem in the present study, it is also suitable for classification problems. Extensive details of the theoretical foundation of the gradient-boosting regressor can be found in (Friedman, 2001, 2002).
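A minimal from-scratch sketch of this stagewise procedure for the squared-error loss, where the negative gradient reduces to the residuals, is shown below; it is for intuition only, not a substitute for the Scikit-learn GradientBoostingRegressor used later.

```python
# Stagewise gradient boosting for squared-error loss, mirroring Eqs. (1)-(2).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbr(X, y, n_stages=100, v=0.1, max_depth=3):
    f0 = y.mean()                       # Eq. (1): loss-minimizing constant for L2
    pred, trees = np.full(len(y), f0), []
    for _ in range(n_stages):
        residuals = y - pred            # negative gradient of the squared error
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred += v * tree.predict(X)     # Eq. (2): shrunken additive update
        trees.append(tree)
    return f0, trees

def predict_gbr(model, X, v=0.1):
    f0, trees = model
    return f0 + v * sum(t.predict(X) for t in trees)
```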
XGBoost
Another gradient-boosting implementation considered in this study is the extreme gradient-boosting (XGBoost) algorithm. XGBoost is an optimized variant of gradient boosting that combines the predictions of several "weak" classification and regression tree (CART) learners to develop a "strong" learner using additive training strategies (Chen et al., 2015). XGBoost is especially known for preventing overfitting efficiently through a simplified objective function that combines the loss and regularization terms. The regularized optimization objective is as in the following equation:

$$\mathcal{L} = \sum_{m} l(y_m, \hat{y}_m) + \sum_{k} \Omega(f_k) \quad (3)$$

where l is the loss function that measures the difference between the experimental output $y_m$ and the estimated output $\hat{y}_m$, and Ω is the regularization term given as the following equation:

$$\Omega(f) = \gamma T + \frac{1}{2} \lambda \lVert w \rVert^2 \quad (4)$$

where T and w are the number of leaves and the score on each leaf, respectively, and γ and λ are constants for controlling the degree of regularization. Although used for a regression problem in this study, XGBoost is suitable for all types of supervised learning problems. See Chen et al. (2015) for a detailed background on this algorithm.
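A hedged usage sketch of the official XGBoost Python API follows; gamma and reg_lambda correspond to the γ and λ constants in Eq. (4), and the values shown are illustrative rather than the tuned values reported in Table 2.

```python
import xgboost as xgb

# Illustrative hyperparameters only; see Table 2 for the tuned values.
model = xgb.XGBRegressor(
    n_estimators=500,   # number of boosted CART learners
    learning_rate=0.1,
    max_depth=6,
    gamma=0.1,          # per-leaf penalty: the gamma*T term in Omega(f)
    reg_lambda=1.0,     # L2 penalty on leaf weights: the lambda*||w||^2 term
)
# model.fit(X_train, y_train); y_pred = model.predict(X_test)
```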
LightGBM
Another novel implementation of gradient-boosted decision trees (GBDT) that has been proposed to address the scalability and efficiency problems of its traditional counterpart is LightGBM (LGBM) (Ke et al., 2017). Unlike traditional GBDT, which entails the time-consuming process of scanning all data samples to estimate the information gain of all possible split points for each tree node, LGBM proposes two novel techniques called gradient-based one-side sampling (GOSS) and exclusive feature bundling (EFB). In GOSS, only samples with large gradients are considered important and used in the estimation of information gain for split point selection. Thus, a significant proportion of data samples is excluded when estimating the information gain, with little or no impact on its accuracy. On the other hand, the EFB technique carries out the NP-hard problem of bundling mutually exclusive features (i.e., features that rarely take nonzero values simultaneously) to reduce the number of features, with negligible impact on split point determination accuracy. Although used for a regression problem in this study, LGBM is suitable for all supervised learning problems. Further details on LGBM can be found in (Ke et al., 2017).
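A hedged usage sketch with LightGBM's native training API is shown below; GOSS and EFB operate internally, so no explicit calls are needed, and the parameter values are illustrative, not the tuned values from Table 2.

```python
import lightgbm as lgb

params = {
    "objective": "regression",
    "learning_rate": 0.05,
    "num_leaves": 31,    # leaf-wise tree growth is LightGBM's default strategy
}
# Assuming X_train, y_train and X_test are already prepared:
# train_set = lgb.Dataset(X_train, label=y_train)
# booster = lgb.train(params, train_set, num_boost_round=500)
# y_pred = booster.predict(X_test)
```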
CatBoost
Similar to the aforestated GBDT algorithms, CatBoost (CBT) is also a machine learning algorithm that leverages gradient boosting on decision trees. CBT is a unique GBDT implementation known for its categorical feature handling capability (Dorogush et al., 2018). The two main algorithmic advances introduced in CBT are the implementation of ordered boosting, a permutation-driven alternative to the classic algorithm, and an innovative algorithm for processing categorical features. Both techniques were created to fight a prediction shift caused by a special kind of target leakage present in all previously existing implementations of gradient-boosting algorithms. Likewise, CBT has the advantage of using a new scheme for leaf value calculation when selecting tree structures, which greatly alleviates the problem of overfitting. Although used for a regression problem in this study, CBT is suitable for all supervised learning problems. Extensive details on CBT can be found in (Dorogush et al., 2018).
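A hedged usage sketch of the CatBoost Python API follows; ordered boosting and categorical handling are internal to the library, and since the concrete data set used here is purely numeric, no categorical feature indices are passed. Hyperparameter values are illustrative.

```python
from catboost import CatBoostRegressor

# Illustrative hyperparameters; a cat_features argument would list
# categorical column indices if the data had any (ours are numeric).
model = CatBoostRegressor(iterations=500, learning_rate=0.05, depth=6,
                          verbose=0)
# model.fit(X_train, y_train); y_pred = model.predict(X_test)
```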
Data Description
The quaternary concrete data applied in this study are experimental results obtained from (Lichman, 2013). The compressive strength, being the most important property of concrete, should be accurately and reliably modelled for a quaternary concrete. Thus, the data have been carefully selected to cover the compressive strength for a wide range of ages, from 1 to 365 days. To the best of our knowledge, these data constitute the largest and most widely used data set for compressive strength estimation. Hence, its popularity makes the results of this experiment comparable to a wide range of previous studies. The variables used as input in the modelling are age (days) and the proportions of cement (kg/m³), GGBS (kg/m³), FA (kg/m³), water (kg/m³), superplasticizer (kg/m³), fine aggregate (kg/m³) and coarse aggregate (kg/m³). Fig. 1 presents a visual distribution of each feature. The numerical values of the basic statistics of the features of the 1030 data samples are also presented in Table 1. The statistics of the data set show the mean, standard deviation, minimum value, lower quartile, middle quartile (median), upper quartile, and maximum value to indicate consistency and suitability for use in this study.
In addition, a correlation analysis of all the input variables against the output, the compressive strength, is also presented to understand how changes in each input variable bring about corresponding changes in the output. The correlation coefficient (CC) was used to assess the sensitivity of each component (feature) of the concrete mixture to the compressive strength (MPa) (Mustapha et al., 2022; Salami et al., 2021). From Fig. 2, it can be observed that the input variables (cement, GGBS, fly ash, water, superplasticizer, coarse aggregate, fine aggregate and age) have varying degrees of correlation with the output. Four of the input variables (cement, GGBS, superplasticizer and age) are positively correlated with the output, whereas the remaining four (fly ash, water, coarse aggregate and fine aggregate) are inversely correlated. Positive correlation here implies that an increase or decrease in these input variables results in a corresponding increase or decrease in the compressive strength, respectively. On the other hand, an increase in the inversely correlated variables leads to a decrease in the compressive strength of concrete, and vice versa.
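A minimal sketch of this correlation screening is given below, assuming the 1030-sample data set is loaded as a pandas DataFrame; the file and column names are assumptions for illustration.

```python
import pandas as pd

df = pd.read_csv("concrete_data.csv")          # hypothetical file name
cc = df.corr(numeric_only=True)["strength"]    # 'strength' column name assumed
print(cc.drop("strength").sort_values(ascending=False))
# Positive: cement, GGBS, superplasticizer, age; negative: the remaining four
```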
Experimental Setup
The steps involved in the experimental setup of this research are depicted in Fig. 3, following the statistical description of each variable of the data set given in the preceding section. Cross validation is often used to assess the generalization capability of models in ML by splitting a given data set into two parts, where one portion is used for model training and the other is used to test how well the trained model is likely to generalise to unseen data. However, due to the varying ratios of training-test splits that have been reported in the literature, the performance of GBR, XGB, LGBM and CBT with optimized hyperparameters is initially examined across five training-test ratios: 90:10, 85:15, 80:20, 75:25 and 70:30. The experimental results of this process are presented and discussed in Sect. 5.2. The hyperparameter optimization for each model is carried out using only the training split of the data set to ensure that each model does not have access to the test partition prior to testing, as in real-life applications of machine learning. Each of the gradient-boosting algorithms considered in this study has a wide range of tuneable hyperparameters for optimal model performance; however, only a few have been selected for optimization. An exhaustive search over every possible combination of values within a specified range for each selected hyperparameter is used to train each model using fivefold cross validation. In other words, the training data are further divided into 5 equal partitions, each of which is, respectively, used to test the performance of a model trained with the remaining four partitions using one combination of hyperparameters at a time. The combination of hyperparameters that produces the best (lowest) average mean squared error over this process is deemed the optimal set of model parameters, which is then used to train the model on the entire training set before testing with the test partition that was initially set aside. As reported by Nguyen-Sy et al. (2020), increasing the number of estimators is similarly found to generally result in improved model performance. Hence, a search space of 10 to 1000 estimators is considered in this study.
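The tuning loop just described can be sketched as follows with Scikit-learn, assuming a feature matrix X and target y; the grid shown is a toy subset of the full search space in Table 2.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# 90:10 split; tuning only ever sees the training partition.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10,
                                                    random_state=0)
search = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid={"n_estimators": [10, 100, 500, 1000],
                "learning_rate": [0.05, 0.1, 0.2]},
    cv=5,                               # fivefold cross validation
    scoring="neg_mean_squared_error",   # lowest average MSE wins
)
search.fit(X_train, y_train)
best_model = search.best_estimator_     # refit on the whole training split
```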
Table 2 shows the hyperparameter search space and the optimal combination of hyperparameters for the 90:10 training-test split for all models. The model trained with optimal parameters is then evaluated using the evaluation metrics described in Sect. 3.3. All experiments were performed using the Python programming language. The Scikit-learn (Pedregosa et al., 2011) implementation of the gradient-boosting regressor was used for the GBR model, whereas the official Python implementations of XGB, LGBM and CBT were used for the respective model implementations.
Evaluation Metrics
To evaluate the performance of the machine learning models developed in this study, widely accepted statistical metrics are used, namely the coefficient of determination (R²), root mean squared error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE):

$$R^2 = 1 - \frac{\sum_{m=1}^{n} (y_m - \hat{y}_m)^2}{\sum_{m=1}^{n} (y_m - \bar{y})^2}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{m=1}^{n} (y_m - \hat{y}_m)^2}$$

$$\mathrm{MAE} = \frac{1}{n} \sum_{m=1}^{n} \left| y_m - \hat{y}_m \right|$$

$$\mathrm{MAPE} = \frac{1}{n} \sum_{m=1}^{n} \frac{\left| y_m - \hat{y}_m \right|}{\max(\varepsilon, y_m)}$$

where $y_m$ is the experimental output, $\hat{y}_m$ is the model-estimated output, $\bar{y}$ is the mean of the experimental output, and n is the number of samples. In MAPE, ε stands for an arbitrarily small positive constant to avoid division by zero when $y_m$ is zero. For each of MAPE, RMSE and MAE, the lower the value, the better the model. On the contrary, achieving an R² value close to 1 is the goal of the learning algorithm, i.e., the closer the R² value is to 1, the better. A baseline model which always predicts the mean of the experimental output $\bar{y}$ will have an R² value of 0, whereas a model worse than the baseline will produce a negative R² value.
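The four metrics can be computed directly, as in the sketch below, with eps guarding the MAPE denominator as defined above.

```python
import numpy as np

def scores(y, y_hat, eps=1e-8):
    """R2, RMSE, MAE and MAPE exactly as defined above."""
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    mae = np.mean(np.abs(y - y_hat))
    mape = np.mean(np.abs(y - y_hat) / np.maximum(eps, y))
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "MAPE": mape}
```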
In addition to the results based on these evaluation metrics, a ranking test using Friedman's test (Friedman, 1940) is also carried out to test the null hypothesis that the means of the results of the gradient-boosting ensemble methods are the same at a significance level of 0.05. If this null hypothesis is rejected, Holm's test (Holm, 1979) is performed as a post-hoc analysis of the pairwise comparison of the performance of these methods to establish whether one is significantly better. The null hypothesis of Holm's test is that the means of the results of a pair of groups are equal. All statistical analyses were carried out on the STAC web platform for statistical analysis (Rodríguez-Fdez et al., 2015).
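Although the study used the STAC web platform, an equivalent offline sketch of the Friedman-then-Holm procedure is possible with SciPy and statsmodels, assuming arrays of per-repetition scores for each model; the pairwise Wilcoxon test stands in here for the platform's pairwise comparison.

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

# results: model name -> array of 100 per-repetition test scores (assumed)
results = {"GBR": gbr_r2, "XGB": xgb_r2, "LGBM": lgbm_r2, "CBT": cbt_r2}

stat, p = friedmanchisquare(*results.values())
if p < 0.05:                                   # reject "all means are equal"
    pairs = list(combinations(results, 2))
    raw_p = [wilcoxon(results[a], results[b]).pvalue for a, b in pairs]
    reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
    for (a, b), q, r in zip(pairs, adj_p, reject):
        print(f"{a} vs {b}: p={q:.5f} -> {'rejected' if r else 'accepted'}")
```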
Model Performance Across Varying Training-Test Splits
As hinted in Sect. 3.2, the lack of a globally accepted training-test split ratio inspired a preliminary study on five popular training-test ratios: 90:10, 85:15, 80:20, 75:25 and 70:30 (e.g., 75:25 implies that 75% of the data set is used for training, while the remaining 25% is used for testing). For each training-test ratio and learning algorithm, hyperparameter optimization is first carried out as described in Sect. 4.2 before model training and testing for estimation of compressive strength. The training and test performance of GBR, XGB, LGBM and CBT for the different training-test splits in terms of RMSE, R², MAPE and MAE is presented in Fig. 4. As expected, the training performance of the models for the different training-test splits is generally better than their respective test performance across the evaluation metrics. However, being the true measure of the performance of the models, the test performances are relatively impressive given the marginal difference between the training and test scores. The general trend from Fig. 4 shows that as the test fraction of the training-test ratio increases, the models' respective performance tends to decrease across the evaluation metrics. Moreover, unlike the remaining training-test ratios, 90:10 consistently produced the best performance across the evaluation metrics for each learning algorithm, corroborating what was reported in (Salami et al., 2021). Hence, the result of the 90:10 training-test ratio for each of GBR, XGB, LGBM and CBT is selected and discussed in detail in the next section. The means of the training and test scores for each model (with standard deviations) over the different ratios are also presented for each evaluation metric in Table A of the supplementary material.
Performance Comparison of Best Performing Model for Each Algorithm
Table 3 presents the training and test scores based on the evaluation metrics for compressive strength estimation using the ML methods under study. The best result for each metric is highlighted in bold. In terms of R², which measures how well the models approximate the experimental compressive strengths of each concrete mixture, the training and test performances of GBR (0.9950 and 0.9731), XGB (0.9909 and 0.9764), LGBM (0.989 and 0.9745) and CBT (0.993 and 0.9838) are, respectively, very impressive given the small generalization gaps of 0.0219, 0.0145, 0.0145 and 0.0092 between the training and test performances of the respective models. This implies that despite fitting the training data to near perfection, the models are still able to generalize their training performance quite well. Comparatively, however, the test R² score of 0.9838 achieved by CBT is better than the 0.9731, 0.9764 and 0.9745 produced by GBR, XGB and LGBM, respectively. This indicates a performance improvement of 1.1%, 0.75% and 0.95% over the trio, respectively.

A comparison of the experimental and estimated compressive strengths by the gradient-boosted ML models is presented in Fig. 5, which shows scatter plots of the estimated compressive strengths plotted against the experimental ones, with the respective lines of best fit for the training and test phases of each of the GBR, CBT, XGB and LGBM models. The plots intuitively illustrate how correlated the model estimations are to the experimental values. The corresponding R² (i.e., coefficient of determination) value on each plot summarises its performance with a single score. In general, the plots show that despite producing more correlated training estimations of the compressive strength, GBR produced the least correlated estimates in the test phase. The test compressive strength estimations of CBT are the most correlated with the experimental values, followed by XGB, then LGBM. Amongst these models, the GBR model produced the largest differences between the training and test scores, hence the least generalization despite fitting the training data best. On the other hand, the CBT model generalizes best while also producing the best test performance across the different metrics. Although the GBR model fits the training data best, in terms of test performance, which is the true measure of model performance, CBT produced a performance superior to GBR, XGB and LGBM across all the error-based evaluation metrics, with performance improvements ranging from 17% to 22%, 16% to 20.4% and 12% to 20% in terms of RMSE, MAE and MAPE, respectively.

Presented in Figs. 6, 7, 8 and 9 are the superimposed line plots of experimental and estimated compressive strengths for the training and test phases (a and b) alongside the corresponding error plots (c and d) for each of the considered gradient-boosting models. The errors for the training and test phases of each model are obtained by subtracting the estimated value of compressive strength for each data sample from its corresponding experimental value in the data sets. Since the aim of the model is to estimate the actual compressive strength as closely as possible, the smaller the deviation of the error plot from zero, the better.
It can be observed from the test error plots (Fig. 6d) that the CBT model shows the least deviation, as it deviates by an error of more than |5| on only two occasions (sample indexes 66 and 87), compared to seven, five and three cases for GBR (sample indexes 35, 64, 66, 69, 71, 75 and 86, as in Fig. 9d), XGB (sample indexes 17, 35, 58, 66 and 69, as shown in Fig. 7d) and LGBM (sample indexes 35, 66 and 75, as in Fig. 8d), respectively.
It is noteworthy that while all the models exceeded the |5| error mark on sample index 66, the GBR model notably deviated by |11| on this sample, making it the least performing model in this regard.
Average Performance of Models
To further ensure that the performance of the gradient-boosted machine learning algorithms compared in this study is not by chance, the same experiment was repeated 100 times for each of the models using the same set of optimal hyperparameters presented in Table 2. The original data were repeatedly split into training-test partitions for the different repetitions of the experiment using different random seeds, to ensure that different sets of training and test samples were used each time over the whole process. The mean and standard deviation of the training and test performances of each of GBR, XGB, LGBM and CBT over the 100 repetitions are presented in Fig. 10 for each statistical evaluation measure. As expected, and hinted earlier, the average training performance of each model is generally better than the corresponding average test performance across the evaluation metrics, with GBR mostly performing best in this regard, followed by CBT.
Similarly, the training performance shows minimal deviation from the respective means compared to the test performance. In terms of test performance, CBT (R² = 0.9506, RMSE = 3.6051, MAE = 2.2462, MAPE = 0.0774) generally produced the best average performance based on all evaluation metrics, whereas GBR (R² = 0.9444, RMSE = 3.8406, MAE = 2.4247, MAPE = 0.0836) ranks lowest in all but MAE and MAPE, where it shows comparable or slightly better performance than LGBM (R² = 0.9467, RMSE = 3.7644, MAE = 2.4386, MAPE = 0.0862) and XGB (R² = 0.9468, RMSE = 3.7638, MAE = 2.4371, MAPE = 0.0854) on average. Although XGB marginally outperforms LGBM on the specific result presented in Table 3, the average performances of XGB and LGBM are mostly similar, with XGB performing slightly better over the hundred repetitions. Overall, CBT ranks best on average, followed by XGB, LGBM, then GBR across all the evaluation measures.
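A sketch of this repetition protocol is given below, assuming X, y and a tuned best_params dictionary from the search step; each seed yields a fresh 90:10 split, shown here for the CBT model.

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split

test_r2 = []
for seed in range(100):                         # 100 re-splits with new seeds
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10,
                                              random_state=seed)
    model = CatBoostRegressor(**best_params, verbose=0).fit(X_tr, y_tr)
    test_r2.append(model.score(X_te, y_te))     # R2 on the held-out split
print(np.mean(test_r2), np.std(test_r2))
```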
Statistical Analysis of Results
In addition, a statistical analysis of the obtained results in terms of R² and RMSE is presented here. Using the test results from the 100 repetitions of experiments from the preceding section, the null hypothesis of Friedman's test is rejected given p-values of 0.00000 (less than the significance level of 0.05) for both the R² and RMSE results, respectively. The Friedman's ranking tests for both R² and RMSE rank the gradient-boosting ensemble algorithms in the same descending order: CBT > XGB > LGBM > GBR. While this ranking signifies that CBT performed best, the post-hoc Holm's test rejects its null hypothesis for all pairwise combinations except LGBM vs XGB for both evaluation metrics. This shows that, although XGB ranks higher than LGBM, the difference between them is not statistically significant. Conversely, CBT is significantly better than all the other methods (Table 4).
Feature Importance
Being able to understand or interpret the decision, or the cause of the decision, a machine learning model makes is integral to improved human understanding of the data, the model and the relationship between them. The quest for this has paved the way for a whole new active area of research known as interpretable machine learning (Murdoch et al., 2019). Similarly, this section seeks to provide insight into the decisions of each of the machine learning models considered in this study relative to the data set. While earlier works on compressive strength estimation rarely explored this line of research, there has been a notable increase in studies doing so, some of which have investigated the importance of input features in the prediction of the mechanical properties of pervious concrete using extreme gradient boosting and support vector regression as well as Adaboost (Feng et al., 2020; Güçlüer et al., 2021; Mustapha et al., 2022). In this study, the feature importance function, which can be called on each of the fitted models of the Python implementations of CatBoost, LightGBM, XGBoost and the gradient-boosting regressor, is used to get the contribution of each input feature to the respective models. Figs. 11, 12, 13 and 14, respectively, present a ranking of the input features for CBT, LGBM, XGB and GBR in descending order of importance. There is consensus amongst all the models that the top three most important features for the estimation of compressive strength are the age (in days) of each concrete mixture, followed by the quantity of cement (in kg/m³), then water (in kg/m³). This confirms what has been reported in earlier studies, that the compressive strength of concrete increases with time (Abdulkareem et al., 2019; Sharmila & Dhinakaran, 2016). At the bottom end of the feature importance ranking is coarse aggregate (in kg/m³), with the least relevance to the predictive performance of XGB and LGBM, whereas the fly ash (in kg/m³) component of each mixture has the least contribution to the predictive decisions of the GBR and CBT models. These findings further corroborate what has been reported in pertinent works regarding the importance of age, cement and water quantity in the estimation of the compressive strength of concrete (Cakiroglu et al., 2023; Feng et al., 2020; Güçlüer et al., 2021).
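Since all four libraries expose a Scikit-learn-style interface, the ranking can be reproduced as sketched below for any of the fitted models, assuming a fitted best_model from the tuning step.

```python
import pandas as pd

feature_names = ["cement", "GGBS", "fly_ash", "water", "superplasticizer",
                 "coarse_aggregate", "fine_aggregate", "age"]
importances = pd.Series(best_model.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))  # age, cement, water on top
```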
Sensitivity Analysis
A sensitivity analysis of all the input variables employed in estimating the compressive strength is presented here to understand how changes in each input variable bring about corresponding changes in the estimated model outputs. It is noteworthy that while the correlation analysis presented in Fig. 2 can be viewed as a form of sensitivity analysis, it only represents the static relationship between each input variable and the output, irrespective of the model. Here, the relationship between the input variables and the estimated output from the perspective of each model is presented. This is achieved by showing the marginal effect each feature has on the predicted outcome of the GBR, CBT, LGBM and XGB models with the aid of partial dependence plots (PDP) (Hastie et al., 2009).
The PDP is a global method that considers all instances and gives a statement about the global relationship of a feature with the predicted outcome. In the current study, each gradient-boosting ensemble model has been fitted to estimate the compressive strength of concrete mixtures, and PDP is used to visualize the relationships each model has learnt, as presented in Fig. 15a-d for CBT, GBR, LGBM and XGB, respectively. It is interesting to note that the relationship between each input feature and the estimated output (compressive strength) exhibits a similar trend across the gradient-boosting models. For instance, the relationship between cement quantity and the estimated compressive strength is linear for all models, with increasing cement quantity yielding a corresponding increase in compressive strength across the models. A similar pattern can be observed in relation to the age of the concrete mixtures, albeit the compressive strength plateaus after about 100 days, indicating no significant increase in the compressive strength of the mixtures after this period. While the range of training compressive strength values (which is 2.33-82.6 MPa in this study) used for model building is highly influential to model estimations, representative works such as (Abdulkareem et al., 2019; Sharmila & Dhinakaran, 2016) alluded to slower increases in the compressive strength of concrete mixtures after the first 3 months. On the other hand, an inverse relationship exists between the model estimations and water quantity across the models, with an increase in water quantity from 150 to 200 kg/m³ resulting in a decrease in compressive strength. Interestingly, the estimated compressive strength does not decrease across the models when water quantity increases beyond 200 kg/m³. For other input features, such as fine aggregate and blast furnace slag, the estimated compressive strength slowly and marginally decreases as the former increases, while a marginally decreasing trend can also be observed as the latter increases. The intuitive nature of the input-output relationships shown by the models reflects how well the models have learnt from the given data.
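PDPs of this kind can be produced with scikit-learn's PartialDependenceDisplay; the paper does not state which plotting tool was used, so the sketch below is only one way to reproduce such panels, and the data and the simple synthetic target are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

features = ["cement", "slag", "fly_ash", "water",
            "superplasticizer", "coarse_agg", "fine_agg", "age"]
rng = np.random.default_rng(2)
X = rng.random((200, 8))  # placeholder mixture data
y = 30 * X[:, 0] - 20 * X[:, 3] + 10 * np.log1p(X[:, 7]) + rng.normal(0, 1, 200)

model = GradientBoostingRegressor().fit(X, y)
# One PDP panel per feature of interest: cement (0), water (3), age (7).
PartialDependenceDisplay.from_estimator(
    model, X, features=[0, 3, 7], feature_names=features
)
plt.tight_layout()
plt.savefig("pdp_cement_water_age.png")
```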
Comparison with Previous Works
Given that compressive strength is one of the most important structural material properties in concrete research and design, several studies have developed intelligent approaches for its accurate estimation over the past years. A considerable number of these studies have used either part or the whole of the Lichman (2013) data set used in this research. Hence, it is considered worthwhile to compare the results obtained herein with the best results that have been reported in pertinent studies. Admittedly, ensuring an objective comparison of performance with previous studies can be challenging, given the differences in statistical evaluation metrics, training-test split ratios (e.g., some may use a 90:10 ratio, while others may use 70:30), sample size (e.g., some may use a subset of the data set, while others use the complete 1030 samples) and the general experimental setup. Notwithstanding, the comprehensive nature of the experiments carried out in this study naturally answers some of these concerns. Table 5 presents details of the representative studies grouped by experimental design and algorithm: the best results from studies in which experiments were conducted using k-fold cross validation and the average performance reported are placed under the average performance category, whereas the best results from studies that evaluate their models based on a training-test split are grouped under the cross validation category and compared with the results presented in Table 3. A general observation from Table 5 is the extensive use of ensemble models and the paucity of gradient-boosted models in compressive strength estimation of quaternary blend concrete. In terms of average performance, the best performance found in relevant studies was reported in Feng et al. (2020), where the proposed Adaboost model yielded R² = 0.952, RMSE = 4.856 MPa, MAE = 3.205 MPa and MAPE = 0.114. Compared to the best average performance obtained in this study, the CBT model produced a better result in all the evaluation metrics (25.76% RMSE, 29.92% MAE and 32.46% MAPE improvements, respectively) except in terms of R², where the reported score of 0.952 is marginally better than the average R² of 0.951 obtained over 100 repetitions (about a 0.1% difference). It should also be noted that the average performances of GBR, XGB and LGBM in terms of RMSE, MAE and MAPE are better than what was reported in Feng et al. (2020). Likewise, the best cross validation performance found in the literature is R² = 0.982, RMSE = 2.20 MPa, MAE = 1.64 MPa and MAPE = 0.0678, also reported in Feng et al. (2020). In comparison, the best results obtained in this study, with R², RMSE, MAE and MAPE values of 0.984, 2.071 MPa, 1.597 MPa and 0.063, are better, with performance improvements of 0.2%, 5.86%, 2.62% and 0.48%, respectively.
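As a small sanity check, the relative-improvement percentages quoted above can be reproduced directly from the reported cross-validation numbers; the snippet below hard-codes only the RMSE and MAE values from Feng et al. (2020) and from this study.

```python
# Relative improvement of this study's CBT results over the best previously
# reported cross-validation results (Feng et al., 2020), as percentages.
previous = {"RMSE": 2.20, "MAE": 1.64}    # MPa
current = {"RMSE": 2.071, "MAE": 1.597}   # MPa

for metric in previous:
    gain = (previous[metric] - current[metric]) / previous[metric] * 100
    print(f"{metric}: {gain:.2f}% improvement")  # RMSE: 5.86%, MAE: 2.62%
```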
The impressive performance of the gradient-boosting models presented in this study generally reflects the robustness of each model to different evaluation approaches for compressive strength estimation of quaternary blend concrete. However, it should be noted that the performance reported in this study is limited to 1030 concrete mixes with ages ranging from 1 to 365 days.
Conclusion
A comparative analysis of the prediction of compressive strength of quaternary blend concrete with gradient-boosted ensembles is presented in this study. Four popular gradient-boosting implementations, namely the gradient-boosting regressor (GBR), light gradient-boosting model (LGBM), extreme gradient boosting (XGB) and CatBoost (CBT), were, respectively, used to build models for compressive strength estimation, and results based on an out-of-sample test set as well as average cross validation are presented. Four popular evaluation metrics were used for performance evaluation, with results showing that CBT outperformed the other methods across all the metrics, with values of 0.9838, 2.0709, 1.5966 and 0.0629 for R², RMSE, MAE and MAPE, respectively. An analysis of the features most important to model performance also shows that the age and the quantities of cement and water in the concrete mixture have the highest contributions to the compressive strength estimation of each model. In addition, a sensitivity analysis of the model predictions with varying values of input features confirms the importance of these features, notably showing no significant increase in compressive strength estimations after the first 100 days. Moreover, a comparison of results with findings from previous studies also shows the superiority of CBT and the other gradient-boosting models in estimating compressive strength. CBT not only outperforms the other models on a single evaluation with an out-of-sample test set but also in terms of average performance. It is hoped that these findings will further increase the awareness of the predictive capabilities of CBT amongst researchers and practitioners and thus increase its use alongside the growing computational tools at their disposal.
This study, though comprehensive, is not without limitations. In relation to the data set, though it is a fairly large and representative one for concrete property estimation, we acknowledge that machine learning models are only as good as their training data. Hence, the findings reported are based on the range of values reported in Sect. 3.1. Besides, the data set is not representative of all types of concrete mixtures, such as rubberized recycled aggregate concretes and heat-treated concretes (Cakiroglu et al., 2023; Chen et al., 2021). These are viable areas for future investigation.
In addition, the relentless quest for improved accuracy of concrete property estimation, and specifically compressive strength estimation, has led to innovative learning methods, such as advanced deep learning algorithms with specialised loss functions (Hoang, 2023) and metaheuristic-optimized DNNs (Ranjbar et al., 2022), as well as ensembles of ensemble models (Lee et al., 2023). While these methods have potential shortcomings relating to computational cost and overfitting, future works will explore feature selection, using only the top-ranking features that contribute most to each model's performance, as shown in the feature importance and sensitivity analysis.
Fig. 1 Boxplots of the distribution of compressive strength and input features of the data set

A further pre-processing step (Sect. 3.1) is data normalization. This is a common pre-processing stage in most machine learning pipelines to avoid numerical overflow while keeping the input variables within a uniform range. Due care has been taken to split the data into training and test partitions before data normalization to avoid data leakage (O'Neil & Schutt, 2013). All input variables were normalized such that the values are within the range of -1 and 1.
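A minimal sketch of this leakage-free scaling is given below, assuming scikit-learn's MinMaxScaler as one way to map the inputs to [-1, 1]; the exact implementation used by the authors is not stated, and the data here are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(3)
X, y = rng.random((1030, 8)) * 1000, rng.random(1030) * 80  # placeholder data

# Split first, then fit the scaler on the training partition only, so no
# statistics from the test set leak into preprocessing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)
```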
Fig. 4 Training and test performance of ML models with different training-test splits
Fig. 5 Comparison of experimental and estimated compressive strength for the training and test phases of each model
Fig. 6 Superimposed line plots of experimental and estimated compressive strength for a training and b test phases and corresponding error plots over the c training and d test data for CatBoost
Fig. 7 Superimposed line plots of experimental and estimated compressive strength for a training and b test phases and corresponding error plots over the c training and d test data for LightGBM
Fig. 8 Superimposed line plots of experimental and estimated compressive strength for a training and b test phases and corresponding error plots over the c training and d test data for XGBoost
Fig. 10 Mean (± standard deviation) performance of gradient-boosted models over 100 repetitions of experiments
Fig. 15 Partial dependence plots for the a CBT, b GBR, c LGBM and d XGB compressive strength estimation models
Table 1
Descriptive statistics of variables used in modelling
Table 2
Optimal hyperparameters for gradient-boosted models
Table 3
Training and testing performance of the models (↑ higher is better, ↓ lower is better). Also listed in Table 3 for each model are the respective training and test performances in terms of RMSE, MAE and MAPE. It is worthy of note that, unlike R², these statistical evaluation measures seek to approximate the errors between the experimental values and model estimations, as described in Sect. 4.3. Based on these metrics, the respective training and test performances of GBR (RMSE = 1.1826 MPa and 2.6642 MPa; MAE = 0.4259 MPa and 1.9013 MPa; MAPE = 0.0148 and 0.0717), XGB (RMSE = 1.6016 MPa and 2.4972 MPa; MAE = 0.9246 MPa and 1.9032 MPa; MAPE = 0.033 and 0.0744), LGBM (RMSE = 1.7578 MPa and 2.5963 MPa; MAE = 1.0599 MPa and 2.0067 MPa; MAPE = 0.0392 and 0.0788) and CBT (RMSE = 1.4045 MPa and 2.0709 MPa; MAE = 0.7218 MPa and 1.5966 MPa; MAPE = 0.0256 and 0.0629) are very impressive given the respective generalization gaps of 1.4816 MPa, 0.8956 MPa, 0.8385 MPa and 0.6664 MPa in terms of RMSE, 1.4754 MPa, 0.9786 MPa, 0.9468 MPa and 0.8748 MPa in terms of MAE, as well as 0.0569, 0.0414, 0.0396 and 0.0373 in terms of MAPE.
Table 4
Results of pairwise post-hoc analysis using Holm's test
Table 5
Comparison with previous studies | 2024-04-02T13:03:45.034Z | 2024-04-02T00:00:00.000 | {
"year": 2024,
"sha1": "dc6ec68bd6f51c8505a16ac41dbaf629c38d32cb",
"oa_license": "CCBY",
"oa_url": "https://ijcsm.springeropen.com/counter/pdf/10.1186/s40069-023-00653-w",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "51e8d92b5256efb8fa091698d570a6f20102bca7",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Computer Science"
],
"extfieldsofstudy": []
} |
251771796 | pes2o/s2orc | v3-fos-license | Deconvolution of tumor composition using partially available DNA methylation data
Background Deciphering proportions of constitutional cell types in tumor tissues is a crucial step for the analysis of tumor heterogeneity and the prediction of response to immunotherapy. In the process of measuring cell population proportions, traditional experimental methods have been greatly hampered by the cost and extensive dropout events. At present, the public availability of large amounts of DNA methylation data makes it possible to use computational methods to predict proportions. Results In this paper, we proposed PRMeth, a method to deconvolve tumor mixtures using partially available DNA methylation data. By adopting an iteratively optimized non-negative matrix factorization framework, PRMeth took DNA methylation profiles of a portion of the cell types in the tissue mixtures (including blood and solid tumors) as input to estimate the proportions of all cell types as well as the methylation profiles of unknown cell types simultaneously. We compared PRMeth with five different methods through three benchmark datasets and the results show that PRMeth could infer the proportions of all cell types and recover the methylation profiles of unknown cell types effectively. Then, applying PRMeth to four types of tumors from The Cancer Genome Atlas (TCGA) database, we found that the immune cell proportions estimated by PRMeth were largely consistent with previous studies and met biological significance. Conclusions Our method can circumvent the difficulty of obtaining complete DNA methylation reference data and obtain satisfactory deconvolution accuracy, which will be conducive to exploring the new directions of cancer immunotherapy. PRMeth is implemented in R and is freely available from GitHub (https://github.com/hedingqin/PRMeth). Supplementary Information The online version contains supplementary material available at 10.1186/s12859-022-04893-7.
Cell type proportion prediction is an important task for multi-omic data analysis and clinical studies. For example, accounting for cell type proportions has proven helpful for Epigenome-Wide Association Studies (EWAS) [6], and the composition of infiltrating immune cells in tumor tissues is predictive of the response to checkpoint inhibitor immunotherapy [7].
Currently, experimental techniques including flow cytometry and single-cell techniques such as Drop-seq [8], 10X Genomics, and sci-RNA-seq [9] have been used to study cellular components in complex tissues, but they are costly [10] and sensitive to technical changes during cell isolation. Thus, in recent years, computational estimation of cellular components using gene expression or DNA methylation data has become a hot topic in computational biology [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27]. Compared to gene expression, DNA methylation has the advantage of being more stable [28], highly cell-type specific [29], and easier to measure in formalin-fixed paraffin-embedded (FFPE) tissues [30]. As a result, DNA methylation is more suitable for studying cellular components. Currently, the methods based on DNA methylation can be broadly classified into two categories: reference-based methods and reference-free methods. Among the reference-based methods, Houseman et al. [11] proposed a linear regression method (QP) based on DNA methylation, which uses quadratic programming to ensure that the regression coefficients are non-negative. Teschendorff et al. [16] developed EpiDISH, which uses non-constrained weighted linear regression rather than linear regression to reduce the weights of data points with large residuals. Altboum et al. proposed DCQ [13], which modifies the deconvolution approach into a regularized regression model to reduce the number of model parameters. Inspired by the success of CIBERSORT [14] in gene expression decomposition, Chakravarthy et al. [10] analyzed the cell type composition of complex mixtures using support vector regression based on DNA methylation data and obtained more accurate estimates. The latest reference method based on DNA methylation is Emeth [21], which uses a mixture distribution based on ICeD-T [20] to identify CpG sites whose DNA methylation in tumor samples is inconsistent with the reference methylation profiles and to reduce the contribution of these aberrant sites in cell type abundance estimation.
The general limitation of the above reference methods is that they require DNA methylation profiles of specific cell types as input, but in practice, it is difficult to obtain DNA methylation profiles of all cellular components in tumor tissues [31]. To overcome this limitation, many researchers have developed reference-free methods. For example, James et al. [22] proposed a combination (FAST-LMM-eWasher) of linear mixed models and principal components to correct the composition of cell types automatically. Houseman et al. [23] applied an iterative quadratic programming framework (RF) to DNA methylation for cell type analysis. Motivated by previous research, Lutsik et al. [26] developed MeDeCom by combining constrained non-negative matrix factorization with a new biologically relevant regularization function. Such methods do not rely on reference information and aim to estimate molecular profiles and proportions of all cell types simultaneously, unfortunately, their prediction accuracies are far from satisfactory. However, in real clinical practice, gene expression or DNA methylation is often available for only a small fraction of cell types, and reference information for the remaining cell types is unknown. To overcome these limitations, easily available data for a portion of cell types in a tumor mixture can be used as a reference to deconvolute the entire tumor mixture.
In this paper, we proposed a method for partially-reference cell type decomposition using DNA methylation data (PRMeth). PRMeth used an iteratively optimized non-negative matrix factorization framework, which took DNA methylation profiles of a portion of the cell types in the tissue mixtures (including blood and solid tumors) as input to estimate the proportions of all cell types as well as the methylation profiles of unknown cell types simultaneously. Based on three benchmark datasets, we compared PRMeth with five different methods (i.e., Reference-Free (RF) [23], Quadratic Programming (QP) [11], CIBERSORT (CBS) [14], Digital cell quantification (DCQ) [13], and Epigenetic Dissection of Intra-Sample Heterogeneity (EpiDISH) [16]). The results showed that PRMeth outperformed the other five methods. PRMeth was then applied to four types of tumors from The Cancer Genome Atlas (TCGA) [32] database, i.e., skin cutaneous melanoma (SKCM), invasive breast carcinoma (BRCA), acute myeloid leukemia (LAML), and thymoma (THYM). The experimental results revealed that immune cell proportions estimated by PRMeth were in good agreement with previous studies and PRMeth could provide new insights into tumor heterogeneity and immunotherapy.
Simulation data
The simulation dataset was constructed from five immune cell types (neutrophils, CD4+ T cells, CD8+ T cells, natural killer (NK) cells, and CD19+ B cells) (GSE88824), one non-small cell lung cancer cell line (A549), and one normal human bronchial epithelial cell line (NHBEC) (GSE92843), all available from the Gene Expression Omnibus (GEO) [33]. To obtain the methylation profiles of the cell types, we loaded their respective IDAT files using champ.load (ChAMP package in R) and filtered out 79,818 probes, namely those with a detection p value > 0.01 or a beadcount < 3 in at least 5% of samples, non-CpG probes, probes overlapping SNPs, multi-hit probes, and probes located on the X or Y chromosome. Then, the filtered data were normalized with champ.norm and their batch effects were eliminated with champ.runCombat. Finally, we obtained the methylation profiles of seven different cell types (recorded as base profiles).
Next, the base profiles were employed to generate the methylation profiles of non-small cell lung cancer (NSCLC) samples with different cell type proportions and levels of noise. In the first step, we randomly generated the proportions of all cell types for each NSCLC sample based on the Dirichlet distribution. In detail, the proportions of A549 cells, NHBEC, and immune cells are 60%, 10%, and 30%, respectively. These proportions are in accordance with the true proportions of the cell types found in NSCLC samples [25]. In the second step, we generated methylation profiles of the cell types with different levels of noise from an independent beta distribution with mean and variance inferred from the base profiles (see Results for details). In the third step, the methylation profiles of the cell types with different noise levels were linearly combined according to the above ratios to yield the methylation profiles of NSCLC samples. In the end, the methylation profiles of 100 NSCLC samples were obtained. We used the Dirichlet distribution to randomly generate proportions of cell types 20 times and thereby obtained 20 simulation datasets at each noise level. These 20 simulation datasets are used to validate the performance of the proposed method, PRMeth.
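The three-step generation can be sketched as follows. This is only an illustrative reconstruction: base stands in for the GEO-derived base profiles, the Dirichlet concentration parameters are chosen so that the expected proportions are roughly 60/10/30 as described, a single noise fraction is used for all cell types for brevity (the paper uses different fractions for tumor and normal cells), and the beta-noise parameters are obtained by moment matching.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 1000, 100                      # CpG sites, NSCLC samples
cells = ["A549", "NHBEC", "Neu", "CD4T", "CD8T", "NK", "B"]
base = rng.random((m, len(cells)))    # stands in for the GEO base profiles

# Step 1: Dirichlet proportions with expected values ~(0.60, 0.10, 0.06 x 5).
alpha = np.array([60, 10, 6, 6, 6, 6, 6], dtype=float)
H = rng.dirichlet(alpha, size=n).T    # K x n, columns sum to 1

# Step 2: beta-distributed noise around each base value; the variance is a
# fraction of the maximum possible variance mean*(1-mean).
def noisy_profiles(mean, frac):
    var = np.clip(frac * mean * (1 - mean), 1e-6, None)
    # Moment matching: solve for (a, b) of Beta(a, b) given mean and variance.
    common = mean * (1 - mean) / var - 1
    a, b = mean * common, (1 - mean) * common
    return rng.beta(np.clip(a, 1e-3, None), np.clip(b, 1e-3, None))

W = noisy_profiles(np.clip(base, 0.01, 0.99), frac=0.10)

# Step 3: linear mixing gives the simulated tumor methylation matrix.
Y = W @ H                              # m x n; values stay within [0, 1]
```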
Real data obtained from experiments
Besides the simulation dataset, we also applied our method to the following three datasets. In the first dataset, the methylation profiles of 100 mixture samples, the methylation profiles of seven types of immune cells (including CD4 + T cells, CD8 + T cells, monocytes, B cells, NK cells, neutrophils, and T regulatory cells) constituting mixture samples, and the proportions of all cell types for each sample were provided by Zhang et al. [21]. This dataset is referred to as the Zhang dataset in this paper.
In the second dataset, the methylation profiles of six whole blood samples and their constitutional cell types (including CD4 + T cells, CD8 + T cells, monocytes, B cells, NK cells, neutrophils, and eosinophils) were obtained from Chakravarthy et al. [10] via the GEO accession number GSE35069, and the proportions of each cell type were measured by flow cytometry as provided by the authors [34].
In the third dataset, the methylation profiles of skin cutaneous melanoma (SKCM), invasive breast carcinoma (BRCA), acute myeloid leukemia (LAML), and thymoma (THYM) samples were downloaded from the TCGA database. To facilitate the comparison, 100 tumor samples were randomly selected for each cancer type. As the reference for deconvolution, the methylation profiles of seven immune cells (including monocytes, dendritic cells, macrophages, eosinophils, naive T cells, CD8 + T cells, and NK cells) were obtained from Arneson et al. [35] via the GEO accession numbers GSE35069, GSE59250, and GSE71837. Meanwhile, the batch effects between the methylation profiles of tumor samples and those of immune cell types were eliminated by the ComBat function in sva package of R.
PRMeth model construction
The framework of PRMeth is illustrated in Fig. 1. It is assumed that the methylation profiles of tumor tissues are mixture signals from their constitutional cell types, where only a part of them have available methylation profiles. We proposed a non-negative matrix factorization scheme (Fig. 1A) and an iterative algorithm ( Fig. 1B) to estimate the proportions of all cell types and the methylation profiles of unknown cell types simultaneously.
We denote Y ∈ R+^(m×n) as the methylation profiles of m CpG sites in n tumor mixtures. Suppose that the tumor mixtures are made up of K cell types with certain proportions. The deconvolution model is then

Y = W1·H1 + W2·H2 + ε, (1)

where W1 ∈ R+^(m×K1) and W2 ∈ R+^(m×K2) denote the methylation profiles of K1 known cell types and K2 unknown cell types (K = K1 + K2), H1 ∈ R+^(K1×n) and H2 ∈ R+^(K2×n) denote the proportions of known and unknown cell types, respectively, and ε is an m × n error matrix. Observing that y_ij ∈ Y, w_1(ij) ∈ W1 and w_2(ij) ∈ W2 represent the DNA methylation level (i.e., beta value) of a CpG site, we have 0 ≤ y_ij, w_1(ij), w_2(ij) ≤ 1, and the proportions in each column of H1 and H2 are non-negative and sum to one. In this model, the methylation profiles Y of the mixtures and the methylation profiles W1 of the partial cell types are known, and we aimed to estimate the proportions H1 and H2 of all cell types and the methylation profiles W2 of the unknown cell types. These can be obtained by minimizing the error sum of squares, which transforms Eq. (1) into

min over (W2, H1, H2) of ||Y − W1·H1 − W2·H2||_F², (2)

where ||·||_F denotes the Frobenius norm. The minimization is solved by iteratively updating W2, H1 and H2 until convergence, where t denotes the number of iterations. In step a, we employed the RPMM [36] algorithm to initialize the methylation profiles W2 of the unknown cell types. In detail, RPMM is a clustering algorithm that clusters the methylation profiles Y of tumor samples into K2 clusters using the binary distance formula and takes the cluster centers as the initial value of W2. Furthermore, we compared RPMM with six initialization approaches, including five different clustering algorithms (i.e., canberra, euclidean, manhattan, maximum, and minkowski) and a random generation algorithm (random). As shown in Additional file 1: Figure S1, there were no significant differences between the seven methods, but RPMM outperformed the other approaches in estimating proportions on the simulation dataset.
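To illustrate the structure of this optimization, a simplified alternating non-negative least-squares sketch of Eqs. (1)-(2) is shown below. This is not the authors' implementation (PRMeth is released in R with its own update rules and RPMM initialization); here scipy.optimize.nnls solves each subproblem, W2 is initialized randomly instead of by RPMM, and the sum-to-one constraint on proportions is only approximated by renormalizing after each non-negative fit.

```python
import numpy as np
from scipy.optimize import nnls

def prmeth_sketch(Y, W1, K2, n_iter=50, seed=0):
    """Alternating NNLS for Y ~ [W1, W2] @ H with W1 fixed (illustrative only)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    K1 = W1.shape[1]
    W2 = rng.random((m, K2))                 # the paper uses RPMM clustering here
    H = np.full((K1 + K2, n), 1.0 / (K1 + K2))
    for _ in range(n_iter):
        W = np.hstack([W1, W2])
        # Update H column by column, then renormalize so proportions sum to 1.
        for j in range(n):
            H[:, j], _ = nnls(W, Y[:, j])
        H /= np.maximum(H.sum(axis=0, keepdims=True), 1e-12)
        # Update W2 row by row with the known-reference part subtracted out.
        R = Y - W1 @ H[:K1]
        for i in range(m):
            W2[i], _ = nnls(H[K1:].T, R[i])
        W2 = np.clip(W2, 0.0, 1.0)           # beta values live in [0, 1]
    return W2, H[:K1], H[K1:]                # W2, H1, H2

# e.g. W2_hat, H1_hat, H2_hat = prmeth_sketch(Y, W1, K2=3)
```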
For PRMeth, if profiles of all constitutional cell types are available (i.e., K 1 = K ), it is actually the QP method. On the contrary, if none of the constitutional cell types is known ( i.e., K 1 = 0 ), the PRMeth method turns to the RF method. Therefore, the PRMeth method is a more general framework that includes the reference-based and reference-free methods as two special cases.
CpG site selection
The total number of CpG sites in the human genome is very large. To reduce the potential noise and improve the computational efficiency, we selected CpG sites with high methylation variation in tumor samples by the coefficient of variation (c_v), defined as

c_v = σ / μ,

where σ and μ denote the standard deviation and mean of a CpG site in Y, respectively. We sorted the sites by c_v and then selected the top n with the highest c_v values as input features.
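This selection step is straightforward to express in code; the sketch below is a minimal NumPy version, with the function name and the zero-mean guard being our own additions.

```python
import numpy as np

def top_variable_sites(Y, n_top=1000):
    """Rank CpG sites by coefficient of variation c_v = sigma / mu across samples."""
    mu = Y.mean(axis=1)
    sigma = Y.std(axis=1)
    cv = sigma / np.maximum(mu, 1e-12)   # guard against all-zero sites
    return np.argsort(cv)[::-1][:n_top]  # indices of the n_top highest-c_v sites
```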
Cell type number prediction
In our method, the number K of cell types in tumor mixtures needs to be specified. The Bayesian information criterion (BIC) [37] is an important criterion for model selection that can give the optimal number of parameters in a model. Therefore, BIC was selected to identify K in the tumor mixtures. Furthermore, in order to weaken the penalty, a penalty factor λ was introduced. λ_BIC is defined by the formula

λ_BIC = N · ln(SSR / N) + λ · P · ln(N),

where N denotes the sample size, P denotes the number of model parameters, SSR denotes the residual sum of squares between the true and estimated methylation profiles of the tumor mixtures, and λ denotes the penalty factor, whose size is restricted to (0, 1). In the PRMeth model, N = n × m and P = K(n + m) − nK1, where n, m, K and K1 denote the number of tumor mixtures, the number of CpG sites, the total number of cell types, and the number of known cell types, respectively. Different K values correspond to different λ_BIC values, and the K value corresponding to the smallest λ_BIC value is the optimal number of cell types for the tumor mixtures.
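A small sketch of this criterion and the accompanying grid search is given below. Note that the exact λ_BIC formula is garbled in the extracted text; the closed form used here follows the standard regression BIC with the paper's penalty factor λ multiplying the parameter term, and should be treated as a reconstruction. The ssr_of lookup in the commented grid search is hypothetical and would come from refitting PRMeth for each candidate K.

```python
import numpy as np

def lambda_bic(ssr, n_samples, n_sites, K, K1, lam):
    """lambda-BIC as reconstructed above: N*ln(SSR/N) + lam*P*ln(N)."""
    N = n_samples * n_sites
    P = K * (n_samples + n_sites) - n_samples * K1
    return N * np.log(ssr / N) + lam * P * np.log(N)

# Hypothetical grid search over lam in {0.1, ..., 0.9} and K in (K1, K1 + 30]:
# best = min(((lam, K) for lam in np.arange(0.1, 1.0, 0.1)
#             for K in range(K1 + 1, K1 + 31)),
#            key=lambda t: lambda_bic(ssr_of[t[1]], n, m, t[1], K1, t[0]))
```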
Research design
The five methods, i.e., QP, DCQ, EpiDISH, RF, and CBS, are state-of-the-art methods for the DNA methylation deconvolution task. Among them, RF, QP, DCQ, and EpiDISH used the linear model and CBS used the most popular non-linear model (support vector regression). The two models were also adopted by the other deconvolution methods introduced in the Background section, so we compared PRMeth with the five methods.
1. RF [23], a reference-free method for solving cell type proportions and cell type methylation profiles using iterative quadratic programming; 2. QP [11], a reference-based method for solving cell type proportions using quadratic programming; 3. CBS [14], a reference-based method for inferring the proportions of tumor-infiltrating immune cells using support vector regression; 4. DCQ [13], a reference-based method for inferring the global dynamics of the number of immune cells in complex tissues using elastic net regularization. 5. EpiDISH [16], a reference-based method for estimating cell type proportions using non-constrained weighted linear regression.
The mean absolute error (MAE) and Pearson correlation coefficient (PCC) were used to evaluate the performance of different methods. In detail, MAE measures the mean absolute error between the estimated and true values of cell type proportions or cell type methylation profiles, and PCC quantifies the correlation coefficient between the estimated and true values of cell type proportions or cell type methylation profiles, with values ranging from [−1, 1].
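Both evaluation measures are simple to compute; a minimal version using NumPy and SciPy is sketched below (the function names are our own).

```python
import numpy as np
from scipy.stats import pearsonr

def mae(true, pred):
    """Mean absolute error between true and estimated values."""
    return np.mean(np.abs(np.asarray(true) - np.asarray(pred)))

def pcc(true, pred):
    """Pearson correlation coefficient, with values in [-1, 1]."""
    return pearsonr(np.ravel(true), np.ravel(pred))[0]
```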
Determination of the number of cell types
The number of cell types should be specified first for PRMeth. However, this is not a trivial task, since the exact number cannot be known without a single-cell sequencing experiment. We here determined the number K of cell types in mixture samples using λ_BIC, a modified Bayesian information criterion (see Methods for details). Assuming that the methylation profiles Y of tumor mixtures and the methylation profiles W1 of K1 cell types are known, the penalty factor λ is taken as 0.1, 0.2, ..., 0.9, and K is chosen as K1 + 1, K1 + 2, ..., K1 + k, where k ≤ 30. All λ and K were traversed to calculate their corresponding λ_BIC values. The optimal number of cell types was determined as the K with the smallest λ_BIC value. λ_BIC was tested on the Zhang dataset with in total 7 cell types by setting K1 as 2, 3, 4, and 5, respectively. It was observed that the smallest λ_BIC values corresponded to λ = 0.3, 0.4, 0.4, 0.5, and K = 7 for all K1. When λ was fixed as 0.3, 0.4, 0.4, or 0.5, we plotted the λ_BIC values against K, as shown in Fig. 2. As expected, all λ_BIC values decreased first and then increased with the increase of K, and PRMeth successfully predicted the correct number (K = 7) of cell types in all scenarios.
Evaluation of different methods using simulation data
After successfully determining the total number of cell types, we next evaluated the estimation accuracy of PRMeth on the simulation dataset. First, the top 1000 CpG sites with the highest coefficient of variation (c_v) were selected as the input for the six methods. Then, we calculated the mean absolute error (MAE) between the true and predicted proportions of available cell types for each method at different noise levels. Here, the random noise was generated by a beta distribution whose mean is the methylation level of each site for each cell type in the base profiles and whose variance is a certain percentage of the maximum variance (i.e., mean × (1 − mean)) determined by that mean. In detail, we took 10%, 20%, 30%, and 40% of the maximum variance when processing lung cancer cell types and 5%, 10%, 15%, and 20% for normal cell types. As shown in Fig. 3A, the MAE of all six methods increased with the noise level. Compared to the other methods, PRMeth consistently obtained the lowest bias and relatively stable results at all noise levels. When the noise level was (0.1, 0.05), we evaluated the performance of PRMeth in estimating the proportions of cell types at different numbers (K2 = 2, 3, 4, 5) of unknown cell types. The results show that PRMeth always obtained the lowest and most stable bias, whereas the MAE of the remaining methods gradually increased with the increasing number of unknown cell types (Fig. 3B). For the three remaining noise levels (0.2, 0.1), (0.3, 0.15), and (0.4, 0.2), PRMeth performed similarly well (Additional file 1: Figure S2). In addition to proportion prediction, PRMeth (as well as RF) can also infer the methylation profiles of cell types. Figure 3C-F show the MAE and Pearson correlation coefficient (PCC) between the true and predicted cell type methylation profiles calculated by the two methods at different noise levels or different numbers of unknown cell types. At all noise levels, PRMeth achieved consistently higher accuracy (Fig. 3C) and correlation (Fig. 3E) compared to RF. Furthermore, when the noise level was (0.1, 0.05), the MAE of PRMeth gradually increased with the increasing number of unknown cell types but remained lower than that of RF (Fig. 3D), and its PCC gradually decreased but remained higher than that of RF (Fig. 3F). PRMeth exhibited the same behavior as in Fig. 3D, F relative to the reference-free method at the three remaining noise levels (Additional file 1: Figure S3).
We also evaluated the computational performance of these six methods. As shown in Additional file 1: Table S1, executing 20 times at 100 samples and 1000 CpG sites, both the running time and memory usage of PRMeth is a little higher than the other methods. This is because many iterations are required to reach the optimal solution. In addition, we analyzed the running time and memory usage of PRMeth when the number of samples and features gradually increased. This reveals that the running time of PRMeth increased as the number of samples and features gradually increased, but there was no clear pattern in its memory usage (Additional file 1: Table S2).
Evaluation of different methods using Zhang data
We then evaluated different methods on the Zhang dataset from three aspects, i.e., the accuracies of the six methods in estimating the proportions of known cell types, the accuracies of PRMeth and RF in estimating the proportions of all cell types, and the overall performance of proportion estimates at different numbers of unknown cell types. First, by setting K1 as 4, we calculated the MAE between the true and predicted proportions of each of the four cell types using the six methods. Figure 4A, B demonstrate that PRMeth had the lowest MAE for both CD4+ T cells and monocytes compared to the other methods. Figure 4C shows that RF had the lowest bias (MAE_RF = 0.0631) for CD8+ T cells, followed by PRMeth (MAE_PRMeth = 0.0775). Regarding the MAE for B cells, PRMeth ranked fourth, slightly higher than EpiDISH, CBS, and QP (Fig. 4D). In general, PRMeth had better results for the proportion estimates of a single cell type compared to the other methods. A similar performance was obtained by PRMeth when K1 = 2, 3, 5 (Additional file 1: Figures S4, S5 and S6). Second, we obtained the MAE between the true and predicted proportions for each of all cell types using PRMeth and RF when K1 = 3. Except for CD8+ T cells, the MAE of PRMeth was lower than that of RF for the remaining six cell types (Fig. 4E). Overall, our method had higher accuracy in predicting the proportions of each cell type compared to RF when K1 = 2, 3, 4, 5 (Fig. 4E and Additional file 1: Figure S7). Finally, the PCC between the true and predicted proportions of known cell types obtained by the six methods at different numbers of unknown cell types is shown in Fig. 4F. As the number of unknown cell types increased, the PCC of both PRMeth and the reference-based methods decreased. An exception is RF, which does not require reference data as input. It is clear that the PCC of PRMeth was always the highest and that of the reference-free method was always the lowest. When calculating the MAE between the true and predicted proportions of known cell types using the six methods at different numbers of unknown cell types, we found that PRMeth consistently showed superiority over the other methods (Additional file 1: Figure S8).
In addition, we estimated the methylation profiles of cell types using PRMeth and RF. We found that the accuracy and correlation of the methylation profiles obtained by PRMeth at different numbers of unknown cell types were higher than those of RF (Additional file 1: Figure S9).
Evaluation of different methods using whole blood data
Next, we further validated our method on whole blood samples. We calculated the MAE between the true and estimated proportions of known cell types for the six methods at K1 = 2, 3, 4, 5. As shown in Fig. 5A-C and Additional file 1: Figure S10, PRMeth showed the lowest bias at all values of K1. We then compared all cell type proportions predicted by PRMeth with the true proportions measured by flow cytometry. This reveals that the estimation accuracy of PRMeth increased with increasing K1 (Additional file 1: Figure S11 and Fig. 5D), and only a few predictions deviated from the true values at K1 = 5 (Fig. 5D).
Similarly, we also estimated the cell type methylation profiles and found that the accuracy and correlation of PRMeth were consistently higher than those of RF (Additional file 1: Figure S12).

Application to TCGA data

Finally, we applied PRMeth to real tumor samples from TCGA. We selected seven types of immune cells (including monocytes, dendritic cells, macrophages, eosinophils, naive T cells, CD8+ T cells, and natural killer cells) as the known partial reference data and then deconvolved 400 tumor samples, comprising 100 SKCM samples, 100 BRCA samples, 100 LAML samples, and 100 THYM samples. We first determined the total number of cell types in the four types of tumor samples using λ_BIC; the resulting K values were 32, 29, 24, and 22, respectively. Because tumor tissue is a mixture of different cell types with a laminated structure that contains multiple cell types with different morphologies in each layer [38], we combined some cell types and assumed that the total numbers of cell types were 18, 16, 12, and 11 for SKCM, BRCA, LAML, and THYM, respectively. We then estimated the proportions of all cell types in these tumor samples using PRMeth and converted the absolute proportions of immune cells into relative proportions of each immune cell type among all immune cells. As expected, different tumor samples showed different infiltration patterns of immune cells (Fig. 6A). In invasive breast carcinoma samples, macrophages occupied the highest proportion among all immune cells, which was consistent with previous literature findings [39] that a hallmark of breast cancer is high infiltration of M2 tumor-associated macrophages. The high infiltration levels of CD8+ T cells and macrophages in skin cutaneous melanoma samples were consistent with the study [40]. Acute myeloid leukemia and thymoma samples had high proportions of monocytes [35] and naive T cells [41], respectively. To investigate the relationship between cell type proportions and tumor types, we used the Shannon index [42], which represents the diversity of biomes, to describe the degree of heterogeneity of tumor samples. As shown in Fig. 6B, the heterogeneity scores (i.e., 1.6807, 1.6555, 1.3524, and 1.2401) of BRCA, SKCM, LAML, and THYM were significantly different, which illustrates that the proportions estimated by PRMeth are biologically meaningful. We also analyzed the impact of the predicted proportions of cell types on the survival of cancer patients. We first used the surv_cutpoint function in the survminer package of R to divide cancer patients into high- and low-infiltration groups based on the proportions of specific cell types (including known immune cells and estimated unknown cells) and then used Cox proportional hazards regression to calculate the survival rates of these two groups. We found that SKCM patients with a high infiltration level of CD8+ T cells and THYM patients with a high infiltration level of macrophages both had good overall survival (p = 0.0022 and 0.02, Fig. 6C, E), which was consistent with previous findings by Ma et al. [40] and Yang et al. [43]. In contrast, LAML patients with a high infiltration level of NK cells had poorer overall survival than those with a low infiltration level (p = 0.0449, Fig. 6D), which was consistent with the results [44] that activated NK cells with high expression were associated with a poor prognosis. In addition, we also found that several unknown cell types had an impact on the survival of cancer patients (Fig. 6F-J).
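The per-sample heterogeneity score used above is the Shannon diversity of the estimated cell type proportions. A minimal sketch is given below; the natural logarithm is an assumption, since the paper does not state the base used.

```python
import numpy as np

def shannon_index(p, eps=1e-12):
    """Shannon diversity H = -sum_k p_k * ln(p_k) over cell type proportions."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                      # make sure proportions sum to 1
    return float(-np.sum(p * np.log(p + eps)))

# A sample dominated by one cell type scores lower than an even mixture:
# shannon_index([0.9, 0.05, 0.05]) ~= 0.39 < shannon_index([1/3, 1/3, 1/3]) ~= 1.10
```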
Discussion
In this paper, we proposed a cell type decomposition model (PRMeth) based on partially available DNA methylation data, which employs non-negative matrix factorization and an iterative optimization algorithm. Given reasonable parameter settings, PRMeth could infer the proportions of all cell types and recover the methylation profiles of unknown cell types effectively. The study on the TCGA dataset showed that the immune cell proportions estimated by PRMeth were largely consistent with previous studies and were biologically meaningful. Compared to existing methods, the advantages of PRMeth are mainly reflected in the following points. First, PRMeth is applied to DNA methylation data, which are relatively stable and easier to measure. Second, using partial DNA methylation data as a reference reduces the difficulty of obtaining complete DNA methylation data. Third, PRMeth can infer not only the proportions of known cell types but also those of unknown cell types. Fourth, although the PRMeth method is driven by cancer research, it can be applied to other tissues, such as blood, to study the composition of cell types associated with other diseases, such as autoimmune diseases.
Despite its advantages, our study also suffers from the following limitations. First, our method requires the total number of cell types as input. The results on the Zhang dataset show that our method could obtain the exact total number of cell types using λ_BIC. However, the total number of cell types is often uncertain because all cells of a complex tumor tissue form a laminated structure. In other words, cells are grouped by similarity, so the total number of cell types can be determined by different groupings. Therefore, we encourage users to conduct downstream association analyses by choosing a reasonable K in their study. Second, PRMeth does not apply to the estimation of cell type proportions for a single sample. In the future, we will expand the applicability of PRMeth and explore the relationship between cell type proportions and tumor subtypes, which may help to determine the optimal treatment regimen for a specific patient and predict potential targets for cancer immunotherapy.
Conclusion
Different from the available reference-based and reference-free methods, the proposed method PRMeth is based on partial reference information, which is more in line with real clinical practice. It not only circumvents the difficulty of obtaining complete DNA methylation reference data but also obtains satisfactory deconvolution accuracy, which will be conducive to the reduction of medical costs, the analysis of tumor heterogeneity, and the exploration of new directions of cancer immunotherapy. | 2022-08-25T13:09:05.800Z | 2022-08-24T00:00:00.000 | {
"year": 2022,
"sha1": "9563663081bb44de5d24ab64b64785d5bf3e8d20",
"oa_license": "CCBY",
"oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-022-04893-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eab7f22c459706daf18fbe587ee319ae9e213e9f",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
4593430 | pes2o/s2orc | v3-fos-license | LncRNAs regulate the cytoskeleton and related Rho/ROCK signaling in cancer metastasis
Some of the key steps in cancer metastasis are the migration and invasion of tumor cells; these processes require rearrangement of the cytoskeleton. Actin filaments, microtubules, and intermediate filaments involved in the formation of cytoskeletal structures, such as stress fibers and pseudopodia, promote the invasion and metastasis of tumor cells. Therefore, it is important to explore the mechanisms underlying cytoskeletal regulation. The ras homolog family (Rho) and Rho-associated coiled-coil containing protein serine/threonine kinase (ROCK) signaling pathway is involved in the regulation of the cytoskeleton. Moreover, long noncoding RNAs (lncRNAs) have essential roles in tumor migration and guide gene regulation during cancer progression. LncRNAs can regulate the cytoskeleton directly or may influence the cytoskeleton via Rho/ROCK signaling during tumor migration. In this review, we focus on the regulatory association between lncRNAs and the cytoskeleton and discuss the pathways and mechanisms involved in the regulation of cancer metastasis.
Background
Cell migration requires cytoskeletal reorganization, which plays a critical role in cancer metastasis [1,2]. In particular, some classical signaling pathways are involved during cytoskeletal reorganization, such as the ras homolog family (Rho) and Rho-associated coiled-coil protein kinase (ROCK) signaling pathway [3,4].
Approximately 75% of human genomic DNA can be transcribed into RNA, but only 2-3% of the genome encodes proteins; therefore, in the entire transcriptome, the proportion of noncoding RNAs is far higher than that of protein-coding mRNAs [5]. Among the former, a lncRNA is a transcript longer than 200 nucleotides that does not code for a protein; lncRNAs can play important roles in the process of tumor metastasis through multiple mechanisms [6]. The abnormal expression of lncRNAs is related to the poor prognosis of patients with cancer. Studies have revealed that the lncRNA AFAP1-AS1 promotes the invasion and migration of nasopharyngeal carcinoma (NPC) cells by stimulating stress fiber formation [7]. The lncRNA Pvt1 oncogene (PVT1) is related to poor prognosis in colorectal cancers by inhibiting apoptosis [8]. PVT1 is an indicator of poor prognosis for gastric cancer and promotes cell proliferation by regulating the epigenetic expression of p15 and p16 [9]. Depletion of the lncRNA urothelial cancer associated 1 (UCA1) induces radiosensitivity and decreases proliferative capacity [10]. Thus, lncRNAs can promote tumor cell invasion, migration, and proliferation, reduce radiosensitivity, and inhibit tumor cell apoptosis, thereby accelerating tumor progression, predisposing cancer patients to distant metastasis, and resulting in poor prognosis.
LncRNAs can directly interact with cytoskeletal proteins to change the three-dimensional structure of cells and can regulate the cytoskeleton through the Rho/ROCK signaling pathway. Therefore, the interactions among lncRNAs, Rho/ROCK signaling, and the cytoskeleton underlie the ability of a cell to become motile, eventually leading to tumor migration [7,11]. This review describes the current knowledge about the mechanisms underlying cytoskeleton reprogramming, followed by discussion of the roles of lncRNAs. The interaction between lncRNAs and cell migration suggests new therapeutic targets in cancer metastasis.
Cytoskeletal reorganization and cancer cell movement
The cytoskeleton refers to the structure of the protein fiber network in eukaryotic cells. It plays an important role in the maintenance of cell shape, cell movement, transportation of substances in cells, and cytokinesis. Eukaryotic cells contain three main types of cytoskeletal filaments: microfilaments, microtubules, and intermediate filaments [12].
Microfilaments are also called actin filaments. They are approximately 7 nm in diameter and are composed of two strands of spiral fibers, which are formed by actin polymerization [13]. The basic unit of the microfilament is globular actin (also known as G-actin). Actin monomers, one after another, link to form an actin chain, and two actin chains twist around each other to form a strand of microfilament [14]. This actin polymer is filamentous actin (F-actin). The main functions of microfilaments are the formation of stress fibers, cell movement, and cytokinesis [15].
The structure and function of microfilaments are regulated by a variety of proteins. These proteins are associated with microfilaments and are known as microfilament-associated proteins; some examples include capping proteins, the Arp2/3 complex, cofilin, etc. [16,17].
The capping protein can selectively block one end of filamentous F-actin, resulting in the shortening or extension of the microfilament, which is vital for the high motility of cancer cells [18]. Research has shown that the capping protein scinderin (SCIN) regulates actin and then participates in the migration of tumor cells. In addition, the silencing of SCIN in vitro and in vivo significantly inhibits the formation of filopodia and reduces the migratory ability of gastric cancer cells [19]. The Arp2/3 complex is an actin-related protein and can nucleate the microfilament (nucleation) [20]. The entire complex can be bound to the microfilament to allow new microfilaments to be generated. In tumor cells, the function of the Arp2/3 complex is related to the movement of the lamellipodia [21]. Cofilin is a family of actin-binding proteins that disassemble microfilaments [22]. The depolymerization of cofilin can change the adhesion between cells and the extracellular matrix and ultimately promote cell migration [23,24]. Fascin is a type of actin-bundling protein [25]. The primary functions of fascin are related to adhesion and the formation of filopodia in the movement of cancer cells [26].
Microtubules have a diameter of 25 nm [27]. They are composed of α and β tubulin subunits that form tubulin dimers [28]. Microtubules function by resisting compression and bending to maintain cell morphology [29]. During cell migration, microtubules, together with the attached dynein, may release a signal to promote focal adhesion depolymerization [30].
The diameter of an intermediate filament is 10 nm, between that of microtubules and microfilaments. This structure is the most stable cytoskeletal component and plays a major role in supporting the cell [31]. Microtubules and microfilaments are assembled from spherical proteins, while intermediate filaments are assembled from long, rod-shaped proteins [32]. The intermediate filament proteins can be divided into six types based on similarities in amino acid sequence and protein structure: acidic and basic keratins, vimentin, neurofilaments, nuclear lamin, and nestin. Vimentin is the most widely distributed of all intermediate filament proteins [33][34][35]. Vimentin regulates cell adhesion molecules and other components and is involved in tumor cell adhesion, the epithelial-mesenchymal transition (EMT), migration, and invasion [36][37][38].
When invading the dense extracellular matrix, cancer cells need to have a high degree of deformability. The network structure of fibers that is composed of microfilaments and various microfilament binding proteins under the plasma membrane is called the cell cortex [39]. The cell cortex can push the cell membrane to form protrusions, including lamellipodia, filopodia, and invadopodia [40].
Lamellipodia are at the front end of a migrating cell and contain actin filament branching structures that form an actomyosin branch, which is more mature, and then extend to form lamellar protrusions. Lamellipodia usually help the cell move forward and can also drive random or continuous cell migration that is associated with the migratory phenotype of tumor cells [41]. Lamellipodia generate the driving force for cell migration [42].
Filopodia are small fingerlike cell protrusions. They contain an F-actin parallel fascicular arrangement. Filopodia are formed by actin polymerization, extending to the front end of the cell, and are especially prominent in migrating cells [43]. The contractility of filopodia is weaker than that of lamellipodia. Nonetheless, in some highly invasive tumor cells, lamellipodia are not formed, but a large number of filopodia can be observed [44,45]. Therefore, filopodia may be associated with the invasive phenotype of tumor cells.
Invadopodia are rich in actin microfilaments and adhesion proteins, such as integrin, focal adhesion kinase (FAK), and vinculin, which form a ring around the actin bundles [13]. The formation of invadopodia releases a variety of matrix metalloproteinases (MMPs) and degrades the extracellular matrix (ECM) to promote cell invasion [46]. ECM degradation mediated by invadopodia can also generate cell traction, which helps tumor cells to invade new sites [47].
Interestingly, fast-moving cells mainly form lamellipodia, while nearly immobile cells display both lamellipodia and filopodia. Lamellipodia, filopodia, and invadopodia can remodel the cytoskeleton by changing microfilaments, microtubules, and intermediate filaments. In the process of tumor cell migration, cytoskeletal reorganization is regulated by noncoding RNAs and signaling pathways [48,49] (Fig. 1).
LncRNAs regulate the cytoskeleton in cancer
lncRNAs can affect the migration of tumor cells by regulating the cytoskeleton or related proteins [50][51][52]. The most direct evidence comes from the function of a lncRNA called "downregulated in hepatocellular carcinoma" (Dreh). The expression of Dreh is lower in cancer tissues than that in normal tissues. Patients with high expression of Dreh show a low recurrence rate and long survival time. Dreh can inhibit cell migration, and further research showed that Dreh binds to and inhibits intermediate filaments and prevents cancer cell metastasis by changing the cytoskeletal structure and cell morphology [53].
Another cytoskeleton-related lncRNA is LINC00152, which is also known as cytoskeleton regulator RNA. The expression of LINC00152 is upregulated in tissue samples from various cancers. For example, Chen et al. showed that LINC00152 expression is increased in 60 human lung adenocarcinoma tissue samples relative to paired normal tissues [54]. Müller et al. reported the upregulation of LINC00152 in pancreatic cancer tissue [55]. LINC00152 is associated with a poor prognosis of tongue carcinoma with invasion and metastasis and is overexpressed in a variety of tumors, such as breast cancer, lung cancer, gastric cancer, liver cancer, gallbladder cancer, and colorectal cancer. Therefore, LINC00152 can serve as a new tumor marker [54,[56][57][58][59][60][61]. In the breast cancer cell line MDA-MB-231, LINC00152 regulates target genes involved in cytoskeletal remodeling, including tubulin tyrosine ligase, the Rho guanosine triphosphatase (GTPase) Rhobtb3, and plakophilin 4 [60]. An Ingenuity Pathway Analysis revealed that some of the target genes of LINC00152 are closely related to the cell spreading pathway, including actin polymerization-driven processes, the Rho family of GTPase promoters, and the mTORC2 complex [62,63]. Accordingly, in experiments in which LINC00152 expression was knocked down using a locked nucleic acid and F-actin was fluorescently labeled, the LINC00152-depleted cells were smaller and rounder, showed actin reorganization and a reduction in stress fibers, and displayed thick actin fibers in the cortex compared with the control group [60]. In addition, according to the literature, LINC00152 may affect F-actin reorganization by regulating the expression of Golgi phosphoprotein 3 (GOLPH3) [64].
Vimentin is responsible for maintaining the integrity of the cytoskeleton and cell shape [65]. Many lncRNAs have been shown to affect vimentin, including HOX transcript antisense RNA (HOTAIR), LOC344887, and colon cancer associated transcript 2 (CCAT2). HOTAIR is relevant to small cell lung cancer invasiveness by suppressing cell adhesion-related genes such as astrotactin 1 (ASTN1) and protocadherin alpha 1 (PCDHA1) [66]. In cervical carcinoma, HOTAIR promotes the migration and invasiveness of HeLa cells by regulating the expression and organization of vimentin [51]. The inhibition of HOTAIR significantly promotes the collapse of the vimentin intermediate-filament network to lead to a decrease in cell migration and invasion [51].
Vimentin is a marker of EMT that appears during cancer metastasis. LOC344887 increases the migration and invasion of gallbladder cancer cells. In particular, LOC344887 leads to EMT by increasing the protein expression of vimentin [67]. Similarly, the lncRNA CCAT2 promotes hepatocellular cancer metastasis by positively regulating vimentin and inducing EMT [68]. Vimentin intermediate filaments are strongly involved in cell shape, focal adhesion, and motility by altering these characteristics during EMT [69].
The lncRNA papillary thyroid cancer susceptibility candidate 2 (PTCSC2) has one unspliced isoform and several spliced isoforms, all of which show thyroid-specific expression. Myosin-9 (MYH9) interacts with the lncRNA PTCSC2 [70]. Further studies have shown that MYH9 binds to the FOXE1 promoter region and PTCSC2 to regulate FOXE1 promoter activity. MYH9 participates in the generation of cell polarity, cell migration, cell-cell adhesion processes, and cytoskeleton maintenance by binding to actin filaments [71].
The expression of the lncRNA growth arrest-specific 5 (GAS5) is downregulated in glioma tissues. GAS5 inhibits the migration and invasion of the U87 and U251 human glioma cell lines [72]. Mechanistically, overexpression of GAS5 increases the expression of plexin C1 by downregulating miR-222 [72]. Plexin C1 encodes a member of the plexin family; plexins are transmembrane receptors for semaphorins, which regulate cell motility and migration. Furthermore, plexin C1 targets cofilin by inducing cofilin inactivation rather than by decreasing the cofilin amount.
Cofilin stimulates microfilament disassembly to promote cell motility during tumor migration and invasion [72,73]. This finding may indirectly explain why GAS5 inhibits the migration and invasion of glioma cells by reorganizing the cytoskeleton; nevertheless, the mechanism needs further study.
The lncRNA UCA1 is upregulated in bladder cancer and induces EMT, migration, and invasion of bladder cancer cells. Mechanistically, UCA1 regulates bladder cancer cell migration and invasion through miR-145 and its target genes, which encode the actin-bundling protein fascin and zinc finger E-box binding homeobox 1 and 2 (ZEB1/2) [74]. UCA1 is a direct target of hsa-miR-145, which interacts with a binding site located in exons 2 and 3 of UCA1. UCA1 thus mediates bladder cancer migration and invasion through the miR-145-ZEB1/2-fascin pathway [74]. Fascin localizes along the entire length of all filopodia. RNA interference against fascin reduced the number of filopodia, and the remaining filopodia had abnormal morphology and loosely bundled actin organization [75,76].
Thus, different lncRNAs can influence the migration of tumor cells by targeting various components of the cytoskeleton and its associated proteins (Fig. 1 and Table 1).
Rho/ROCK signaling in cytoskeletal reorganization
Many receptor proteins activated at the plasma membrane can initiate cytoskeletal reorganization; these signals are all mediated by the Rho GTPase family and its downstream effector ROCK, which together compose the Rho/ROCK signaling pathways [77,78]. The most studied Rho GTPases can be subdivided into three classes: Rho (RhoA, RhoB, and RhoC), Rac (Rac1, Rac2, and Rac3), and cell division cycle 42 (Cdc42). Other less studied GTPases include RhoD and RhoE [79-81].
Rho promotes the assembly of actin and myosin into stress fibers and drives focal adhesion complex assembly [82,83]. RhoA is present at the cell membrane when it is active [84]. RhoA regulates the generation of actomyosin bundles, stress fibers, focal adhesions, and lamellipodia [85]. RhoB is found in endosomes and at the plasma membrane; its role in cancer progression remains unclear, although it responds to specific signals in the tumor microenvironment [84]. RhoC modulates phagosome formation by actin cytoskeletal remodeling via mDia1 [86].
Rac primarily promotes the formation of lamellipodia and invadopodia. Rac1 localizes mainly to the plasma membrane and drives the formation of lamellipodia and invadopodia [87]. Rac2 is critical for cell adhesion to intercellular adhesion molecule-1 (ICAM-1) and for immunological synapse formation [88]. Rac3 is critical for integrating the adhesion of invadopodia to the extracellular matrix (ECM) to allow invadopodia to degrade the ECM [89].
Cdc42 promotes the formation of filopodia and induces cell migration and metastasis [90]. Other less studied GTPases, such as RhoE, can bind to ROCK1 and inhibit its activity [91,92]. Phosphoinositide-dependent kinase 1 activates ROCK1 by opposing this inhibition by RhoE, thereby promoting cell motility [92].
The ROCK family includes two members, ROCK1 and ROCK2 [93], which are encoded by two different genes [94,95]. ROCK1 plays a key role in the formation of stress fibers, and this isoform is mainly responsible for rigidity-dependent invadopodia activity through actomyosin contractility [96,97]. ROCK2 is important for phagocytosis, cell contraction and stabilizing the cytoskeleton [78,97,98].
Rho/ROCK signals regulate and balance the formation of lamellipodia, filopodia and invadopodia, and these signals promote the degradation of the extracellular matrix [99].
Rac1 promotes actin polymerization during lamellipodium formation through the WAVE complex and subsequent activation of the Arp2/3 complex [100].
Cdc42 activates the formin protein mDia2 to regulate actin nucleation and the elongation of microfilaments [101,102]. Cdc42 can also activate N-WASP to stimulate the Arp2/3 complex and induce actin polymerization. The straight, parallel alignment of microfilaments forms filopodia [99].
RhoA and RhoC play important roles during the formation of invadopodia. RhoA activates mDia2 and further induces the formation of linear actin bundles, resulting in the elongation of invadopodia [103]. Binding of endothelin-1 (ET-1) to its receptor ETAR promotes the formation of the β-arrestin/PDZ-RhoGEF signaling complex, which activates ROCK/LIMK/cofilin signaling through RhoC activity and generates actin remodeling and invadopodia formation [104].
In addition to regulating the formation of invadopodia, Cdc42 plays an important role in the trafficking of MMPs to the invadopodia. For example, Cdc42 induces IQGAP1 binding at invadopodia to traffic MT1-MMP (MMP14) via vesicles [105,106]. ROCK expression increases during pancreatic cancer progression, and ROCK consequently increases the phosphorylation of MLC2. Phosphorylated MLC2 causes actomyosin contraction and then induces the release of the stromelysin MMP10 and the collagenase MMP13. Eventually, these MMPs promote extracellular matrix remodeling to enable invasive growth [94] (Fig. 2).
LncRNAs regulate Rho/ROCK signaling during tumor migration
LncRNAs can regulate cancer cell migration by targeting Rho/ROCK signaling; examples include metastasis-associated lung adenocarcinoma transcript 1 (MALAT1), actin filament associated protein 1 antisense RNA 1 (AFAP1-AS1), and maternally expressed 3 (MEG3) [7,107,108]. Knockdown of the lncRNA MALAT1 reduces the protein expression levels of RhoA, ROCK1, and ROCK2 and decreases the number of actin stress fibers in osteosarcoma cells, indicating that MALAT1 may promote osteosarcoma cell migration through the RhoA/ROCK pathway [107]. Another study suggested that MALAT1, miR-1, and cdc42 form a competitive endogenous RNA (ceRNA) network in breast cancer cells [108]. MALAT1 can bind and inhibit miR-1, which otherwise binds to the 3'UTR of cdc42 and decreases its expression; by sequestering miR-1, MALAT1 increases cdc42 expression and induces the migration and invasion of breast cancer cells [108].
Our research group has found that the lncRNA AFAP1-AS1 leads to the loss of stress fiber formation in nasopharyngeal carcinoma (NPC) by influencing RhoA/Rac2 signaling and F-actin polymerization [7]. Zhang et al. also found that increased expression of AFAP1-AS1 significantly correlates with pathological staging and lymph-vascular space invasion in patients with hepatocellular carcinoma via inhibition of RhoA/Rac2 signaling. Overall, AFAP1-AS1 may promote NPC and hepatocellular carcinoma metastases through RhoA/Rac2 signaling [109].
Wang et al. demonstrated that downregulation of the lncRNA MEG3 is associated with lymph node metastasis in primary thyroid cancer; mechanistically, MEG3 suppresses Rac1 [110]. Vinculin is another motility-associated protein that is synthesized in migrating cells, and vinculin-deficient cells extend unstable lamellipodia and filopodia. The lncRNA XLOC010623 activates the TIAM1/Rac1 and RhoA/ROCK2 signaling pathways, increases the expression of vinculin and causes the migration of adipose tissue-derived stem cells [111]. The lncRNA SchLAH inhibits the migration of HCC cells through RhoA and Rac1 [112]. The regulatory mechanisms of the lncRNAs ABHD11-AS1 and TDRG1 are similar; both induce the expression of RhoC and MMPs during tumor progression [113,114].

In summary, lncRNAs are involved in cancer metastasis mainly through reorganizing the cytoskeletal structure and regulating the expression of molecules in the RhoA/ROCK pathway, resulting in an increased number of actin cytoskeleton fibers, stress fiber formation, the formation of lamellipodia and filopodia, tumor cell adhesion, and angiogenesis. The regulatory mechanisms include ceRNA activity and direct binding to Rac1 and other molecules in the Rho/ROCK pathway. Understanding these relationships may provide insights into human lncRNA regulation, cytoskeletal structure, cell migration, and cancer metastasis. We present various lncRNAs in Fig. 3 and Table 2 and describe many lncRNAs that regulate tumor metastasis through cytoskeletal remodeling via the Rho/ROCK pathway. LncRNAs may be a promising target for future cancer therapy.
Conclusions
From a physics viewpoint, the development of a tumor is a biological process driven by mechanics and regulated by the biochemical signaling pathways of tumor cells; for example, changes in cellular mechanical properties can activate signal transduction pathways. In this review, we have associated the physical movement of the cell with Rho/ROCK signaling and discussed the regulatory involvement of lncRNAs, which is significant for future research.
Many studies have shown that Rho/ROCK-mediated cytoskeletal regulation plays a key part in cancer metastasis. External factors stimulate the transition of normal cells to tumor cells; the activity of intracellular Rho signaling increases, the arrangement of cytoskeletal fibers changes, and cell morphology is consequently affected. By contrast, relatively few specific molecules in the Rho/ROCK pathway are known to be regulated by lncRNAs, and their specific mechanisms of action are not understood. Therefore, the mechanisms of Rho/ROCK regulation by lncRNAs and their relationship to metastasis remain to be studied.
Targeting Rho/ROCK signaling-associated lncRNAs could be useful for inhibiting the migration of cancer cells and may be a new target for the treatment of cancer metastasis. Some questions need to be addressed. For example, do lncRNAs regulate other signaling pathways that are associated with the cytoskeleton? How can the cytoskeleton be targeted via lncRNAs for clinical cancer treatment? With the development of lncRNA research and technologies for measuring cell movements, the relationship between lncRNAs and cell migration and invasion in tumor metastasis can be uncovered. | 2018-04-05T13:23:49.341Z | 2018-04-04T00:00:00.000 | {
"year": 2018,
"sha1": "07c6ba706cf5fc5d4da89e676fc89217e4ba1686",
"oa_license": "CCBY",
"oa_url": "https://molecular-cancer.biomedcentral.com/track/pdf/10.1186/s12943-018-0825-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07c6ba706cf5fc5d4da89e676fc89217e4ba1686",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
56260278 | pes2o/s2orc | v3-fos-license | Spatial variability in tropospheric peroxyacetyl nitrate in the tropics from infrared satellite observations in 2005 and 2006
Peroxyacetyl nitrate (PAN) plays a fundamental role in the global ozone budget and is the primary reservoir of tropospheric reactive nitrogen over much of the globe. However, large uncertainties exist in how surface emissions, transport and lightning affect the global distribution, particularly in the tropics. We present new satellite observations of free-tropospheric PAN in the tropics from the Aura Tropospheric Emission Spectrometer. This dataset allows us to test expected spatiotemporal distributions that have been predicted by models but previously not well observed. We compare here with the GEOS-Chem model with updates specifically for PAN. We observe an austral springtime maximum over the tropical Atlantic, a feature that model predictions attribute primarily to lightning. Over northern central Africa in December, observations show strong interannual variability, despite low variation in fire emissions, that we attribute to the combined effects of changes in biogenic emissions and lightning. We observe small enhancements in free-tropospheric PAN corresponding to the extreme burning event over Indonesia associated with the 2006 El Niño.
Introduction
Peroxyacetyl nitrate (PAN) provides a thermally unstable reservoir for nitrogen oxide radicals (NOx), facilitating their long-range transport at low temperatures and eventual release in warmer regions of the remote troposphere, where they most efficiently contribute to ozone (O3) production (Singh and Hanst, 1981). PAN chemistry effectively reduces O3 production in NOx source regions and increases it in remote regions of the troposphere (Wang et al., 1998; Fischer et al., 2013). PAN is thought to be the dominant species in the reactive nitrogen budget over much of the globe (Roberts et al., 2007), but it is a particularly difficult compound to simulate in models due to the complexity of PAN chemistry and uncertainties in precursor emissions. Comprehensive in situ measurements of PAN are limited for the troposphere, particularly in the tropics (Maloney et al., 2001; Singh et al., 1996).
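Because this temperature dependence drives PAN's reservoir behavior, a short numerical illustration may help. The sketch below evaluates an Arrhenius-type expression for the PAN thermal decomposition lifetime; the rate parameters are approximate, compilation-style values assumed here for illustration only, and they are not taken from this paper.

```python
import numpy as np

# Illustrative Arrhenius estimate of the PAN thermal decomposition
# lifetime, tau(T) = 1 / k(T) with k(T) = A * exp(-Ea_R / T).
# Both parameters below are assumed, order-of-magnitude literature values.
A_FACTOR = 2.52e16    # s^-1, assumed pre-exponential factor
EA_OVER_R = 13573.0   # K, assumed activation temperature (Ea/R)

def pan_thermal_lifetime_s(temperature_k):
    """Return the e-folding lifetime (s) of PAN against thermal decomposition."""
    k = A_FACTOR * np.exp(-EA_OVER_R / temperature_k)
    return 1.0 / k

for t in (298.0, 273.0, 250.0, 230.0):
    print(f"T = {t:5.1f} K -> lifetime ~ {pan_thermal_lifetime_s(t) / 3600.0:.3g} h")
```

With these assumed parameters, the lifetime is under an hour near 298 K but grows to months or longer at upper-tropospheric temperatures, which is the behavior that allows PAN to carry NOx over long distances before releasing it in warmer air.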
Since PAN abundance can be highly variable in space and time, it is difficult to know whether presently available but limited in situ measurements are broadly representative. Satellite measurements offer a new opportunity to place constraints on our understanding, providing global coverage over multiple years. Global measurements of PAN have previously been obtained via thermal-infrared measurements from the limb-viewing Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite sensors (Tereszchuk et al., 2013; Moore and Remedios, 2010; Wiegele et al., 2012; Pope et al., 2016). Limb sounding measurements of PAN for limited time periods, but at relatively high spatial density, have also been made from the Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA), flown on the Space Shuttle for two separate missions in 1994 and 1997 (Ungermann et al., 2016). These observations provide information in the uppermost troposphere and lower stratosphere. Observations in the nadir-viewing geometry can provide sensitivity to PAN lower in the troposphere, where its variable stability makes its role in O3 production more important to understand. PAN is formed rapidly in biomass burning plumes, and isolated cases of elevated PAN in biomass burning plumes in the troposphere have been observed from the MetOp Infrared Atmospheric Sounding Instruments (IASI) and Aura Tropospheric Emission Spectrometer (TES) sensors (Clarisse et al., 2011; Alvarado et al., 2011). More recently, global measurements of tropospheric PAN from Aura-TES have been obtained and are described in Payne et al. (2014). These TES PAN retrievals have so far been utilized in studies of the influence of fires on atmospheric composition in boreal spring (Zhu et al., 2015), the role of PAN in seasonal transport of East Asian pollution (Jiang et al., 2016) and the seasonality and interannual variability of PAN in the eastern Pacific (Zhu et al., 2017). Here we present new observations of TES PAN in the tropics. We focus on 2005 and 2006 in austral spring (the season of peak biomass burning) and compare these observations with simulations from the GEOS-Chem global chemical transport model.
The tropical troposphere plays an important role in global oxidation capacity, and understanding the role of PAN chemistry is necessary to understand the different contributions to the NOx reservoir and the O3 enhancement in the tropical south Atlantic. PAN can be formed within fire plumes because NOx is co-emitted with large quantities of short-lived non-methane volatile organic compounds (NMVOCs), and it can form when biogenic NMVOCs react with NOx produced by lightning. Formation of PAN in the cold upper troposphere over this region acts to sequester NOx and decrease O3 formation. The contribution of biomass burning to the NOx reservoir and the O3 enhancement in the tropical south Atlantic remains a long-standing issue (Anderson et al., 1993; Gregory et al., 1996; Jacob et al., 1996; Edwards et al., 2003; Ziemke et al., 2009). Models predict that lightning is the most important source of PAN in the atmosphere of the tropical south Atlantic (Fischer et al., 2014, and references therein). However, this finding is particularly sensitive to the description of boundary layer chemistry, which remains very uncertain (Hewitt et al., 2010). Implementation of a state-of-the-science isoprene scheme (Paulot et al., 2009a, b) reduces the model sensitivity of upper-tropospheric PAN over the tropical Atlantic to lightning by changing the fraction of isoprene oxidized outside the boundary layer (Fischer et al., 2014). Elevated PAN mixing ratios (∼500 pptv) were observed in the middle to upper troposphere over the tropical south Atlantic during the October 1992 TRACE-A aircraft campaign, and an austral spring maximum in this region is predicted by state-of-the-science global chemical transport models (Fischer et al., 2014; Fadnavis et al., 2014). Limb-viewing satellite observations have shown PAN mixing ratios of ∼350 pptv at 260 hPa in this region in austral spring (Moore and Remedios, 2010; Glatthor et al., 2007). PAN observations over multiple years, in conjunction with global chemical models, offer the potential to shed light on the influence of fire emissions on the interannual variability of the tropical south Atlantic O3 maximum.
Section 2 describes the characteristics of the TES PAN retrievals, while Sect. 3 provides background on the GEOS-Chem model simulations used in this work. Section 4 describes the features observed by TES in the tropics in austral spring of 2005 and 2006. Section 5 presents the relationships between PAN and carbon monoxide (CO) in different regions and discusses model-measurement comparisons. Conclusions are presented in Sect. 6.
TES PAN retrievals
TES has been flying on the Aura satellite since 2004. TES measures nadir-viewing, spectrally resolved thermal-infrared radiances, providing information on numerous trace gases in the troposphere, including PAN. The TES PAN retrievals use an optimal estimation approach. An algorithm description is provided in Payne et al. (2014). TES has been shown to be capable of observing PAN with sensitivity to elevated concentrations (greater than ∼0.2-0.3 ppbv) in the free troposphere (between ∼800 mbar and the tropopause). Estimated single-observation errors are 30-50 %. The number of degrees of freedom for signal (DOFS), or independent pieces of information, in the TES PAN retrievals is less than 1.0, meaning that the retrievals are not sensitive to the vertical distribution of PAN in the atmosphere. As discussed in Payne et al. (2014), TES PAN retrievals are generally insensitive to near-surface variations of PAN and are sensitive primarily to variations in the free troposphere. TES PAN retrievals are being processed routinely for the whole TES dataset and are publicly available in the TES v7 Level 2 product. However, at the time of this work, the v7 product was not yet available. The TES PAN retrievals shown here were processed using a prototype algorithm for the areas and time periods of interest.
PAN retrievals are not attempted for all TES targets. As discussed in Payne et al. (2014), PAN retrievals are not attempted for cases where the water vapor or O3 from previous retrieval steps did not pass the master quality flags. PAN retrievals are also generally not attempted over sandy or rocky surfaces, such as desert or mountainous regions. The reason for this is the presence of a silicate feature in the surface emissivity spectra of those surfaces that coincides with the spectral position of the PAN absorption feature. While this is not an issue for the tropical data, we note that PAN retrievals over icy or snowy surfaces are subject to a high bias. Again, this is due to spectral features in the emissivity for these surfaces. Therefore, we recommend screening out data with surface temperature less than 270 K. Jiang et al. (2016) performed indirect comparisons of TES PAN with aircraft measurements from the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign, using the GEOS-Chem model as a transfer standard. Results of that study suggest a low bias in TES cases with surface temperatures below ∼280 K, where surfaces are not ice or snow covered. One possible explanation is that surface temperature is a proxy for the representative temperature of the column and that the low bias stems from a lack of information on temperature dependence in the HITRAN 2008 spectroscopic cross sections used in the TES retrievals. The HITRAN 2012 database includes low-temperature cross-section information for PAN. This will be considered for future versions of the TES algorithm. Based on the Jiang et al. (2016) comparisons for cases with warmer surfaces/atmospheres, we do not expect strong biases for the tropical data shown here.
We define cases for which elevated PAN is detected with confidence as those that pass basic quality checks and where the DOFS of the retrieval is greater than 0.6. Note that the use of DOFS is not, in itself, a quality flag. Retrievals with DOFS < 0.6 may converge with a good quality of fit. However, in those cases the retrieved PAN would be strongly affected by the prior constraint chosen for the retrieval. Further justification of the choice of DOFS = 0.6 as a threshold can be found in Payne et al. (2014).
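As a concrete illustration of this screening, the minimal sketch below (not the operational TES processing code; the record field names are hypothetical stand-ins for the corresponding Level 2 product fields) applies the master quality flags, the 270 K surface-temperature screen recommended above, and the DOFS > 0.6 confidence threshold.

```python
# Minimal screening sketch for TES PAN retrievals. The dictionary keys
# ("master_quality_ok", "surface_temperature_k", "dofs") are hypothetical.
def screen_tes_pan(records):
    kept, confident = [], []
    for rec in records:
        if not rec["master_quality_ok"]:
            continue  # water vapor / O3 retrieval steps failed quality flags
        if rec["surface_temperature_k"] < 270.0:
            continue  # icy/snowy surfaces: emissivity features bias PAN high
        kept.append(rec)
        if rec["dofs"] > 0.6:
            # Enough information content that the result is driven by the
            # measurement rather than by the prior constraint.
            confident.append(rec)
    return kept, confident
```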
GEOS-Chem global chemical transport model
GEOS-Chem (http://www.geos-chem.org) is a global chemical transport model driven by GEOS assimilated meteorological data from the NASA Global Modeling and Assimilation Office (GMAO). GEOS-Chem includes a state-of-the-science description of tropospheric oxidant chemistry. We used v9.01.01 with updates specifically for PAN as described in Fischer et al. (2014) (and references therein) to explore, analyze and explain the global TES PAN data. GEOS-Chem was driven by NASA GEOS-5 assimilated meteorological data with 0.5° × 0.67° horizontal resolution, 47 vertical levels and 3-6 h temporal resolution. We degraded the horizontal resolution to 2° × 2.5°. The simulations for 2005 and 2006 were preceded by a 1-year spinup.
Briefly, the version described in Fischer et al. (2014) includes updated budgets of many NMVOC PAN precursors, including acetone, ethane and propane, acetaldehyde and methylglyoxal. Terrestrial biogenic emissions of NMVOCs are calculated using the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.0) (Guenther et al., 2006). The model incorporates a new oxidation scheme for isoprene and several additional NMVOC tracers (monoterpenes, ethanol, aromatics) that also serve as PAN precursors. Other relevant updates include the treatment of emissions from fires. In particular, the model includes biomass burning emissions of shorter-lived NMVOCs (monoterpenes, aromatics); 40 % of the biomass burning NOx is directly emitted as PAN (Alvarado et al., 2010), and 35 % of fire emissions are injected into the 10 model layers above the boundary layer (Val Martin et al., 2010).
Observations of PAN in the tropics
Figure 1 shows Aura-TES PAN in the tropics for October 2006. Figure 1a shows individual observations. Volume mixing ratios (VMRs) in Fig. 1a represent an average between 800 hPa and the tropopause. Figure 1b shows the fraction of observations with elevated PAN. This fraction is the ratio of the number of TES targets for which elevated PAN is detected with confidence to the number of targets for which the PAN retrieval was attempted. Figure 1b was created by calculating the fraction of observations with elevated PAN in 4° × 5° boxes, then smoothing this field with a two-dimensional boxcar average with a width of two boxes. For October 2006, we see a high density of elevated PAN detections over the tropical south Atlantic and the surrounding landmasses and a high fraction of TES observations with elevated PAN over the tropical south Atlantic. High PAN over the tropical south Atlantic in austral spring is one of the major features in the global PAN distribution predicted by GEOS-Chem (Fischer et al., 2014). High PAN values over the tropical south Atlantic in the uppermost troposphere (∼8-16 km) have previously been observed from MIPAS (Moore and Remedios, 2010; Glatthor et al., 2007; Pope et al., 2016). The TES observations presented here provide information on the temporal evolution in austral spring 2005 and 2006. We express the gridded results as the "fraction of observations with elevated PAN", rather than as averaged PAN retrieval values, because the Aura-TES PAN retrievals are only possible for elevated PAN values. The precise details of the detection threshold depend on a number of factors, including cloud optical depth, the vertical distribution of PAN in the atmosphere and the details of the surface and atmospheric temperatures. Since water is a significant interferent in the spectral region used for the PAN retrieval, there is also some dependence of the detection threshold on the details of the water vapor profile. The vertical sensitivity of the TES PAN retrievals also varies with these factors, although in general the retrievals have highest sensitivity to variations in PAN in the free troposphere. Based on simulations over a range of conditions, Payne et al. (2014) specify an approximate detection threshold of 0.2 ppbv. Therefore, the interpretation of any averaged values would be complicated by the fact that we do not have information on the values at the low end of the true distribution.
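A possible reconstruction of the gridding behind Fig. 1b is sketched below. This is our assumption of how such a map could be built with NumPy/SciPy, not the authors' code; the tropical grid extent and the treatment of empty boxes are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def elevated_pan_fraction(lats, lons, confident, lat_step=4.0, lon_step=5.0):
    """Fraction of attempted retrievals with confident elevated-PAN detections,
    binned on a lat/lon grid and smoothed with a two-box boxcar average.
    `confident` is a boolean array aligned with `lats`/`lons`."""
    lat_edges = np.arange(-30.0, 10.0 + lat_step, lat_step)   # assumed extent
    lon_edges = np.arange(-180.0, 180.0 + lon_step, lon_step)
    attempted, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    detected, _, _ = np.histogram2d(lats[confident], lons[confident],
                                    bins=[lat_edges, lon_edges])
    frac = np.where(attempted > 0, detected / np.maximum(attempted, 1), 0.0)
    # Two-dimensional boxcar with a width of two grid boxes; empty boxes are
    # treated as zero before smoothing (an implementation choice).
    return uniform_filter(frac, size=2, mode="nearest")
```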
Figure 2 shows histograms of free-tropospheric average PAN values in different regions of the tropics for September through December 2005 and 2006. Boxes showing the geographical extent of each of these regions are shown in Fig. 1a. Histograms are calculated on a logarithmic scale in order to better allow examination of differences in the 0.1 to 0.3 ppbv range. Histograms are normalized by the total number of TES observations in each region. Also shown for each region is the total of the histogram for each month and region. This total equates to the fraction of TES observations where elevated PAN was observed. It is clear from Fig. 2 that there is considerable variation in free-tropospheric PAN between regions and from one month to the next. In general, higher PAN values and higher fractions are observed for the months where peak biomass burning occurs in those regions. For the Amazon and southern Africa, peak burning occurs in September and October. For northern Africa, peak burning occurs in December. For Indonesia, peak burning usually occurs between September and November. For 2006, the Indonesian fires began in October and persisted through November (Logan et al., 2008).
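The normalization described here can be made explicit with a short sketch (an assumed reconstruction; the bin range is illustrative): histogram the retrieved free-tropospheric PAN averages on logarithmic bins and divide by the total number of attempted retrievals, so that the histogram total equals the fraction of observations with elevated PAN.

```python
import numpy as np

def normalized_log_histogram(pan_ppbv, n_total_observations, nbins=20):
    """Log-binned PAN histogram normalized by all attempted retrievals."""
    bins = np.logspace(np.log10(0.05), np.log10(2.0), nbins + 1)  # assumed range
    counts, edges = np.histogram(pan_ppbv, bins=bins)
    return counts / float(n_total_observations), edges

# Toy usage: 3 confident detections out of 100 attempted retrievals.
frac_hist, edges = normalized_log_histogram(np.array([0.15, 0.25, 0.40]), 100)
print("fraction with elevated PAN:", frac_hist.sum())  # -> 0.03
```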
In all TES retrievals, an effective cloud optical depth is retrieved in order to mitigate the impact of clouds (Kulawik et al., 2006). The impact of clouds is to reduce the sensitivity of the measured radiance to the target trace gas concentrations. This is accounted for in the averaging kernels and therefore is reflected in the DOFS for the retrieval. As discussed in Payne et al. (2014), clouds with optical depth greater than ∼0.5 have the potential to obscure PAN signals that are comparable in magnitude to the instrument noise. We would therefore expect that the fractions shown in Figs. 1b and 2 would be, if anything, an underestimate of the true incidence of elevated PAN in the atmosphere.
For all the months shown here, TES made global survey measurements throughout the tropics, and the sampling is vastly more spatially uniform than could be obtained by any kind of in situ sampling strategy. However, the number of TES measurements in the tropics does vary somewhat between the two years shown here, with a greater number of measurements taken in 2006 than in 2005. The number of measurements in any given region also varies from one month to the next, depending on the details of instrument operation. In both 2005 and 2006 there were significantly fewer global survey measurements taken in September than in other months. For example, in the latitude band between 30° S and 10° N, there were 6665, 10 495, 10 738 and 9486 TES measurements taken in September, October, November and December 2006, respectively. In September 2005, the measurements are distributed earlier in the month, while in September 2006 the measurements are generally later in the month. It is possible that this difference in temporal sampling could account for the observed year-to-year differences in the fraction of elevated PAN over the Amazon region (South America) in September (see Fig. 2). For October, November and December, the TES measurements are spread more evenly throughout each month in both years.
In terms of year-to-year differences, strong differences are observed for northern central Africa in December. December 2005 shows elevated PAN detected with confidence in 45 % of TES observations, compared to 30 % in December 2006. GEOS-Chem simulations indicate that PAN concentrations in this region are strongly influenced by biomass burning (see Supplement). The TES CO in this region does not show marked differences between 2005 and 2006 (Logan et al., 2008). We infer from this that the observed year-to-year difference in PAN is not dominated by differences in biomass burning. MEGAN (via GEOS-Chem; see Fig. 3) does show higher monthly mean isoprene emissions over this region in December 2005 than in December 2006. The differences in isoprene emissions at specific locations in the orange box in Fig. 1a range from 10 to 50 %, and the total isoprene emissions for the region were ∼13 % higher in December 2005 than in 2006. Since biogenic emissions in the presence of lightning lead to PAN formation, stronger biogenic emissions in 2005 could contribute to higher PAN values. Logan et al. (2008) also note that there was more lightning over much of Africa (including the region considered here) in 2005 compared to 2006. GEOS-Chem simulations showing the sensitivity of PAN to lightning for December 2005 and 2006 are shown in the Supplement. More lightning NOx in December 2005 than in 2006 would also lead to enhanced PAN.

Vertical transport is also a consideration. If the surface emissions were the same, we would expect that stronger convection in a given year would enhance the impact of surface emissions on mid- to upper-tropospheric PAN. It would not only enable more efficient lofting of fire smoke but would also allow the same quantity of biogenic NMVOC emissions and/or secondary products to contribute more efficiently to aloft PAN formation for a given amount of lightning NOx. Either way, stronger convection in a given year would increase the contribution of surface emissions to PAN in the mid-troposphere, where it can be observed with nadir-viewing thermal-infrared satellite measurements. Previous studies (e.g., Nassar et al., 2009) have pointed to the difference in convection over northern central Africa between these two years and subsequent differences in O3. Nassar et al. (2009) note that convection was stronger in December 2006 than in December 2005 in this region. This would act in the opposite direction to the observed year-to-year PAN differences. We performed GEOS-Chem simulations without convection and found that the amount of PAN above northern central Africa is very sensitive to the presence of convection. Transport and scavenging in convective updrafts are coupled in GEOS-Chem (Liu et al., 2001): turning off the convection operator effectively suppresses both convective transport and scavenging in updrafts, while other related processes (e.g., lightning NOx emissions, in-cloud oxidation) remain. Figure 4 shows maps of the difference in PAN between GEOS-Chem simulations with and without convection. In a global context, northern central Africa is one of the most sensitive regions. The enhanced convection in November and December 2006 would have acted to increase mid-tropospheric PAN in this region more strongly in 2006 than in 2005. However, the PAN over this region is in fact higher in 2005 than in 2006. Therefore, we conclude that the December year-to-year PAN difference in this region is most likely associated with changes in biogenic emissions and lightning.

A noticeable difference is also observed for Indonesia in October/November, with distinctly higher PAN in 2006 than in 2005. Logan et al. (2008) have previously discussed extreme CO enhancements in October 2006, associated with the strong 2006 El Niño. During an El Niño event, the normally warm waters and associated convection over the western Pacific and maritime continent move towards the eastern Pacific, resulting in changes in the large-scale circulation. El Niño events are associated with decreases in convection and precipitation over the maritime continent. The 2006 El Niño was associated with a severe drought in Indonesia, leading to intense fires in this region. The strong enhancements in CO discussed by Logan et al. (2008) extended into the upper troposphere and lower stratosphere, as seen by the Aura Microwave Limb Sounder (e.g., Zhang et al., 2011). Given the extreme nature of this burning event, the corresponding enhancements in free-tropospheric PAN observed by TES are comparatively small, a point we return to in Sect. 5.
PAN/CO enhancement and comparisons with GEOS-Chem
In order to further explore the role of biomass burning on the observed PAN, we use coincident TES measurements of carbon monoxide (CO). Figure 5a and c show scatter plots of TES-retrieved CO versus PAN for selected regions for October 2005 and 2006. In general, elevated CO in tropical regions can be interpreted as an indication of strong fire emissions. Variability in enhancements in PAN relative to CO (ΔPAN/ΔCO) in fire plumes is driven by the efficiency of PAN formation, mixing (Yokelson et al., 2013) and transport. For example, in an evaluation of models at high latitudes, Arnold et al. (2015) note that model enhancement ratios show distinct groupings according to the model. The gray dotted lines in Fig. 5a and c show the maximum and minimum values of ΔPAN/ΔCO enhancements in aircraft measurements of boreal fire plumes, as reported in Alvarado et al. (2010), assuming a background mid-tropospheric CO value of 50 ppbv. These lines are shown here primarily to demonstrate the large range of values in aircraft observations of boreal plumes, not for the purposes of quantitative comparison with these tropical satellite observations. The TES measurements shown in Fig. 5 have not been specifically screened to establish fire influence, nor have attempts been made here to categorize the satellite measurements according to distance from fires.
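For reference, the enhancement-ratio diagnostic used in Fig. 5 can be written compactly. The sketch below assumes the 50 ppbv mid-tropospheric CO background stated above and, purely for illustration, a negligible PAN background; neither choice should be read as the authors' exact procedure.

```python
import numpy as np

CO_BACKGROUND_PPBV = 50.0   # background value stated in the text
PAN_BACKGROUND_PPBV = 0.0   # illustrative assumption

def enhancement_ratio(pan_ppbv, co_ppbv):
    """Delta(PAN)/Delta(CO) relative to assumed backgrounds; NaN where CO
    is not enhanced above background."""
    d_co = np.asarray(co_ppbv, dtype=float) - CO_BACKGROUND_PPBV
    d_pan = np.asarray(pan_ppbv, dtype=float) - PAN_BACKGROUND_PPBV
    return np.where(d_co > 0.0, d_pan / d_co, np.nan)
```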
We also compare the TES-retrieved PAN-CO relationships with those from GEOS-Chem. The PAN-CO relationships from GEOS-Chem for October 2005 and 2006 are shown in Fig. 5b and d. When comparing GEOS-Chem modeled PAN with Aura-TES observations, we sampled the model fields at the measurement locations and times. The TES averaging kernels and a priori were applied to the GEOS-Chem profiles in order to account for the sensitivity of the TES measurements. Both the TES measurements and the GEOS-Chem model show a range of ΔPAN/ΔCO ratios. A considerable number of points show ΔPAN/ΔCO enhancements higher than those previously observed in biomass burning plumes in other regions (Alvarado et al., 2010). We hypothesize that high ΔPAN/ΔCO enhancements could also conceivably be associated with a strong influence of lightning. Unlike during fires, lightning NOx is emitted without CO.
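The averaging-kernel step mentioned above follows the standard optimal-estimation smoothing relation, x_hat = x_a + A(x_m - x_a). The sketch below assumes, as is usual for TES trace-gas products, that the operation is performed in ln(VMR); it is a generic illustration rather than the project code.

```python
import numpy as np

def apply_tes_operator(x_model_vmr, x_apriori_vmr, averaging_kernel):
    """Smooth a model profile with a TES averaging kernel (assumed log-space):
    x_hat = x_a + A (x_m - x_a), with all profiles expressed in ln(VMR)."""
    xm = np.log(np.asarray(x_model_vmr, dtype=float))
    xa = np.log(np.asarray(x_apriori_vmr, dtype=float))
    return np.exp(xa + averaging_kernel @ (xm - xa))
```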
The absolute values of PAN are distinctly higher in the measurements than in the model. There are a number of possible reasons why the model might predict lower values than observed. One possible reason is a high bias in the observations. Pope et al. (2016), in a comparison between MIPAS PAN results from two different retrieval algorithms, found significant differences in the tropical PAN fields between the two sets of results and pointed to potential reasons for the differences, including differences in the way that PAN cross-section data are interpolated within the forward model used in the retrieval algorithm. They concluded that the MIPAS satellite observations are able to detect realistic spatial variations in PAN, but further work is needed to evaluate the satellite retrievals in an absolute sense. We acknowledge that this type of further work is also desirable for evaluation of the results from the TES PAN algorithm. Alternatively, the global model, with its limited spatial resolution, may not be able to capture relatively small-scale plume enhancements that could be observed by the satellite (Rastigejev et al., 2010). It is also possible that the fire injection heights in the model are inaccurate. Other possibilities include underestimation of the NOx-to-PAN conversion ratios in the model or underestimation of the NOx emissions themselves, from fires, lightning or both.
Although the absolute PAN values from TES are higher than those from GEOS-Chem, we note that both model and measurements show features that are qualitatively consistent in terms of the ΔPAN/ΔCO relationship. Both model and measurements show a distinctive signature associated with the October 2006 Indonesian fires: extremely elevated CO and distinctly low ΔPAN/ΔCO enhancement ratios. The low ΔPAN/ΔCO could be due to two factors: (1) we expect a higher emission ratio of CO relative to NOx for peat burning compared to both tropical forests and crop residue (Akagi et al., 2011; Stockwell et al., 2014), and (2) these plumes were not directly injected into the free troposphere, promoting the decomposition of PAN (Tosca et al., 2011).
We used GEOS-Chem to assess the sensitivity of the model to injection height. For October 2005 and 2006, runs were performed both for the default case, where 35 % of the fire emissions are injected into the 10 model layers above the boundary layer, and for the case where all fire emissions are injected directly into the planetary boundary layer. We found that, at least over Indonesia, the modeled free-tropospheric PAN is not strongly sensitive to the injection height. The difference in PAN between the two runs was 10 % at most (see Fig. 6). A possible reason for this is that the persistent convection in this region enables rapid lofting of PAN to the free troposphere, regardless of whether the fire injection heights are within the boundary layer or above it. The model sensitivity result suggests that the higher emission ratio of CO relative to NOx for peat burning compared to tropical forests/crop residue is the dominant reason for the low ΔPAN/ΔCO observed for the Indonesian fires. When similar runs were performed for December in northern central Africa (not shown), the difference in free-tropospheric PAN was up to 40 %, indicating that sensitivity to injection height is stronger in that region. The temperature in the lower atmosphere may also factor into the difference in sensitivity to injection heights between different regions.
Conclusions
Our findings can be summarized as follows: we observe elevated free-tropospheric PAN over the tropical south Atlantic in austral spring for the two years investigated (2005 and 2006). This feature has been predicted by models and previously observed in MIPAS satellite observations of the uppermost troposphere. The TES observations presented here provide confirmation that this feature is also observed in the nadir view. We see a strong enhancement in PAN over northern central Africa (5° S to 10° N) in December 2005 relative to December 2006. Since convection was stronger in December 2006 than in December 2005 in this region, we hypothesize that the December year-to-year PAN difference in this region is most likely associated with changes in biogenic emissions and lightning. We observe small enhancements in free-tropospheric PAN and high enhancements in CO in October/November 2006 compared to 2005, corresponding to the extreme burning event over Indonesia associated with the 2006 El Niño. Comparisons between the TES observations and the GEOS-Chem model show qualitative agreement in observed regional and year-to-year variations in ΔPAN/ΔCO enhancement ratios.
Knowledge of the PAN distribution is key to understanding the reactive nitrogen (NOy) budget that controls tropospheric O3. These new nadir-viewing satellite observations of PAN, analyzed in conjunction with a global chemical transport model, demonstrate the importance of emissions, chemistry and transport in understanding the large-scale distribution of PAN. TES PAN retrievals will be routinely processed for the entire TES data record, from 2004 to the present, in the TES version 7 data release. We suggest that nadir satellite observations of PAN will complement the existing limb satellite observations and will provide a powerful tool in understanding the reactive nitrogen budget and the global transport of pollution from polluting regions to receptor regions.
Data availability. TES PAN and CO data are archived at the NASA Langley Research Center Atmospheric Science Data Center (https://eosweb.larc.nasa.gov/project/tes/tes_table; TES Science Team, 2013, 2017a, b). The TES products can also be accessed via the NASA Reverb tool (http://reverb.echo.nasa.gov). TES monthly Lite files are also available via the Aura Validation Data Center (http://avdc.gsfc.nasa.gov). The PAN dataset used in this work, produced using a prototype algorithm developed prior to the TES v07 Level 2 release, may be obtained upon request from the corresponding author (vivienne.h.payne@jpl.nasa.gov).
The Supplement related to this article is available online at doi:10.5194/acp-17-6341-2017-supplement.
Competing interests. The authors declare that they have no conflict of interest.
Disclaimer. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Reference herein to any specific commercial product, process or service by trade name, trademark, manufacturer or otherwise does not constitute or imply its endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology.
Figure 1. (a) Gray points show locations of TES measurements during October 2006 where PAN retrievals were attempted. Colored points show cases where elevated PAN was measured in the TES spectra, colored according to the average VMR between 800 hPa and the tropopause. Values over 0.8 ppbv are colored red. Boxes show regions highlighted in Figs. 2 and 5. (b) Fraction of TES measurements where elevated PAN was detected in TES spectra, showing a maximum in the tropical south Atlantic.
Figure 2. Variation of PAN for different regions in the tropics, as measured by TES, for September through December 2005 (left two columns) and 2006 (right two columns). Two-dimensional histograms show the distribution of PAN values measured by TES for September through December 2005 and 2006 for regions defined by the boxes shown in Fig. 1a: the Amazon region of South America, the tropical south Atlantic, southern central Africa, northern central Africa and Indonesia. Histograms are normalized by the total number of TES observations in that region/month. Line plots show the fraction of TES observations where elevated PAN was detected (colored lines).
Figure 3. Monthly mean MEGAN biogenic isoprene emission rate for December 2005 (a) and December 2006 (b). Panel (c) presents the difference in average emission rates between December 2005 and December 2006.
Figure 4. Sensitivity of PAN to convection during December 2005 (a) and December 2006 (b) at 6 km, calculated as the difference in PAN between simulations with and without convection.
Figure 5. Scatter plots of CO vs. PAN, from TES data and from the GEOS-Chem model sampled at TES times and locations. Colored symbols show points where TES DOFS > 0.6 for selected regions (green crosses for the Amazon, blue crosses for the tropical south Atlantic, red diamonds for southern central Africa and purple squares for Indonesia). Gray symbols show points within any of the selected regions where TES DOFS < 0.6. Gray dotted lines show maximum and minimum values of ΔPAN/ΔCO enhancements in aircraft measurements of boreal fire plumes, as reported in Alvarado et al. (2010).
Figure 6. Model sensitivity of free-tropospheric PAN to injection height. (a) Difference between a GEOS-Chem simulation where all fire emissions over Indonesia are injected directly into the planetary boundary layer (PBL) and a simulation where 35 % of fire emissions over Indonesia are injected above the PBL, for October 2005. (b) Same as (a) but for October 2006. Scales are fractional difference. | 2018-12-15T05:05:20.255Z | 2016-12-22T00:00:00.000 | {
"year": 2016,
"sha1": "f3d2da5b90bd456c535ab59b026ef1866c332694",
"oa_license": "CCBY",
"oa_url": "https://www.atmos-chem-phys.net/17/6341/2017/acp-17-6341-2017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f3d2da5b90bd456c535ab59b026ef1866c332694",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
7089088 | pes2o/s2orc | v3-fos-license | Gender Differences in Language and Motor-Related Fibers in a Population of Healthy Preterm Neonates at Term-Equivalent Age: A Diffusion Tensor and Probabilistic Tractography Study
BACKGROUND AND PURPOSE: Sex differences in white matter structure are controversial. In this MR imaging study, we aimed to investigate possible sex differences in language and motor-related tracts in healthy preterm neonates by using DTI and probabilistic tractography. MATERIALS AND METHODS: Thirty-eight preterm neonates (19 boys and 19 girls, age-matched), healthy at term-equivalent age and at 12 months were included. TBV was measured individually. Probabilistic tractography provided tract volumes, relative tract volumes (volume normalized to TBV), FA, MD, and λ⊥ in the SLF, in the TRs, and in the CSTs. Data were compared by using independent t tests, and Bonferroni corrections were performed to adjust for multiple comparisons. RESULTS: We showed that healthy preterm boys had larger TBV than girls. However, girls had statistically significantly larger relative tract volumes than boys bilaterally in the parieto-temporal SLF, and in the left CST. Moreover, in the left parieto-temporal SLF, a trend toward lower MD and λ⊥ was observed in females. CONCLUSIONS: Structural sex differences were found in preterm neonates at term-equivalent age in both sides of the parieto-temporal SLF and in the left CST. Further studies are necessary to investigate whether these structural differences are related to later sex differences in language skills and handedness or to the effect of prematurity.
Substantial interest in sex differences in neural structures has been generated in recent years by observations of sex differences in cognitive functions. 1-3 A male advantage for spatial abilities has been widely observed in humans and other animals, 4 whereas a female advantage has been seen for verbal abilities such as verbal fluency and verbal memory in adult life. 5-7 This difference also has been found in children, with girls having better language development at an early age 7-10 and boys experiencing more frequent language impairments. 11,12 Therefore, postmortem pathologic and in vivo quantitative brain imaging studies have been looking for differences between males and females. In adults, several studies have shown that men have larger (by ∼10%) brains than women. 13-15 Interestingly, these differences are already present in children 16-18 and neonates. 19 In adults, the regional volumetric gray matter distribution patterns tend to show an enlargement in females when adjusting for brain size. 14,20-24 In children, findings of sex differences in relative gray matter volume have shown enlargement in females, most prominently in the temporal and parietal cortices. 25,26 Studies on sex effects on global and regional WM are controversial; both significant 22,27,28 and nonsignificant interactions 29,30 have been reported. It is possible that the measured WM volumes, as determined from conventional MR imaging, reflect changes in macrostructure only, and may not be sensitive to WM microstructure. 31,32 Such microstructural changes are within the reach of DTI, an MR imaging technique that allows studying the in vivo microstructure and the volume of the major WM tracts. DTI assesses and quantifies water diffusion at a microstructural level, given that water diffuses more easily in the direction of the fibers than orthogonally. 33-35 Diffusion indices, such as FA, MD, λ∥, and λ⊥, allow us to indirectly quantify brain microstructure. 36,37 Results for sex differences in diffusion indices in adults, either global or regional, have been inconsistent. One study showed no sex difference, 38 whereas others showed significant sex differences, but only when focusing on predefined brain regions, such as the frontal lobe or the corpus callosum. 39-42 Nevertheless, it should be noted that all these studies, using either ROI analysis or voxel-based morphometric techniques, have an error related to anatomic ambiguity in the ROI definition, WM segmentation, and other postprocessing steps such as spatial normalization and smoothing. 22,43 These methods focus on predefined brain regions but not on specific WM tracts. DTT provides a 3D reconstruction of specific WM tracts and is able to overcome these confounding effects. Moreover, to our knowledge, no diffusion imaging studies have yet investigated whether sex differences are present in neonates.
In this study, we investigated, by using DTI and DTT, whether sex-related differences were present in the language and motor-related fibers in healthy preterm neonates at term-equivalent time.
Subjects
Among preterm neonates born between June 2005 and June 2009 who underwent brain MR imaging to detect lesions related to premature birth, 44 78 preterm neonates with acceptable (see below) DTI were studied. The inclusion criteria for normality were as follows: 1) normal head circumference at birth (>5th and <95th percentiles); 2) 5-minute Apgar score >6; 3) lack of evidence for congenital infection or multiple congenital anomaly syndrome; 4) normal structural brain MR imaging as assessed by 2 board-certified neuroradiologists (D.B., P.D.); and 5) normal physical and neurologic examination at term-equivalent age and at 12 months corrected for GA as assessed by a board-certified neuropediatrician (A.A.). On the basis of these criteria, 28 neonates were excluded. Furthermore, 12 normal neonates were excluded to obtain sex groups of equal sample size that were matched for GA at birth and corrected GA at the time of MR imaging. Thirty-eight healthy preterm neonates (19 boys and 19 girls) were finally included in this study (Table). The study was approved by the ethics committee of our institution (references P2004/207 and P2009/234), and informed written parental consent was obtained for each participant.
No sedation was used, and the neonates were spontaneously asleep, positioned in a vacuum immobilization pillow to minimize body and head movements. Ear-muffs were placed to minimize noise exposure. Oxygen saturation and electrocardiography were monitored throughout the acquisition.
Data Postprocessing
Data analysis was performed by using FSL software. 45
Image Preparation
Image artifacts due to eddy current distortions and head movements were minimized by registering the DTI from 32 directions to the B0 images. 46 DTI images corresponding to directions with motion artifacts were excluded from further data processing. DTI was considered acceptable when <5 directions had to be excluded. Extraction of the brain parenchyma from scalp and skull was performed with the FSL Brain Extraction Tool; any small errors identified in the masks were manually corrected. 14 Maps of the diffusion indices were obtained by using the FSL Diffusion Toolbox. 47
Probabilistic Tractography
The bundles were reconstructed in each subject by a single investigator (Y.L.) by using multitensor probabilistic tractography. 31 Seed masks and waypoint masks were generated on color-coded FA maps, placed carefully by one radiologist (Y.L.) and checked by a second radiologist (D.B.). 48 The SLF was tracked separately in 2 parts 48,49: the frontoparietal SLF and the parieto-temporal SLF. For the frontoparietal SLF, a seed mask covered the frontal WM and the waypoint mask covered the frontoparietal WM; for the parieto-temporal SLF, the seed mask was the same as the waypoint mask of the frontoparietal SLF, and the waypoint mask covered the temporal lobe. 48 The TRs were studied separately in 4 subradiations 48: the ATR, the motor and sensory STR, and the PTR. A seed mask was positioned in the bottom of the thalamus, and a waypoint mask was positioned in the anterior limb of the internal capsule for the ATR; in the precentral gyrus for the motor STR; in the postcentral gyrus for the sensory STR; and in the occipital lobe for the PTR. The CST was isolated as a whole, by using a seed mask positioned in the cerebral peduncle and a waypoint mask in the precentral gyrus. 50 The original tracts were normalized by the total number of samples going from the seed mask to the target mask. 51 Finally, the obtained connectivity distributions were thresholded with a probability of 2%. 48,52,53 We calculated the TBV by measuring the volume of the voxels located in the brain mask. 14,54 To assess tract macrostructure, tract volumes and relative tract volumes (defined as the ratio between individual tract volume and TBV) were computed. The microstructure of the tracts was evaluated with diffusion indices (FA, MD, λ∥, and λ⊥) by using fslmaths. 48
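As an illustration of these post-tractography computations, the sketch below (our Python/nibabel reconstruction, not the authors' FSL scripts) normalizes a probtrackx-style connectivity map by the total number of samples, applies the 2% probability threshold, and expresses the tract volume relative to TBV; the file names follow FSL conventions, and the variable names are assumptions.

```python
import numpy as np
import nibabel as nib

def relative_tract_volume(fdt_paths_file, waytotal, brain_mask_file,
                          threshold=0.02):
    """Tract volume (mm^3) and its ratio to total brain volume (TBV)."""
    paths_img = nib.load(fdt_paths_file)            # e.g. "fdt_paths.nii.gz"
    prob = paths_img.get_fdata() / float(waytotal)  # normalized connectivity
    tract_mask = prob > threshold                   # 2% probability threshold
    voxel_vol = float(np.prod(paths_img.header.get_zooms()[:3]))  # mm^3/voxel
    tract_vol = tract_mask.sum() * voxel_vol
    brain_mask = nib.load(brain_mask_file).get_fdata() > 0
    tbv = brain_mask.sum() * voxel_vol              # assumes same voxel grid
    return tract_vol, tract_vol / tbv
```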
Statistical Analyses
All variables were analyzed with the SPSS software (SPSS, Chicago, Illinois). A 1-sample Kolmogorov-Smirnov test was performed to detect a possible departure from normality of our variables. Sex-related differences in the TBV, the volumes, the relative volumes, and the diffusion indices (FA, MD, λ∥, and λ⊥) of each tract were analyzed by using a t test for independent samples. Adjustment for multiple comparisons was performed by using the Bonferroni correction; 55 statistical significance was reached when P < .004. A trend toward significance was reported when P < .05.
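The comparison pipeline described above can be sketched as follows (an assumed reconstruction in SciPy rather than SPSS). The Bonferroni-adjusted threshold of P < .004 reported here is consistent with roughly a dozen planned comparisons (0.05/12 ≈ .004), which the sketch takes as an assumption via its measure list.

```python
import numpy as np
from scipy import stats

def compare_sexes(girls, boys, measures, alpha=0.05):
    """Independent-samples t tests per measure with a Bonferroni-adjusted
    alpha; `girls`/`boys` map measure names to arrays of per-subject values."""
    alpha_bonf = alpha / len(measures)  # e.g. 0.05 / 12 ~ .004
    for m in measures:
        g = np.asarray(girls[m], dtype=float)
        b = np.asarray(boys[m], dtype=float)
        # 1-sample Kolmogorov-Smirnov check for departure from normality
        ks_p = stats.kstest((g - g.mean()) / g.std(ddof=1), "norm").pvalue
        t, p = stats.ttest_ind(g, b)
        label = ("significant" if p < alpha_bonf
                 else "trend" if p < alpha else "ns")
        print(f"{m}: t={t:.2f}, p={p:.4f}, KS p={ks_p:.3f} -> {label}")
```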
Sex Differences in Principal WM Tracts
WM tracts related to sensorimotor and language functions are shown in Fig 1. Relative tract volumes were statistically significantly larger in females than in males (Fig 2) bilaterally in the parieto-temporal SLF (left, P Ͻ .001; right, P Ͻ .001) and in the left CST (P Ͻ .001). Moreover, trend toward larger tract volumes (Online Table 1) was found bilaterally in the parieto-temporal SLF (left, P ϭ .034; right, P ϭ .011).
A trend toward lower MD (P = .041) and λ⊥ (P = .033) in females was observed in the left parieto-temporal SLF (Online Table).
Discussion
In this in vivo brain MR imaging study, we investigated sex differences in the TBV and WM tracts with DTI probabilistic tractography in the language and motor networks in a population of healthy preterm neonates scanned at term-equivalent age. We found, like other studies in neonates 19 and in adults, 14,23 that at term-equivalent age healthy preterm male neonates had larger TBV than females. The original findings of our study were that female neonates had larger relative tract volumes bilaterally in the parieto-temporal SLF and in the left CST, with a trend toward lower MD and λ⊥ in the left parieto-temporal SLF after Bonferroni correction.
Previous studies have shown that the temporal cortex is larger in females than in males. This has been demonstrated in children by using structural imaging, 25,26 and also in adults through pathologic studies showing a larger planum temporale 56 and Heschl gyrus, 57 and a greater density of neurons 58 in females. Given that the parieto-temporal SLF is supposed to transmit auditory information from the superior temporal gyrus to the inferior parietal lobe, we suggest that our results may reflect an early established difference in favor of female neonates in the number or size of axons in these language-related regions.
The SLF is one of the slowest-maturing WM tracts, being not yet myelinated at birth. 59,60 Lower MD and λ⊥ are probably caused by a decrease in brain water content and an increase in membrane density, and they suggest an advanced premyelination stage characterized by proliferation and maturation of oligodendrocytes. 61,62 Therefore, we propose that this microstructural sex difference might be caused by an advanced maturation in the left parieto-temporal SLF in female neonates.
The finding of different relative tract volumes with no significant difference in diffusion indices is a feature with no straightforward interpretation. In the right parieto-temporal SLF, a larger relative tract volume associated with a trend toward a larger tract volume in females was found in the absence of a difference in diffusion indices: this might possibly reflect macrostructural changes (more axons at the same myelination stage). In the CST, a larger relative tract volume in females was observed together with no significant difference in either tract volume or diffusion indices, suggesting a similar maturation and number of axons in a smaller female brain. In other published series, differences in tract volumes were not always associated with differences in diffusion indices. 33,48,63,64 Moreover, we used probabilistic tractography, which relies not directly on diffusion index values but on the uncertainty of the orientation distribution function, enabling it to progress across regions with principal-direction uncertainty and through regions with crossing fibers. Therefore, in probabilistic tractography, volume measurement is not directly linked to diffusion indices. 51

Language acquisition and processing have shown sex-related differences in infants as young as 2 years old. 8,10,65 Maccoby and Jacklin 66 reported that girls outperformed boys during preschool and early years in articulation, length of sentences, verbal fluency, grammar, and spelling. When the California Verbal Learning Test was given to children between 5 and 16 years old, girls were found to use more semantic clustering, to recall and recognize more items, and to relate words together more as a recall aid than did boys. 67 In addition, language impairments have been found to occur more frequently in boys than in girls. 11,12 Because of its implication in language function, we suggest that the sex effect on parieto-temporal SLF relative tract volume and microstructure might explain the more rapid development of language skills hitherto reported in females.
Although we could not demonstrate a significant sex effect on the absolute volume of the CST, we showed that the relative volume of the left CST is larger in females. Interestingly, studies in adults have already shown, after adjusting for TBV, an increased volume 21 and gray matter concentration 14 in the precentral gyri in females. A relatively larger left CST in females might explain why meta-analyses have found more right-handed females than males in the general population. 68 However, the relationship between handedness and asymmetry in the adult brain is not established, because some studies have found such a relationship 69,70 but others have not. 71 Sex differences in the volume and microstructure of certain WM tracts, as observed in this study, might result from genetic factors as well as from effects of sex steroids on brain development, both of which are known to affect regional tissue composition. 72-74 Another hypothesis is that our results may have been influenced by the effect of prematurity. Indeed, even though the normality of our preterm population was based on robust structural and clinical criteria, as in previous studies, 48,52 we cannot exclude the possibility that the sex-related differences observed in the language and motor networks were caused by subtle cerebral lesions, because some studies suggest that preterm males may be more sensitive to brain injuries than females. 75,76 Therefore, it would be of interest to investigate whether these sex differences are also present in healthy term neonates.
Because the first years of life are perhaps the most dynamic phase of postnatal brain development, with rapid development of a wide range of cognitive and motor functions, 77 the link between structural sex differences at term-equivalent age and later functional differences should be interpreted with great caution. Longitudinal studies combining cognitive evaluation with structural and functional imaging may provide insights into the structure-function relationship underlying sex differences.
Another limitation of our study is that the reproducibility of mask placement was not assessed. Nevertheless, mask placements were checked by 2 radiologists, and in probabilistic tractography, with the normalization approach used here, the size of the seed and target masks can be disregarded. 51
Conclusions
In this DTI and probabilistic tractography study on healthy preterm neonates, we demonstrated that sex differences are present in language and motor-related tracts at term-equivalent age. Further studies are needed to investigate whether these structural differences are related to later sex differences in language skills and handedness or to the effect of prematurity. | 2017-06-22T18:32:26.827Z | 2011-12-01T00:00:00.000 | {
"year": 2011,
"sha1": "9305efb69ff5c9b6fa5993fceff583418885c32c",
"oa_license": "CCBY",
"oa_url": "http://www.ajnr.org/content/32/11/2011.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e5fde5cee50db67954084624cf2f2d815dc1b32c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255313754 | pes2o/s2orc | v3-fos-license | Twelver Shia in Edinburgh: marking Muharram, mourning Husayn
Research on the Shia in Scotland, and on their spaces of worship and gathering, continues to be under-represented in the field of research on Muslims in Britain. According to the 2011 census, there are just under 77,000 Muslims in Scotland, with Edinburgh, its capital, home to about 12,400. This article aims to fill some of these gaps by focusing on a Muharram procession emerging out of a Twelver Shia imambargah in Edinburgh. Drawing from fieldwork conducted from 2011 to 2013, the article provides an ethnographic account of this annual jaloos (ritual procession) in the Leith district, examines its evolution, and analyses the jaloos's signage and related proclamations in English and Urdu. In juxtaposing these elements, I argue that even as the procession is a normative means to commemorate and transmit the core values of the Twelver Shia through the events of Karbala, it actively engages with and responds to stereotypes about Muslims in the West and thus serves simultaneously as a wider public presentation on, and defence of, Islam. By closely examining these Muslims' public performance of Islam, this article offers a case study of an alternative narrative of Muslims in Britain and sheds new light on the rituals and experience of the Twelver Shia in Scotland.
to commemorate the martyrdom of Husayn b. Ali, grandson of the Prophet Muhammad and third Shia imam. 1 These 'permanent Shi'i ritual-oriented buildings' are sites 'for various stationary rituals, the departure and arrival point for processions, and the repository for symbolic objects used in different ceremonies' (Chelkowski 2015: 190). Drawing upon fieldwork from 2011 to 2013, it begins by providing an ethnographic account of this procession. Next, the article examines how the procession has evolved in the short time since it first took place, describing the community's demographics, its messaging and signage. Throughout, these descriptions are deliberate, for 'an adequate description is nothing less than a thorough analysis of a chunk of the world as it actually functions', and in this way they aim to bridge a 'descriptive gap' in scholarship that often 'seek[s] for the nature of things instead of their workings' (Dupret et al. 2012: 1). 2 In examining how the procession unfolds, the article also poses the question of what meaning the processionists are producing, both for themselves and for others. As such, the article then analyses these signs and messages as well as the procession's associated public speeches in English and Urdu. In juxtaposing these two elements, I argue that even as the procession is a normative means to commemorate and transmit the core values of the Twelver Shia through the events of Karbala, it actively engages with and responds to stereotypes about Muslims in the West and thus serves simultaneously as a wider public presentation on, and defence of, Islam. By closely examining these Muslims' public performance of Islam, this article offers a case study of an alternative narrative of Muslims in Britain and sheds new light on the rituals and experience of the Twelver Shia in Scotland.
Jaloos: A Muharram procession
Founded by the Wali-Al-Asir Trust in August 1989, the imambargah in Edinburgh's Leith district describes itself simply as a 'Shi'a Community Centre' and an 'Imambargah following the Shi'a Ithna Asheri (Twelver) school of thought, based in Edinburgh'. 3 It is indistinguishable from the other buildings around it on Great Junction Street. There are no domes or minarets or arches; indeed, there is nothing at all externally and stereotypically to indicate that Muslims regularly gather here for prayer and ritual practice. Once a year, however, during the Islamic month of Muharram, these Muslims make an unmistakeable, conscious and public declaration of their presence. This takes place in the form of what they call a jaloos (from Urdu, lit. 'procession'), which emerges from the imambargah on King Street, stepping off onto Great Junction Street and marching on to the statue of Queen Victoria at the Foot of Leith Walk for speeches, before doubling back for more private rites.
Historical background
Such processions constitute one of a series of mourning rituals held around the world by Twelver Shia Muslims during Muharram to commemorate the martyrdom of Husayn, the younger son of Ali b. Abi Talib, himself the cousin and son-in-law of the Prophet through his daughter Fatima, in Karbala, southern Iraq, in 680 CE. 4 The historical details below are worth revisiting briefly because they constitute fundamental tropes invoked and remembered publicly and repeatedly in the Muharram procession in Edinburgh.
'Ali was the fourth and last of what later Sunni Muslim tradition called the 'rightly-guided caliphs'. Upon his death in 661, Mu'awiya, then governor of the wealthy province of Syria, quickly consolidated his power, becoming the de facto fifth caliph of the now greatly expanded Muslim empire. Husayn took up arms to reassert his own rights to the caliphate when Mu'awiya died and his son Yazid succeeded him. However, his rebellion failed: Yazid's forces quickly intercepted Husayn and his army, cutting them off from supplies of fresh water at Karbala. Surrounded by desert, weak with hunger and parched with thirst, Husayn's followers dwindled in number. Over the next ten days, scores were killed: followers, friends and family, including his six-month-old son, Ali al-Asghar, shot, according to tradition, by an arrow through the neck as he was held up in the air in a desperate plea for water. On the tenth day of Muharram 680, Husayn was decapitated, his body trampled by horses. 5 The killing of the Prophet Muhammad's own grandson, let alone its brutality, shocked the populace, and for the Shia particularly it became a pivotal moment, infusing in them 'a new religious fervour' that consolidated their ethos and identity (Daftary 2013: 33). Thus, from early on in Islamic history, religious, social and political discord became associated with a particular kind of governance, leadership and authority.
Contemporary commemorations in Edinburgh
The first Muharram procession in Edinburgh took place in November 2011. When I witnessed it in November 2012, it comprised some 150 men and 50 women, including children, and was organised as a prelude to a larger procession in Glasgow a few days later. Some of the processionists held aloft standards on which hung black cloth banners in various shapes and sizes edged with golden tinsel and calligraphic embroidery in blue, green and red. Others carried simpler motifs in the form of silver and blue floriated flags. The tips of some poles ended in a stylised representation of a hand atop a crescent. On one flagpole, wrapped in layers of cloth, sat a gold pot, on top of which rested another stylised hand. 6 Both men and women carried these standards, although the women's were smaller.
Most of the processionists wore black or dark blue clothing comprising either plain salwar kameez (trousers and shirts) under grey, black or blue winter jackets. Women, who marched behind the men, almost uniformly sported black trousers or blue jeans covered either by dark manteaux that fell at least below the knee, coupled with a black hijab (sometimes banded green), or a full black chador. Among some women, a slightly more colourful kameez was evident. Several men also wore salwar kameez ensembles where only the salwar was white but the kameez was black or blue. Grey hoodies were also a popular choice among younger men, as were jeans, usually blue, occasionally beige. 7 Some of these men also walked barefoot. Like all the women, several men had their heads covered, usually with a woollen hat, although this appeared more a practical measure to keep out the cold than to fulfil any religious requirement. Only one or two men wore turbans, which identified them as clerics (mullahs) of the community. Among both sexes, quite a few had additionally donned fluorescent yellow vests, and it was they who flanked the other processionists, keeping them safe from traffic. A few, mainly children, wore slightly more colourful pinks, blues or browns. Many of these details offer a close ethnographic parallel with processions in Toronto, Canada (Schubel 1996: 198-201). In Edinburgh, as we shall see, they are also important for how they change and develop in the years that follow.
4 For a historical overview of these processions, see Chelkowski 1994. For an ethnographic account of these processions in London, see Spellman-Poots 2012. Bøe and Flaskerud (2017) provide details for processions in the hitherto unstudied contexts of Oslo and Bergen in Norway.
5 For a detailed account of the battle at Karbala, see Jafri 2000.
6 For a wide-ranging survey of this motif, generally known as the khamsa, but here likely representing the hand of Abu al-Fadl al-Abbas, half-brother of Husayn and a key figure of Karbala, see Suleman 2015.
The procession took its time to head back to the imambargah. It had five distinct phalanxes. The primary standard-bearers led the way, followed closely by a group of male singers, to whose songs and chants the rest of the processionists rhythmically struck their palms on their chests. Every few minutes, when the processionists stopped, the chorus formed a circle, which signalled to the men alongside or behind them the onset of a rather more elaborate and ritualised chest-beating that never failed to stop even the most hurried passers-by in their tracks for its distinct style and faster tempo.
Holding alternate hands high in the air, this third phalanx, comprising primarily young men, would bring them down on the opposite side of their chests while simultaneously bending their knees. This movement was usually accompanied by a sharp exhalation of the breath, 'Hu!', at each strike, which helped maintain the rhythm and so amplified the thumping of their hands on their chests that it could be heard on the opposite side of the street. Behind them, older men, some of whom also bore standards, kept up with the increased pace without, however, changing over to this more involved rite, which in the South Asian context is commonly known as matam. The women and teenage girls who made up the final phalanx behind them did likewise, engaging only in light tapping, never chest-thumping. Despite also holding aloft standards, the women were not organised into sub-groups and thus appeared undifferentiated from each other. They also did not sing. The children in the procession tended, unsurprisingly, not to observe these boundaries, and depending on their age and sex often milled back and forth between their parents or chattered amongst themselves. While some processionists looked forlorn, others smiled and exchanged pleasantries with each other. Yet others, mostly men, would periodically check their smartphones or use them to take pictures and videos of the procession itself. A number of them also wielded dedicated cameras and camcorders. Slowly, the processionists filed into the imambargah, thus marking the end of the public procession.
Public reaction
Public reaction to the processions in both 2012 and 2013 ranged from mild shock and bemusement to predictable mutterings about holding up traffic unnecessarily. Cars honked insistently and more than a few passers-by asked each other aloud what was happening. Despite the biting wind, people stopped to watch the processions more closely, snap photographs and videos, and accept flyers from the mourners. 8 The high street was bustling on both occasions. That the 2012 procession took place on a weekend and the 2013 one on a weekday made no difference either in its reception by potentially different members of the public or indeed in the level of participation and engagement by the community itself, which incorporated men, women and children, ranging in age from babies to individuals in their sixties. In other words, they represented a cross-section of the community (to the broad demographics of which we shall turn shortly) who had been deliberate about carving out time from either weekend leisure or workday responsibilities. For some, particularly the men, this was not just one day but potentially another too, given that a number also likely participated in the Glasgow processions.
If the passers-by were generally curious, the shopkeepers, store managers and assistants, whose businesses along Great Junction Street comprised fishmongers, convenience stores, corner supermarket chains, drapers, travel agencies, betting shops and fast food outlets, were distinctly and almost uniformly wary. As community members went up to businesses to hand out flyers, many kept their doors closed, and where some accepted the flyers, they did so heads cocked and with their bodies held close to the partially opened door, a ready hand on the door handle. This unease may partly be the result of perceived parallels with the Orange Order, which until recently conducted two annual parades in Leith. 9 Historically, the Orange procession or 'walk' was a public celebration of the anniversary of the Battle of the Boyne and 'often the harbinger of serious disorder', which 'angered and offended' Irish Catholics by its 'ritual displays of Protestant tribalism' (Marshall 1996: 12). As such, the retailers may have simply been exercising caution about the alien and unknown turning up on their doorsteps. Given the history of sectarianism in Scotland, the threat of violence, whether real or imagined, is hardly trivial, regardless of the religious tradition being represented. Ignorance, too, cannot be discounted as a factor: despite the handouts explicitly narrating the story of Karbala and the role of Husayn, several passers-by in 2012 stopped to ask me if the procession was about the Prophet Muhammad. 10
8 For a comparative Sufi zikr procession, see Werbner 1996.
9 The Order is named after King William III, the Prince of Orange, whose victory at the Battle of the Boyne in 1690 'secured the future of the Reformed Faith in Ireland' (Marshall 1996: 6). Primarily a working-class organisation and 'in effect organised militant Protestantism' (Marshall 1996: 9), the Order in Scotland dates back to the Industrial Revolution and to Irish migrant Protestants concerned about distinguishing themselves from their Catholic counterparts. According to Michael J. Rosie, the Order moved their march to Regent Terrace with the onset of tramworks in Edinburgh: 'An ageing membership valued the removal of a big hill from their parade (!) and they've never attempted to resurrect the "traditional" route. Orange parades in Edinburgh rarely get any press coverage, and very little is written on the [Order] in Scotland, let alone Edinburgh' (Personal email, 13 November 2014).
10 I also learnt some weeks later that one of the sergeants policing the procession had been asked by a member of the public if the group was protesting against the Christmas tree!
Development and change
Demographics
A full-length YouTube video of the first procession in Edinburgh in November 2011, uploaded by a community member (Sheikh 2011), contrasts starkly with my fieldwork in 2012 and 2013. Most apparent is that twice as many people participated in that first procession as in my ethnographic observations in subsequent years. An analysis of the video itself suggests several constituent elements of the processionists' identities.
In terms of their civic identity, the majority of processionists in 2011 were likely from Glasgow, where there is a larger population of Twelver Shia (and indeed of other Muslims more generally). This is borne out by the presence of many individuals in this footage who also appear in other videos of the community that are explicitly identified as Glaswegian, as well as by my own footage and engagement with the community in Edinburgh over the course of my research. 11 The processionists' linguistic origin and identity can be similarly discerned: over the general hubbub of the procession, one can hear the distinctive lilt of Persian being spoken by men and children amidst the mass of people otherwise speaking Urdu or English. In this regard, the Twelver Shia in Edinburgh comprise both Pakistani and Iranian diasporic communities, with the former constituting an overwhelming majority. My own interactions also suggest a small contingent of South Asians from parts of East Africa. Although transient university students make up some of the community, notably among the Iranians, all of these groups have settlement histories in Edinburgh that go back at least 30 years, often more. 12 In 2012 and 2013, only a handful of the processionists were Iranian, one of whom said to me that the procession was really a Pakistani affair, and that while theirs in Iran were rather different, it was important to show up to this one as a demonstration of solidarity. 13

Finally, with regard to gender, the (male) videographers seem more focused on capturing what is happening around the men. While women appear in the videos, they get much less airtime. Despite the difficulty of estimating the number of women in the 2011 procession, it is important to note that in successive years, while men constituted the bulk of the processionists, women made up a quarter of their ranks. In these years, while the men recited longer ritual chants and thumped their chests, the women, some of them pushing prams or buggies, were significantly quieter, almost silent, chanting 'Ya Husayn, Ya Husayn' ('O Husayn, O Husayn') so softly as to be heard only when the men were silent or if one were very close to them. In the context of Karbala, as in many others, battle and martyrdom are arguably gendered experiences that dialectically reinforce the role and performance of men over women, at least in public commemorative rituals. As Hegland notes of Twelver Shia women in Peshawar, Pakistan, they face 'symbolic complexes that reinforce men's role as repositories of holy power and succor' (1998: 240). While the women had a less performative role, they worked in concert with the men and were integral in disseminating the central message of the procession to the wider public. Certainly, as girls joined boys in distributing flyers and women bore banners with slogans, they demonstrated that the sexes were equal participants in the procession. More broadly, these Muslims provided a very clear example of veiled women in the West actively participating in the public sphere, contrary to stock tabloid notions of their passivity. 14 The figure of Zaynab bint Ali, the sister of Husayn, is an important historical example of such participation, and an almost certain inspiration for the female faithful in Edinburgh. 15 The eloquence of her complaint at Yazid's court in Damascus (recorded in Tayfur 1987) 16 after the massacre at Karbala is popularly held up as a model of speaking truth to power, invoked and remembered by men and women alike. Indeed, Zaynab's esteem is reflected in many of the elegies sung by the male processionists. 17 Of course, Zaynab is no ordinary woman. For many Twelver Shia, she is an extended member of the ahl al-bayt, literally 'people of the house', referring to the family of the Prophet. Her mother, Fatima, is properly of the household of the Prophet and progenitor of the Shia imams. In this regard, Fatima is revered not merely by virtue of her filial relationship with the Prophet, but in her own right (Pinault 1999: 72-75), as evidenced by the number of lectures extolling her uploaded to YouTube.
11 I was unable to determine why there was such a large Glaswegian contingent. Perhaps they were offering the kind of organisational experience familiar to larger groups, as well as moral support, in the form of making the community's presence in Edinburgh more visible for their first procession.
12 Specific details on the national or ethnic origins and intra-religious diversity of Scotland's Muslims are harder to come by. Seminal studies by Wardak (2000), Qureshi (2006) and Hopkins (2007), for example, focus on Sunni Pakistanis. Bonino (2017) offers valuable insights from a slightly more geographically diverse pool of interviewees, including those from East and North Africa. Drawing from the 2011 Census, both he and Elshayyal (2016) provide useful breakdowns and analyses of the ethnic origins of Muslims in Scotland, but these are constrained by the census categories for ethnicity. Little or no mention is made, therefore, of Iran and the Shia.
13 For an example in the Netherlands of how different Shia youth groups have used Dutch to address the challenges of the diversity of their ethnicities and national origins, see Schlatmann 2017.
Messaging
Aside from the make-up of the processionists, a second change had to do with the community's increasing efforts after 2011 to disseminate the message of the procession to the wider public. Whereas in 2012 only a handful of adults handed out flyers, several more did so in 2013, including, notably, boys and girls aged twelve and above. All of these pamphleteers actively went up not only to passers-by but also to shops and businesses on both sides of the street. As noted earlier, these flyers not only narrated the story of Karbala and the reason for the procession, but also pointed people to online resources such as the London-based website whoishussain.org, inaugurated in 2012 (Figs. 1 and 2).
The processions also appear, from associated YouTube videos and flyers in print and social media, to have been organised partly and jointly by different institutions and groups within the community, including the Imamia Islamic Mission, also known as the Wali-Al-Asir Trust, and itself the site of the imambargah; the Jafaria Foundation, which has its own centre just outside Edinburgh in Dalkeith; and the Edinburgh Ahlul Bayt Society, founded in 2011, which appears to have been incorporated into the Scottish Ahlul Bayt Society (SABS) from around October 2015, and which in October 2016 also included the Lady Sughra Society and the SABS Health Awareness Campaign. The Wali-Al-Asir Trust is a registered charity and parent organisation of the Imamia Islamic Mission. As noted on the Scottish Charity Register, the Trust aims:
(a) To advance community development by providing a community centre for social and religious activities to be carried out.
(b) To advance religion by providing a place for religious services, for the perpetuation and propagation of (Shia) Islam, within our community and to spread the Light of Islam and peace in the world.
(c) To promote equality, diversity, spiritual well-being, religious tolerance and harmony for the public benefit by fostering better relation between Muslim and non-Muslim communities.
(d) To promote daytrips, gatherings, meals for disabled aged and isolated members of the community. 18
19 For its part, the Scottish Ahlul Bayt Society aims 'to meet the needs of the Scottish Shia Muslim community and the breadth of society in general across the cultural, social, political and religious spectra'. 20 All three organisations reflect a largely Pakistani constituency, with smaller numbers of Iranians and East Africans of South Asian origin. While the Wali-Al-Asir Trust and the Jafaria Foundation are more internally oriented, the Scottish Ahlul Bayt Society additionally has an explicit outreach agenda, evidenced not least by an endorsement from Nicola Sturgeon, Scotland's First Minister, on its webpage. 21 Judging from a number of events I attended, it also appears to be run by a younger generation of community members, aged in their late 20s to their mid 40s.
18 The Wali-Al-Asir Trust was registered as a charity (SCO43534) on 1 November 2012. According to the Office of the Scottish Charity Regulator (OSCR), these objects are taken directly from the charity's constitution. See www.oscr.org.uk/search/charity-details?number=SCO43534, last visited 20 September 2018.
The precise networks of relationships between these groups are beyond the scope of this study, but they suggest that while religious allegiance in matters of interpretation of the faith may be pledged to key marja'-i taqlid in the conventional Twelver Shia fashion, there exist several diffuse and relatively decentralised associations and potentially even rival models of leadership and authority in terms of the social governance of the Edinburgh community.
Caption to Figs. 1 and 2: While the external endorsements on the left serve to universalise Husayn's martyrdom, the note on the right is explicitly localised, with no fewer than three references to 'neighbour', situating the commemoration within the local community.
Signage
The most visible change over the period of study, however, has been in the standards, banners and flags heralding the processions. These a'lam, as they are known collectively, underwent a major transformation in 2013, which took place across three registers.
The first was an increase in the number and colours of these flags: orange shades and purple hues now accompanied the blacks, reds and greens of the previous year. Correspondingly, what the processionists were wearing had become progressively darker and more monotone, even amongst the children. As such, the contrast between the black-swathed processionists and the flags they were carrying was all the more striking, reinforcing the visual and psychological sense that this was not a random group of protesters but a community bound by faith. Despite the bright colours and greater emphasis on the banners, it remained impossible to mistake this parade for a festival. There were no fancy costumes or bands playing joyful music; the elegies and chants were distinctly plaintive and mournful, the self-flagellation unmistakeable. Cementing this presentation was a second change: the introduction of large, plain white flags with red stains on them. Dramatic in their simplicity, they represented the blood of Husayn. The reasons for these changes are difficult to determine, but with the tentative success of the procession in 2011, organisers may have felt increasingly confident to inject a little more drama and flair in the years that followed, mapping onto practices back 'home'. In any case, and as Flaskerud notes in her visual analysis of Iranian parchams, or wall hangings commemorating the battle of Karbala, even as '[p]oetry and eulogies enhance a sad emotional temperament' (2010: 107), the 'visual language' of 'signifying devices: the iconographic sign, inscriptions and colour symbolism' (2010: 106) 'phrases a visual lamentation' (2010: 107). Much as with 'the manipulated voice of a storyteller and an elegist, … colour functions to instigate in the recipient sad emotions and mournful attitudes' (Flaskerud 2010: 107).
The third and most important development was the introduction of English signage. In contrast to previous years, 2013 saw the introduction of large, black horizontal banners, each held up at either end by a different individual. 22 The first of these banners to be unfurled (Fig. 3) carried a picture of a golden dome at its left and a minaret, also golden, at its right. Between these two images, which depict iconic elements of Husayn's shrine in present-day Karbala, in white sans serif letters, were the words: 'To me death is nothing but happiness and living under tyrants nothing but living in a hell.' 23 Minutes later, the women bringing up the rear of the procession raised a banner (Fig. 4) with an equally terse message, all in white except for the last word, which was rendered in red: 'Everyday is ASHURA & every land is KARBALA.' Two images, again elements from the shrine in Karbala, formed the backdrop of this banner: on the left, a massive blue arch with two minarets rising behind it; on the right, in close-up, another minaret, identical to the one in the first banner.
22 See Bøe and Flaskerud (2017) for examples of similar banners in Norway.
23 A detailed analysis of this and other signs follows in the next section.
Back at the front, two children walked hand-in-hand underneath another banner (Fig. 5), the older child holding aloft a pole about half his height, wrapped in white cloth, atop which rested a stylised gold hand. In a solid, white, sans serif font it read: 'Fight terrorism through justice do not pass a verdict relying on probability'. Within a short while, all the banners in English faced outward, parallel to the procession itself, helping onlookers read them better. Any question as to the identity of these people was addressed by an additional banner, with the same sans serif writing, emblazoned 'SHIA MUSLIM COMMUNITIES OF SCOTLAND Ashura Procession' (Fig. 6).
The drama that all of this creates has obvious parallels to Easter passion plays in other Western cityscapes, exemplified historically by Oberammergau, Germany, or given contemporary art-house treatment as in Jesus of Montreal (Arcand 2006). Chelkowski also notes that the 'similarities between the Muharram processions', as recorded in Safavid Iran, 'and the European medieval theatre of the Stations are obvious' (1977: 33). Edinburgh itself is no stranger to the passion play. The Princes Street Easter Play, for example, a community theatre production, has been putting on performances since 2005. Its 2014 production, The Edinburgh Passion, at Princes Street Gardens drew a crowd of 1500-2000. Focusing, predictably, on the referendum for Scottish independence, its stated aim was 'to reach people who might know very little of the original story and it seems to have worked well' (Princes Street Easter Play 2014). If this ignorance of the Easter passion, a fundamental Christian story, is credible within the Scottish context, let alone a wider Western European one, then common knowledge of an equivalent Muslim narrative, as told through the Muharram procession, is practically non-existent. There is, of course, an important caveat. While the jaloos re-enacts the lamentation processions and penitence of the eighth century, it is not the ta'ziyeh, the 'only indigenous drama engendered by the world of Islam' (Chelkowski 1977: 31), which is the passion proper, especially in the Iranian Shi'i context. Rather, the procession is a shorthand for the story, indicating it without actually performing it. As we shall see in the next section, participation in the jaloos serves two main functions. Firstly, through the signs and flyers, it presents a valuable opportunity to educate those unfamiliar with the story of Karbala and thereby potentially better communicate the community's history and values. Secondly, in the very act of processing as an act of Islamic worship, it also co-opts these spectators into joining believers to bear witness to its eschatological significance. Documenting the event was thus an important aspect of the procession, demonstrated by the obvious care that the traffic chaperones took not to block the view of those wielding the smartphones, cameras and camcorders mentioned earlier, even when they were being held by those of us who were not part of the procession at all. This documentation extends the idea of bearing witness: it is not only a record of the day of the procession, but whenever it is viewed, particularly by others online, that day as well as the original day of Ashura is remembered, and so one participates in the ritual anew. Given the number of days marking the deaths of various holy figures within the Twelver Shia tradition, the formative event of Karbala is never far from the 'collective memory' (Halbwachs 1992) and ritual calendar of the community. In 'doing da'wa' or 'spreading the message' in this Shia way, the specific story manifests an eminently relatable universal archetype: an inspired but subversive man who stands up against the status quo dies so abject a death that he becomes a tragic hero, with the promise and power of redemption embedded in each re-enactment, remembrance and commemoration. As Ayoub notes in his classic study of the events of Karbala, 'the literature which this popular piety has produced is vast, highly emotional and even fantastic, especially to the modern western reader' (1978: 7).
Reading the signs
All the participants in the procession already knew the story of Karbala; indeed, the previous nine nights inside the imambargah had been spent lamenting every tragic death of the family of the Prophet, recounted in graphic, mournful detail in the sermons delivered by mullahs in English, Urdu and Persian. So while the public penitence of the community is an integral part of the procession and its spiritual efficacy, all of the associated outward-facing English messaging is evidently directed externally. However, even in their didactic role, the messages are somewhat undermined by their oddness.
Take the first banner, 'To me death is nothing but happiness and living under tyrants nothing but living in a hell' (Fig. 3). To the uninitiated, even in plain English, the equation of death with happiness comes uncomfortably close to the kind of suicide-bombing language and logic that frequently assails us in media, new and old. 24 However, the statement is actually a translation of a hadith attributed to Husayn and used as a rallying catchphrase, as captured on publicity and marketing material as well as on social media, for Muharram commemorations not just in Edinburgh but in English-speaking Twelver Shia communities around the world. 25 A more scholarly treatment published in Qum, Iran, translates the hadith as 'I consider death as happiness and life with the wrong-doers as boredom' (Shu'ba al-Harrani 2000). This latter translation, in turn, is invoked in equally nuanced explications elsewhere. 26 But by and large, there seems to be a consensus on the first form as the hadith's English standardisation. For popular religious discourse, therefore, the first translation makes for a punchy insider slogan, however obscure and even potentially misleading its implications may be for outsiders. As such, what might appear to outsiders as a nihilistic community statement is to insiders an assertion of identity and meaning, rooted in a pivotal historical event.
While an increasingly secularised society may not normally associate death with happiness, the notion of 'living in a hell' retains its symbolic power. In the context of Karbala, Husayn's death is not meaningless. For believers, as evidenced by the plethora of hadiths that arose after it, it was foretold. Framed as part of a divine plan, Husayn's role was to be martyred, and this martyrdom helped spread the message (Pinault 1999: 71-72). In this way, the banner emphasises the importance of standing up and speaking out against tyranny and injustice, whatever the cost. It also headlines the context of the speeches we shall examine shortly.
24 This is quite aside from an uncanny and altogether unfortunate similarity with Jim Jones: 'But to me, death is not... death is not a fearful thing. It's living that's cursed' (quoted in Maaga 1998: 149).
25 At least in 2013, when I was doing the fieldwork, a Google search resulted in thousands of hits for the exact phrase in a variety of locations, screenshots of which, however, I neglected to take at the time. Continuing technological advances and optimisations in search algorithms (see Pariser 2012 and, most recently, Lanier 2018) make it difficult to replicate those search results. Repeating it nonetheless in October 2018 for the partial phrase 'to me death is nothing but happiness' resulted in 'about 3500 results (0.59 s)' across a variety of websites both institutional and personal, as well as a number of social media platforms, including Facebook, Flickr, Pinterest, Twitter and Tumblr, evidence of its enduring appeal. See, for example, https://twitter.com/Tahahaider_/status/1039808971232763904, 12 September 2018, for Twitter. On Facebook, see https://www.facebook.com/Syedbilgrami110/posts/for-me-death-is-nothing-but-happiness-and-living-under-a-tyrant-is-nothing-but-h/1557548330971233/, 27 September 2017.
26 See, for example, http://www.shiachat.com/forum/topic/235017985-please-translate-this-hadith-to-full-arabic-text/, where an 'Advanced Member' of the forum writes in: 'This is part of a larger narration, and obviously this is badly translated by whoever did it.' (Accessed 1 October 2016).
The second banner, which the women had unfurled, 'Everyday is ASHURA & every land is KARBALA' (Fig. 4), points to the kind of struggle that most Muslims will refer to by what in Western popular discourse is a dreaded word, namely 'jihad'. 'Jihad' ('struggle', 'striving', 'effort', and by these extensions the paradoxically reductive 'battle' or 'war' beloved of extremist Muslims and the far right) shares the same root as the term 'ijtihad', commonly translated in the context of the development of Islamic law as 'independent reasoning'. As Schubel's study also illustrates, this is a popular procession banner, and in Toronto, too, it was borne by women (1996: 200). And so, if 'every day is Ashura', the tenth day of Muharram, on which one of the two beloved grandsons of the Prophet, whom he would indulge by letting him clamber upon his back during prayer, was brutally murdered, then believers face an ever-present reminder to be mindful of the deliberate as well as the unthinking, the major and the minor, wrongs, injustices, infractions and unkindnesses they face or dispense every day. The second banner thus articulates a clear challenge and a public accountability for this principle in Islam: do the faithful bear witness to these struggles and rise to address them within themselves (the 'greater jihad' of most of Islamic history, theology and jurisprudence) as well as in others, or do they look away in weakness, fear and discomfort? Furthermore, if 'every land is Karbala', then, together with the fourth banner discussed below, the community is arguably reflexive: it is asking its members, and telling the wider public, what it is willing to volunteer and/or sacrifice to say no to these injustices, not only in the historical heartlands of Islam but also in the contemporary societies beyond, which for many Muslims are now also home. This is not to suggest that such sacrifice and volunteerism is, or should be, violent. In fact, any hint of violence is immediately rejected by the third banner, 'Fight terrorism through justice do not pass a verdict relying on probability' (Fig. 5). After the identity banner discussed below, it is probably the most comprehensible of the four English messaging banners. In asserting the importance of fighting terrorism while simultaneously cautioning against snap judgements, the banner also points to a larger faith community feeling under pressure (e.g. Abbas 2005). The underlying message here is that the simple fact of being Muslim should not automatically brand one, in the eyes of others, as an extremist, someone to be feared and loathed as alien and other. The phrase is also somewhat technical in its use of 'verdict' and 'probability', which are hardly slogan-friendly. A verdict suggests a final, authoritative judgement, rational and arrived at by due process. Juxtaposed with 'probability', it highlights the mutual exclusivity of the two concepts and the irony of conflating religious identity with extremism under the veneer of the law. Justice, therefore, becomes an important element of this discourse. This is not merely a this-worldly justice, the outcome of a rule of law that is dispassionate and logical. It is justice in its teleological sense, and in its specifically Shia conception, inextricably intertwined with love, devotion and loyalty (walaya) for God, the Prophet and his family, specifically his descendants, the imams, who issue from him. 27
Imbued with these ethics, this justice is a reminder of the divine rights and authority due to the family of the Prophet but usurped for political expedience and accompanied by unfathomable cruelty. In this regard, and as we shall see next, this is also part of an effort to publicly differentiate Shia Muslims from extremist forms of Islam. 28

As a statement of identity, the fourth banner, 'SHIA MUSLIM COMMUNITIES OF SCOTLAND Ashura Procession' (Fig. 6), is relatively straightforward. Yet it, too, reveals several points. Firstly, there is the explicit invocation of Shiism. Sunni events, at least in Edinburgh, do not identify themselves as Sunni; as a majority group, its members take the privilege of its proportion and normativity for granted. Being Sunni is being 'properly' Muslim in popular insider perception, and conversely, being Muslim is being Sunni. There is rarely a need to qualify it because it is the majority view. Being Shia, however, is a minority position, and explicating it as such on a banner suggests an element of necessary and deliberate distinction from the majority 'other'. 29

This fourth banner also refers to 'communities'. The phrase 'the Muslim community' is used widely by journalists, politicians and Muslims themselves. This usage in the singular, however, erases important differences in terms of heritage, country of origin, languages, beliefs, practices and, therefore, Muslim 'positions' on a variety of issues, including, for example, the practical (as opposed to the ideal) role and status of women, veiling, law, faith schools, iconography, etc. The plural on the banner, however, makes this diversity very clear, all the more striking given the numbers involved in the procession. These numbers are small enough for the encounters to have a real impact in terms of a dialectic understanding of one's own identity vis-à-vis the other. As we have already seen, the processions included Pakistani, Iranian and East African Twelver Shia. The banner thus acknowledges a real, meaningful and abiding encounter of the community with its own diversity, because there are ethnic, linguistic and national differences sheltering under the umbrella of an ostensibly single religious identity. A further difference that cuts across all of these categories is generational, for within these groups are also those who have acquired a Scottish identity by settlement or imbibed it through birth. 30 This is not to say such diversity is not evident in other religious spaces in Edinburgh such as the Central Mosque, 31 only that the larger numbers there mean that the differences tend to get diffused. This is because broadly similar smaller groups tend to congregate into larger normative, majoritarian ones. Therefore, encounters with difference in these larger groupings are less likely to pose doctrinal or practical challenges. As a long-standing white Scottish convert observed wryly to me about the Central Mosque, it is a great place for prayer, but not to talk about Islam in this way.

This brings us to the final point on reading the signs: the banner does not reference Edinburgh alone, but the whole of Scotland. As such, the procession incorporates other cities, notably Glasgow, as discussed earlier, and potentially smaller centres such as Dundee and Aberdeen too. 32 It does not, however, invoke the rest of the UK. Whether this is a function of Scottish nationalism and of efforts by the Twelver Shia to present themselves as part of these dynamics, or just a simple assertion of Scottish affiliation and identity, the important point is that it suggests a certain autonomy in relation to larger Twelver Shia institutions and organisations that are based primarily in England.
28 For an example of efforts at such differentiation among Shia Muslims in Belgium, see Lechkar (2017: 241), which focuses on crying as a 'specific Shi'a disposition … fundamental if one wants to be a "true" Shiite'.
29 The diversity illustrated in this kind of alternative and diasporic Muslim identity is important for the reasons outlined shortly. For an example of Muslim diversity, specifically as it relates to religiosity, see Gholami (2016), who examines it in relation to the understudied area of secularism in diasporic Muslim communities.
30 Bonino notes that 'Scottish Muslims feel more Scottish (24%) than English Muslims feel English (14%)' (2017: 67). Importantly, however, 'Muslims in Glasgow and Dundee … record higher feelings of belonging to Scotland and lower affiliations to their non-UK ethnic identities compared to Muslims in Edinburgh and Aberdeen' (Bonino 2017: 68-69). This is because Arabs comprise at least 15% of the local Muslim population in these two cities, and because of the turnover of people in these economic hubs. In Glasgow and Dundee, however, Pakistanis make up at least 50% of the local Muslim population (Bonino 2017: 68-69). There is no quantitative data available at present which allows for an analysis of Scottish identity and belonging vis-à-vis the intra-religious diversity of Scotland's Muslims, let alone the diversity within the specific Shia group under discussion.
31 Formally, the 'Mosque of the Custodian of the Two Holy Mosques & Islamic Centre of Edinburgh'.
There is, thus, a duality of messages: one that speaks to outsiders and another that speaks to insiders. There are disconnects, of course: outsiders arguably would not fully understand the messages directed at them. 33 In Toronto, too, Schubel notes that 'despite the attempts of the community to use the julus for education about … Islam, the press seemed more interested in asking questions about their reaction to the attempted Islamic coup that had just taken place in Trinidad. They were seemingly uninterested in the religious significance of the procession' (1996: 198). Bøe and Flaskerud (2017) record the same kind of press indifference to the Muharram procession rituals in Norway. Pinault (2001) also describes a similar 'ritual opacity' (Grimes 1990) for observers of an Indian diasporic procession in Chicago in 1994. Nonetheless, this duality is not limited to the messages on the banners. As we shall see in the next and final section, it is also reflected in speeches made in both English and Urdu, which not only elaborate upon these messages but also further extend the community's engagement and interaction with itself and with outsiders.
Public speeches, private meanings
The middle of the march from and back to the imambargah was marked by a stop at the square that sits at the crossroads of Great Junction Street and the Foot of Leith Walk. The processionists filed into the square, the young men spreading out in rough rows parallel to the Foot of Leith Walk, while the women stood behind them. The remaining men formed concentric half circles, clustering around the foot of the statue of Queen Victoria. Several processionists holding 'alams stood against the curved railing that demarcates the square from the crossroads. As before, these faced outward, clearly visible to both motor and pedestrian traffic. Even though traffic was flowing again, it was clear that there was a demonstration going on. All around the square, shoppers came in and out of stores, while other members of the public sat, stood, and milled about. After a further round of ritual chest-thumping by the younger men, several other men came forward to deliver speeches in quick succession.
The English speeches
In 2013, three speeches were delivered in English. Varying in accent and inflection, they reflected the blended identities of the speakers and, therefore, of the community as a whole.
32 A number of posters at the imambargah advertised these cities as sites for Muharram commemorations in 2014.
33 As in the perception noted earlier about the procession being a protest about Christmas trees.
More importantly, the speeches demonstrate how the community not only articulated these identities to itself, but also presented and represented them to an 'other'. In elaborating upon the banners' English messaging in the procession proper, they seamlessly invoked the formative history of the community, transmitted its values and traditions, and spoke to wider concerns of contemporary relevance to both Muslims and non-Muslims.
The first speech was delivered by a layman. It began explicitly with the narration of the story of Husayn and the fate of his immediate family ('… he sacrificed his whole family, and in particular, at the end, even a six-month-old son, Ali al-Asghar'). It then rapidly coupled a historical martyrdom with modern notions of freedom and human rights:

This gathering, this processing today, we are reminding ourselves, and our host nation, that when it comes to the freedom and human rights, we the Shia Ali ahl al-bayt … will always, stand shoulder to shoulder, in ensuring there is no encroachment, no adulteration, and no loss of human rights and freedom for the people, whoever they are, whatever they do.
In so doing, it also subtly made the point that despite seeming differences between religious ideas and secular ideals, they share the same values: even though 'the events of Karbala, and the sacrifice of Imam Husayn, for this freedom, took place 1400 years ago', nonetheless 'we the Shia … will carry on marching, and reminding everybody, of how precious this freedom is'. Importantly, it carved out a public space for the expression of religious identity but situated it within the larger discourse of human rights and 'freedom for the people, whoever they are, whatever they do'.
Embedded in this notion of identity was also the value of service to others in remembrance of the imam. A key example of the practical application of service was evident in the community's blood drives during this month, held under the wider auspices of the Islamic Unity Society, which organises the Imam Hussain Blood Donation Campaign, advertised on posters inside the imambargah. 34 These drives consciously transformed an older, controversial ritual of shedding one's own blood through violent self-flagellation, practised by some Twelver Shia as penitence for historically failing Husayn, into a life-giving act of real impact and material consequence in the present. 35 In doing so, they also explicitly invoked Qur'an 5:32 for sanction of the practice ('And whoever saves one life, it is as if he saved the whole of mankind'), reinforcing notions of Husayn's intercessionary and salvific capacities through, as mentioned earlier, the foretold spilling of his blood at Karbala. In this vein, the speech carried on to make abundantly clear that this precious freedom 'required the sacrifice, and the blood of Imam Husayn and his family', and that the community be given 'the tawfiq [strength/good fortune], to carry on, in the service and remembrance of Imam Husayn'.
34 See Spellman-Poots (2012: 46-48) for an account of the campaign in London, UK, and Bøe and Flaskerud (2017) for Norway, where it is sometimes 'documented with "selfies", thus making individual actions publicly known and part of a collective effort performed in Muharram' (195). In Greece, however, Chatziprokopiou and Hatziprokopiou (2017) relate the political rejection of such a campaign.
35 See Chelkowski (1994) for a general overview of the degrees of this 'self-mortification' ritual. Pinault (2001) provides ethnographic accounts of this practice and its contestation in a number of towns and cities in India. Spellman-Poots (2012) highlights the diversity of opinion on the issue among young Twelver Shia in London. More recently, Dogra (2017) illustrates how debates about its validity in its original contexts in Iran, Iraq and India have been transplanted to Twelver Shia communities of South Asian backgrounds in London and root an ongoing struggle for their authority and authenticity. In Greece, Chatziprokopiou and Hatziprokopiou (2017)
Several proclamations in Urdu and Arabic followed these statements before this speech came to an end. The first of these proclamations enjoined the processionists to recite the salawat. 36 The remainder mapped onto the Muslim creed or shahada, the declaration of faith, 37 before calling one last time for the salawat.
The second speech was delivered by a cleric and added detail:

1400 years ago our imam, in the desert of Karbala he remembered you. He said, 'O Shias, upon you is peace. O Shias, whenever you drink water remember my thirst, for I, was slaughtered, and wasn't given even a single drop of water. Not only was I slaughtered, horses ran over my body', and such was his state, that when Lady Zaynab came to his body, she didn't recognise him.
It also took a deeper historical turn. However, in its explication of hadiths of the Shia imams and Qur'anic verses, 38 it was clearly geared towards the community and, potentially, its younger members. Coupled with an emotional appeal and graphic first-person narration, this introspection became an integral and socialising practice of the faith, transmitting the community's specific ethos and values. Yet here, too, there was an element of reaching out beyond the community, for in narrating the sorrow of Husayn's son and successor, Imam Sajjad, the cleric referenced the Qur'anic story of Joseph and Jacob. Noting how Jacob lost his sight from crying at having lost only one of his sons, Joseph, 39 the Imam, said the mullah, had chided a disciple for being unjust by asking why he had cried for 14 years when 'in front of me, the kin of my family was slaughtered!' Here, the speech drew parallels with the suffering of characters from an even older story, familiar to believers in the other Abrahamic traditions, thereby seamlessly melding two different pasts. As before, the speech came to an end with a processionist enjoining the crowd to recite the salawat.
The third speech was delivered by another layman. It began by welcoming 'you, the sons of Husayn and daughters of Zaynab!' before specifically invoking the notion of 'not [being] a minority'. But, of course, they were: a crowd of 150 does not a majority make. Twice, the speaker proclaimed that they should 'not be undeterred [sic] by the like [later, 'lack'] of our numbers', and juxtaposed this with the defiant assertion that they were not a minority. In doing so, the speaker effectively reminded the processionists that even though few of Husayn's companions remained to fight by his side, 40 and thereby met their tragic ends, they ultimately possessed a moral and spiritual triumph over their executioners. Husayn had two options: One is to unsheath the sword, or the second is humiliation. We. will. never. be. humiliated. 41 Imam unsheathes his sword. And he took everything on. Imam said, Ali ibn Abi Talib said, 'Let them be. If, I, were to die, and be burned, and then, my ashes were to be scattered in the air, and if that, happened to me, 1000 times, then I will. At this point everyone joined in the call to Husayn, repeating it seven times, before trailing away.

36 Supplicating the Divine by invoking His attributes is part of the regular practice of the faith for Muslims of all stripes and colours, as is calling upon Him to bless Muhammad and his descendants. The latter practice, called salawat, is often understood as an appeal to God for Muhammad to intercede for his community. See also Rippin (2014). Shia Muslims have the additional recourse of calling upon their imams in this intercessionary capacity as designated inheritors of the mantle of the Prophet. Invoking Husayn, the martyr par excellence of Islam, thus comes naturally for many Muslims as part of their respective communities of interpretation.

37 To which the Shia under discussion here, as elsewhere, add the shibboleth, 'Ali, the commander of the faithful, is the friend of God'.

38 E.g. Qur'an 42:23 on kindness to the Prophet's family.

39 Qur'an 12:84
Taken as a whole, the three speeches summarised here were characterised by the religious, social and political concerns of the historical tragedy of Karbala. Importantly, however, they all made direct links between this history and action in the present day. This living and lived tradition is a key example of another kind of practice of faith; one where action in this world for reward in the next is not merely limited to rote ritual. Linguistically, the speeches also employed the technique of code-switching between Arabic, English and Urdu. This helped bridge the gap not only between past and present but also between internal and external audiences. As Bøe and Flaskerud observe for Muharram processions in Norway, these 'events perform a dual function as ritualised mourning and as public expressions of a Norwegian Shia identity, which is presented as inherently non-violent. The new Shia voice in the public urban space thus use well-established ritual practices as platforms for communication with fellow citizens' (2017: 193).
The speeches also invoke an abiding pledge of spiritual allegiance rooted in ideas of justice, and the importance of standing up against persecution, oppression, tyranny and injustice whatever the cost. These speeches are all the more relevant for the contemporary anti-Shia backdrop against which they were made, and of which Shia Muslims in Edinburgh or elsewhere could not have been unaware: Shia pilgrims undertaking the hajj to Mecca from America just a month earlier in October 2013, for example, were widely reported as having been attacked by English-speaking extremists and told, 'Our [holy pilgrimage] will be complete once we have killed you, ripped out your hearts and eaten them, and [then] raped your women', before shouting, 'We're going to do Karbala all over again' (Husain 2014).
The Urdu sermon
The English speeches were followed by a sermon in Urdu. Differing in tenor from the earlier speeches, the Urdu sermon was about twice as long as the three English speeches combined. Apart from Husayn, it invoked a number of figures, notably Zaynab, Fatima and Ali al-Asghar, amidst frequent interjections of 'Labbayk ya Husayn' and 'Be shak', that is, 'indeed, verily, truly'. Yazid also featured prominently and parallels were drawn between him and the Pharaoh ('They continue to come in varying guises. Recognise them. Test your mettle and humility'). It was also much more emotional, vehement, and vivid. This was reflected back by the processionists' responses, which were not only louder and more vehement than for the English speeches, but also more frequent. This is partly explained by the fact that the story and the message have a longer history of inhabiting languages like Arabic, Persian and Urdu than they have in English. As such, it is rhetorically more emotive and fiery in those languages. Its prose, cadence and style, too, are similarly affected. Since the necessity of communication in English is more recent, translating the message and adapting it to the rhetoric of English is understandably harder. Nonetheless, both kinds of speeches were characterised by a staccato delivery, dramatic pauses, and slow, long-drawn-out inflections.
There were also repeated supplications and emotional exhortations which elicited amens (illahi ameen). Much reference was made to 'the world' (duniya) and 'people' (duniya walo, lit. 'people of the world'), to justice, to tyranny and oppression, to innocence, good and evil, as well as the necessity of bearing witness to injustices, past and present. There were also assertions of the elevated role and nature of Husayn ('The protector of God's Oneness is Husayn. The second name for justice is Husayn … The second name for prayer is Husayn. The second name of crying is Husayn. If it were said of the hajj that it is Husayn, then Husayn is the hajj and that is the truth. All that is good in the world is of Husayn'). 42 That these sentiments were given public expression is surprising, particularly given traditional Sunni discomfort with this idea. Nonetheless, the value of Husayn's sacrifice is made very clear: his blood and death continue to give life, and so failure is sublimated into success. As with the English speeches, the Urdu sermon came to a close with the same proclamations, followed by prayers for the prosperity of the followers of Husayn, and the ruin of the followers of Yazid, before finally ending with the salawat.
It may be argued that being extempore speeches, they should not be read too closely. However, precisely because they were public and likely unrehearsed, they are more revealing. They were pitched at varying levels, to different audiences and represented overlapping identities and ways of being. This public-private dichotomy and layering of the procession is subversive because the duality of messages serves both insiders and outsiders. Yet, even as it steeled members of the minority community against potentially negative reactions from the majority, its relative safety also reinforced the rightness of the procession and its associated rituals. In turn, whatever the degrees of their ritual opacities (Grimes 1990), this allowed the community to express itself freely thereby increasing its confidence in its own public identity and the presentation of itself to an internal as well as external 'other'.
Conclusion
Much research remains to be done on the Shia in Scotland and on their spaces of worship and gathering. In attempting to fill this gap, this article has discussed an alternative example of a Muslim religious space in the form of an ethnographic study of a Muharram procession in a public street in Edinburgh, Scotland. It demonstrated how the procession provides a sense of continuity of tradition in the West, fulfilling not only a keenly-felt ritual requirement, but also the community's aspirations for public visibility and recognition. Furthermore, it traced the development and change in these aspirations in the form the procession took over the years to educate and engage with the wider community within which it lived. Although the procession is clearly an expression of Shia ritual practice, it was not explicitly presented in opposition to the majority Sunni interpretation of Islam. Nonetheless, it made visible, if not comprehensible, a ritual practice that is of fundamental importance to a significant minority of Muslims, but is practically invisible or unknown to the wider public. By analysing the public speeches made during the course of the procession, this study also demonstrated the duality of its messages and audiences. This duality speaks to insider and outsider, to the past as well as the present. In doing so, a formative historical event in the past is relived and revivified to make sense of the anxieties and uncertainties of the present, and to produce a space where tradition and modernity coincide to create, and sustain, faith. | 2023-01-01T15:12:21.169Z | 2018-12-27T00:00:00.000 | {
"year": 2018,
"sha1": "5a9f61aff570120520938522af9dc0e765ddcb3f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11562-018-0432-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "5a9f61aff570120520938522af9dc0e765ddcb3f",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
270616200 | pes2o/s2orc | v3-fos-license | Classification and Treatment Strategies of Tibial Tubercle Fractures in Adults
Objectives The tibial tubercle is a crucial player in maintaining the structural integrity and functional stability of the knee joint. Currently, there is no standardized protocol for the classification and treatment of tibial tubercle fractures in adults. This study analyzed the incidence and treatment strategies of tibial tubercle fractures in adults according to the four-column and nine-segment classification system. Methods Data of patients with proximal tibial fractures involving tibial tubercle fractures who were treated at our hospital from August 2007 to March 2023 were retrospectively reviewed. The fractures were classified using the AO/OTA classification and four-column and nine-segment classification systems, and the treatment protocol (surgically treated or conservatively treated) was recorded. The number and distribution proportion of patients were counted. A two-sided t-test was conducted to determine the significance of differences between genders and sides. Results In total, 169 tibial tubercle fractures were found in 1484 proximal tibial fractures. According to the AO/OTA classification, seven of the 169 patients (4.1%) were type A, 36 patients (21.3%) were type B, and 126 patients (74.6%) were type C. According to the four-column and nine-segment classification, type 1 cleavage without free fragments was the most common type of fracture (93/169, 55.0%), followed by type 2 dissociative segmental fragments (48/169, 28.4%) and type 3 comminuted fractures (28/169, 16.6%). Overall, 139 of the 169 proximal tibial fractures with tuberosity involvement were treated surgically. Among them, additional fixation of the tubercle fragment was performed in 52 fractures. Conclusion The incidence of tibial tubercle fractures involved in proximal tibial fractures was approximately 11.4% (169/1484) in adults, and approximately one-third of the tubercle fragments required additional fixation (30.8%, 52/169). The injury types in the four-column and nine-segment classifications are helpful for accurately judging and making treatment-related decisions for tibial tubercle fractures.
Introduction
The tibial tubercle is an oblong bony elevation on the proximal, anterior aspect of the tibia and is the tibial insertion of the patellar ligament. It is a crucial player in maintaining the structural integrity and functional stability of the knee joint. Tibial tubercle fractures involve partial or total impairment of the knee extension function. The repair of this fracture is beneficial for the recovery of the knee extension function. If tibial tubercle fractures are not treated properly, complications such as knee pain, stiffness, and dysfunction can occur. These fractures can occur alone or can be involved in proximal tibial fractures. The size and morphology of the tibial tubercle fragment vary considerably. Judging the injury type of this fragment helps explain the injury mechanism and formulate repair strategies.
Several classification systems have been proposed for tibial tubercle fractures in children [2-5]. However, they are unsuitable for adults [7-9]. In the AO/OTA classification, isolated tibial tubercle fractures are encoded as 41-A1.2 [12]. In 2018, Yao et al. proposed the four-column and nine-segment classification of tibial plateau fractures. The tibial plateau and proximal fibula were divided into four columns: medial (segments a, b), intermedial (segments c, e, f), lateral (segments g, h) and fibular (segment i+), which were further subdivided into nine segments (Figure 1). The four-column and nine-segment classification named the tibial tubercle as segment c of the intermedial column and categorized tibial tubercle fractures into three injury types [13]. This classification system appears to be precise and comprehensive, but a detailed summary of its clinical application is lacking. This study analyzed the tibial tubercle fractures in adults according to the four-column and nine-segment classification. The purpose of this study was twofold: (i) analyze the proportion and distribution of tibial tubercle fractures; and (ii) explore treatment strategies for diverse categories of tibial tubercle injuries.
Patients
After approval was received from the ethics review committee of the hospital, this study retrospectively analyzed the medical records and computed tomography (CT) images of all proximal tibial fractures between August 2007 and March 2023 from the Affiliated People's Hospital of Jiangsu University. Patient inclusion criteria were proximal tibial closed fractures, age of more than 18 years, no congenital deformities, availability of adequate imaging data, and no history of metabolic bone disease or knee surgery. Based on these criteria, 1472 patients with 1484 affected knees were included.
Assessment of Morphology
Of the 1484 affected knees, 169 knees exhibited tubercle fracture involvement. All tibial tubercle fractures were classified according to the AO/OTA classification and the four-column and nine-segment classification. Based on the CT imaging, the involved column/segment, injury type, and tibia plateau injury index (TPII) of each patient was retrospectively analyzed. The treatment of the tibial tubercle fractures was recorded. The selection of treatment methods was based on the three subtypes in the four-column and nine-segment classification.
According to the AO/OTA classification, the proximal tibial fractures were classified into three types: 41.A (extra-articular fracture), 41.B (partial articular fracture), and 41.C (complete articular fracture). The AO classification does not provide a detailed description of tibial tuberosity fractures. According to the four-column and nine-segment classification system, the tibial tubercle fractures were classified into three types. The three injury types included: type 1-cleavage without free fragment; type 2-dissociative segmental fragments; and type 3-comminuted fracture. In addition, as a part of the knee extension device, we included patellar ligament tear (EX1) and the inferior patella avulsion fracture (EX2) as an additional part of the study (Figure 2). TPII is equal to the number of injured column(s) plus the number of segment(s). The complexity level of TPF was classified into three grades: mild comminuted (TPII: 2-5), moderate comminuted (TPII: 6-9) and severe comminuted (TPII: 10-13). Three observers, namely two orthopedic trauma surgeons and a radiologist, reviewed the x-ray and CT scans of the fractures. The classification was unanimously finalized after review by the three observers. No conflict of interest was observed between the observers and patients.
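Since the TPII arithmetic above is simple counting, it can be encoded directly; the sketch below is illustrative only (function and variable names are ours, not the authors'), assuming the injured columns and segments are supplied as sets:

```python
def tpii(injured_columns: set, injured_segments: set) -> int:
    """Tibia plateau injury index = number of injured columns + segments."""
    return len(injured_columns) + len(injured_segments)

def complexity_grade(index: int) -> str:
    """Map a TPII value onto the three comminution grades defined above."""
    if 2 <= index <= 5:
        return "mild comminuted"
    if 6 <= index <= 9:
        return "moderate comminuted"
    if 10 <= index <= 13:
        return "severe comminuted"
    raise ValueError("TPII outside the 2-13 range of the classification")

# Hypothetical fracture involving two columns and three segments: TPII = 5.
print(complexity_grade(tpii({"intermedial", "lateral"}, {"c", "g", "h"})))
```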
Statistical Analysis
Statistical analysis was performed using IBM SPSS Statistics 25.0 (SPSS Inc., Chicago, IL, USA). Qualitative and quantitative data are presented as n (percentage) and mean ± SD, respectively. A two-sided t-test was conducted to determine the significance of differences between genders and sides. p < 0.05 was considered to indicate statistical significance.
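For readers reproducing this comparison outside SPSS, a hedged equivalent in Python could look as follows; the values below are invented for illustration and are not the study's data:

```python
from scipy.stats import ttest_ind

# Invented patient ages for two groups; ttest_ind is two-sided by default.
male_ages = [55, 61, 48, 52, 59, 66, 47]
female_ages = [50, 57, 46, 63, 54, 58, 49]
t_stat, p_value = ttest_ind(male_ages, female_ages)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```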
Demographic Characteristics
In total, 1472 patients with 1484 knees with proximal tibial fractures were retrospectively analyzed. Among them, 169 knees had tibial tubercle fractures (169/1484, 11.4%). These 169 patients included 64 women and 105 men. The mean age of the patients was 53.5 ± 17 years (Table 1).
Treatment Strategies
Among the 169 cases of tibial tubercle fractures, 139 cases of proximal tibial fractures were treated surgically, and 52 cases of tibial tuberosity fragments received additional fixation. Over a mean follow-up of 9.3 months (range, 6-12 months), all patients achieved osseous union. There were no complications such as knee joint infection, stiffness, or adhesion. In type 1 injury, 68 of the 93 cases were treated surgically, of which 14 cases received additional tubercle fragment fixation. In type 2 injury, 43 of the 48 cases were treated surgically, of which 24 cases received additional tubercle fragment fixation. In type 3 injury, all 28 cases were treated surgically, and among them, 14 cases received additional tubercle fragment fixation.
Discussion
We here investigated the incidence and treatment strategies for tibial tubercle fractures in adults according to the four-column and nine-segment classification. The widely-used classifications of children's tibial tubercle fractures, summarized in a systematic review of the literature [4,5], are essentially improvements on the Salter-Harris classification of epiphyseal injuries [14]. As children grow into adults, the proximal epiphysis of the tibia matures, and the distribution of fracture lines and the morphology of tibial tubercle fragments change significantly. The classifications for epiphyseal injuries of the tibial tubercle are therefore no longer applicable in adults. For the tibial tubercle in adults, the AO classification, the Schatzker classification, and the three/four-column classifications do not highlight tibial tubercle injury [6,8,9,11]. The four-column and nine-segment classification was proposed based on the morphological changes of the proximal tibia in adults, which is why our study adopted it.
Incidence
In adults, tibial tubercle fractures were observed in 11.4% (169/1484) of proximal tibial fractures. In our previous study, on plotting 3D fracture line heat maps of proximal tibial fractures, we found that the frequency of tibial tubercle (segment c) involvement was lower (cold area) than that of the articular cartilage and anterior/posterior cruciate ligament (ACL/PCL) tibial attachments [15]. The sample size used here was almost double that of the previous study (1484 vs 766), but the incidence of tibial tubercle fractures remained consistent with that in previous studies (11.4% vs 11.9%) [15]. Maroto et al. retrospectively investigated 392 bicondylar fractures of the tibial plateau, of which 85 were tibial tubercle fractures (21.6%) [16]. Our cohort had more isolated avulsion and unicondylar fractures, and therefore, the incidence of tibial tubercle fractures was lower than that reported in Maroto et al.'s study. Based on the AO/OTA classification, we found that type B and type C cases accounted for the majority of the fracture injuries (95.9%, 162/169). This indicates that higher-energy trauma is more likely to cause tibial tubercle injury.
Treatment Strategies
Tibial tubercle injury can be caused by knee joint extension or flexion. Tibial tubercle fixation is essential for the normal physiological activity of the knee extension device [18,19]. Complications commonly observed after surgical fixation of tibial tubercle fractures include bursitis, compartment syndrome, and refracture [5]. Planning personalized treatment strategies according to the variable injury morphology of the tibial tubercle is therefore more reasonable.
According to the four-column and nine-segment classification, type 1 (93/169, 55.0%) was the most common tibial tubercle fracture. This type can arise from an impact or traction applied to the fracture site. In general, conservative treatment is adopted for type 1 when the tubercle fragment is stable. Screw fixation can be employed for unstable bone fragments. In the cohort, 14 of 93 type 1 cases underwent additional tubercle fixation (Figure 3).
Type 2 dissociative segmental fragments were highly correlated with patellar ligament tension; this was mainly a tension injury. In general, the broken tibial tubercle is repaired and fixed to the posterior tibial cortex with one or more lag screws. MacDonald et al. achieved excellent results by using lag screws for the treatment of tibial tubercle avulsion fractures [20-23]. Conservative treatment can still be chosen when the displacement is small. When the posterior cortex of the tibial plateau is broken and screw fixation is unsuitable, wire strapping of the fracture site can also produce good results [24]. When a bicortical screw is placed, a popliteal neurovascular injury may occur because of the penetration of the screw into the posterior tibial cortex. To reduce the risk of posterior neurovascular injury, we recommend an oblique direction (anterior-lateral-superior to posterior-medial-inferior) for drilling and placing the screws (Figure 4). Using a short unicortical screw can avoid this serious complication, albeit with a reduction of pull-out resistance. In our cohort, 24 of 48 type 2 patients were treated with lag screws. Figure 5 presents one patient with a type 2 injury in whom the tibial tubercle fracture was fixed with lag screws.
In this study, the incidence of type 3 injury, that is, comminuted fracture of the tibial tubercle, was low (28/169), and this injury was mostly caused by extensive direct violence. Type 3 fractures are complex, and no unified fixation scheme is available for these fractures. Type 3 injuries can be categorized as mild and severe comminution based on the bone fragment size. Fixation is relatively easy in mild comminution of the fracture site, whereas severe comminution requires enhanced repair. In our cohort, 14 of 28 type 3 patients were treated with additional tibial tubercle fixation. Figure 6 presents a type 3 patient in whom the fracture was fixed through wire strapping. According to Rana et al., because the tibial plateau with a tibial tubercle fracture is rare, imaging should be used to ensure that it is not missed [25]. In complex tibial plateau fractures combined with tibial tubercle injuries, the focus is often only on articular surface reduction, leading to the neglect of tibial tubercle fractures and missed diagnosis.
Additional Part of the Study
As a part of the knee extensor unit, we included patellar ligament tear (EX1) and the inferior patella avulsion fracture (EX2) as an additional part of our study. EX1 (patellar ligament tear) is caused by excessive flexion and pulling of the patellar ligament. A complete patellar ligament rupture needs to be repaired, and an early repair will achieve good results. When EX1 injuries are missed, second-stage reconstruction may be required (Figure 7). In an adult case of tibial tubercle fracture combined with patellar tendon avulsion, Woolnough et al. [26] achieved good therapeutic results by using transosseous sutures through a slotted plate. The inferior patella avulsion fracture (EX2) associated with a proximal tibial fracture is relatively rare. In our previous study, the incidence of EX2 with a proximal tibial fracture was approximately 1.4% (18/1253) [27]. In the present study, this incidence decreased to approximately 1.3% (19/1484). The injury mechanism of an inferior patella avulsion fracture (EX2) was similar to that of EX1. An inconspicuously displaced small fragment was treated conservatively, while sizable fragments required surgical fixation [29-31]. A biomechanics study reported that hollow nails combined with steel wire provide the best stability [28]. In our study, the fragment from the inferior patella avulsion fracture was generally small. The predominant treatment modalities employed included conservative management, sutures, and anchors (Figure 8).
Compared with previous classification systems [7-9], the four-column and nine-segment classification system can effectively explain the injury types and mechanisms of tibial tubercle fractures. This classification is a pioneering approach. The precision and comprehensiveness of the four-column and nine-segment classification system are significantly higher than those of alternative classification systems. Notably, misdiagnosing an unfused epiphysis as a fresh tibial tubercle fracture should be avoided. Figure 9 shows three cases of the unfused epiphysis of the tibial tubercle.
Limitations and Strengths
This study has several limitations. First, this study was conducted in a single trauma center, and the sample size was constrained. A study involving multicenter samples will yield a more accurate incidence. Second, the EX1 incidence was disregarded because the patients did not undergo magnetic resonance imaging (MRI). MRI would have provided further details about soft tissue injury. Third, the relationship between the injury type of the tibial tubercle and the prognosis needs to be further clarified.
This study has several advantages. First, this research employed an innovative classification system for tibial tubercle fractures. Second, it utilized the largest global database of proximal tibial fractures. Lastly, this study provided a comprehensive analysis of the mechanism and treatment strategy of tibial tubercle fractures.
Conclusion
In this study, the incidence of tibial tubercle fractures with proximal tibial fractures was approximately 11.4% (169/1484) in adults, and approximately one-third of the tubercle fragments required additional fixation (30.8%, 52/169). The five injury types in the four-column and nine-segment classification are helpful in the accurate judgment and treatment of tibial tubercle fractures.
FIGURE 1 The division of the four-column and nine-segment classification (first published in Yao et al. [13]). (A) The tibial plateau and proximal fibula were divided into four columns and nine segments. (B) The nine segments were: anteromedial segment (a), posteromedial segment (b, attached by the medial collateral ligament [MCL]), tubercle segment (c, attached by the patellar ligament [PL]), bare area (segment d, between the top level of the tubercle and the insertion of the articular cartilage and anterior cruciate ligament [ACL]), median segment (e, attached by the ACL), posteromedian segment (f, attached by the posterior cruciate ligament [PCL]), anterolateral segment (g), posterolateral segment (h), and fibular segment (i, attached by the lateral collateral ligament [LCL] and other structures).
FIGURE 3 Preoperative computed tomography (CT) of five cases of typical type 1 tibial tubercle injury. (A-E) All fractures had cleavage without a free fragment.
TABLE 1
Watson-Jones initially classified tibial tubercle fractures into three types (type I, an avulsion fracture of the most distal portion of the ossification center of the tuberosity; type II, epiphyseal disruption occurring at the normal site of the junction of the ossification centers of the tuberosity and the proximal end of the tibia; and type III, a fracture line of the tuberosity physis propagating into the main tibial epiphysis) [1]. In 1980, Ogden et al. revised the Watson-Jones classification and divided each type into A and B [2]. In 1985, Ryu and Debenham added tibial tubercle fractures extending to the posterior cortex as type IV [3].
FIGURE 2 The morphology of tibial tubercle injury in the four-column and nine-segment classification and related treatment.

In 2012, Pandya et al. classified these special fractures into four distinct fracture patterns (tubercle youth, physeal, intra-articular, tubercle teen) based on 3D physeal closure. In 2016, Pretell-Mazzini et al. reported a systematic review of the literature summarizing the above classifications [4,5]. | 2024-06-21T06:17:27.325Z | 2024-06-19T00:00:00.000 | {
"year": 2024,
"sha1": "65102e0b518ee057ab4bfcf2ad259c0191b5833f",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/os.14122",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4a19d9244c9567e3b82717f192226f21bfa29d1",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
87259154 | pes2o/s2orc | v3-fos-license | Porphyromonas, a potential predictive biomarker of Pseudomonas aeruginosa pulmonary infection in cystic fibrosis
Introduction Pseudomonas aeruginosa pulmonary infections are the primary cause of morbi-mortality in patients with cystic fibrosis (CF). In this cohort study, the objective was to identify candidate biomarkers of P. aeruginosa infection within the airway microbiota. Methods A 3-year prospective multicentre study (PYOMUCO study) was conducted in Western France and included patients initially P. aeruginosa free for at least 1 year. A 16S-targeted metagenomics approach was applied on iterative sputum samples of a first set of patients (n=33). The composition of airway microbiota was compared according to their P. aeruginosa status at the end of the follow-up (colonised vs non-colonised), and biomarkers associated with P. aeruginosa were screened. In a second step, the distribution of a candidate biomarker according to the two groups of patients was verified by qPCR on a second set of patients (n=52) coming from the same cohort and its load quantified throughout the follow-up. Results Porphyromonas (mainly P. catoniae) was found to be an enriched phylotype in patients uninfected by P. aeruginosa (p<0.001). This result was confirmed by quantitative PCR. Conversely, in patients who became P. aeruginosa-positive, P. catoniae significantly decreased before P. aeruginosa acquisition (p=0.014). Discussion Further studies on replication cohorts are needed to validate this potential predictive biomarker, which may be relevant for the follow-up in the early years of patients with CF. The identification of infection candidate biomarkers may offer new strategies for CF precision medicine.
Introduction
Respiratory polymicrobial infections play a major role in cystic fibrosis (CF) progression, and the acquisition of bacterial pathogens during the course of the disease is now well described. After ~25 years of age, the establishment of CF pathogens is usually complete, P. aeruginosa being the most predominant species in the CF lung. P. aeruginosa has a negative impact on pulmonary function, promoting more frequent acute exacerbations. P. aeruginosa is linked to a worsened prognosis for patients with CF; they have a decreased life expectancy and experience a more rapid decline in pulmonary function compared with non-colonised patients. At the stage of chronic infection, eradication of the pathogen is impossible; this places all the hopes of treatment on the early stage of the infection. Indeed, the chances of successfully eradicating P. aeruginosa are greatest in the early stages of P. aeruginosa colonisation. Improvement in the median survival of patients with CF is correlated with early antibiotic therapy in patients colonised with P. aeruginosa, the eradication success being essentially dependent on how early P. aeruginosa is detected. 1 As early P. aeruginosa pulmonary infection is completely asymptomatic in most cases, 2 monitoring is based on systematic microbiological analysis. To this aim, P. aeruginosa quantitative PCR (qPCR) detection was shown to be relevant in patients' follow-up. 3 For the coming years, one challenge is to decipher the factors involved in the early onset of P. aeruginosa colonisation. Demographic and environmental factors were shown to increase the risk of P. aeruginosa acquisition, but are not precise enough to predict the risk of early P. aeruginosa colonisation. 4 In the framework of the present study, we hypothesised that the lung commensal microbiota could be associated with early P. aeruginosa colonisation in CF. In the precision medicine era, the aim of this study was to find biomarkers for providing close monitoring to CF patients more at risk of early P. aeruginosa colonisation and improving the clinical benefit of successful early P. aeruginosa eradication.

Figure 1 Two-step approach of the study. Samples were issued from the PYOMUCO cohort study, whose patients (n=96), initially (T0) all Pseudomonas aeruginosa (PA) free for at least 1 year, were separated into two groups (group 1 and group 2) according to their P. aeruginosa status at the end of the follow-up (Tf). 3 Group 1 patients remained negative, whereas group 2 patients became positive. In a first step, carried out in a first set of patients (n=33), bacterial biomarkers associated with P. aeruginosa were screened by 16S-targeted metagenomics; a candidate biomarker (Porphyromonas catoniae) was revealed. In a second step, the distribution of the candidate biomarker according to the two groups of patients was verified by quantitative PCR (qPCR) on a second set of patients (n=52) coming from the same cohort. CF, cystic fibrosis.
Methods
Patient cohort, inclusion criteria and global data
A 3-year prospective multicentre study (PYOMUCO study) was conducted in Western France to assess the time saved in the detection of P. aeruginosa in patients with CF by qPCR compared with culture detection methods. 3 Only patients P. aeruginosa free for at least 1 year were included. The cohort was divided into two groups at the end of the follow-up; group 1 contained patients who remained free of P. aeruginosa while patients from group 2 became positive in culture for P. aeruginosa during the follow-up. For each patient, sputum was collected every 3 months up to the first P. aeruginosa positivity in culture. For this ancillary study, 33 patients with CF of the PYOMUCO cohort were selected as follows: 20 patients from the 36 who became P. aeruginosa-positive at the end of the follow-up, and 13 patients from the 28 who remained P. aeruginosa-negative both in qPCR and culture. 3 The analysis of airway microbiota was performed retrospectively on spontaneous sputum samples collected at two time points: at enrolment (T0) and at the end of the follow-up (Tf) (figure 1). Overall, 75.7% of the samples originated from a paediatric population (<18 years old at sampling time). Clinical and biological data were collected at each sampling time (see online supplementary table S1). The majority of patients were homozygous (n=21, 63.6%) or heterozygous (n=12, 36.4%) for the F508del-CFTR mutation. Four clinical states were defined: baseline clinical state, pulmonary exacerbation, treatment for exacerbation and recovery (BETR categories). Sputum sample quality was verified by cytological examination as previously described. 3 P. aeruginosa was quantified using qPCR and a culture-based method, as previously described. 3 In order to confirm Porphyromonas distribution with respect to P. aeruginosa colonisation throughout the follow-up, P. catoniae absolute quantification was carried out with qPCR on 52 additional patients of the PYOMUCO cohort.
Targeted metagenomics and P. catoniae absolute quantification in sputum samples
Total DNA was extracted using the QIAamp DNA Mini Kit (QIAGEN, Courtaboeuf, France) as previously described. 5 For bacterial diversity assessment, barcoded high-throughput 454 pyrosequencing was performed on the amplified V3 and V4 hypervariable regions of the 16S rRNA gene, and data analysed as previously described (Bioproject PRJNA445243). 5 6 The absolute quantification of P. catoniae was performed using a validated qPCR scheme with the standard curve method. The qPCR was set up on the ABI 7500 Fast Real-Time PCR system (Applied Biosystems, Foster City, California, USA) with SYBR Green, and original primers (sense: 5′-GTGTCTTCGCCCAGCTTACT-3′; antisense: 5′-AGGATGCGGCGGGTTTCA-3′) targeting the rplb gene. PCR reactions were carried out in a total volume of 25 µL with 12.5 µL of Select Mastermix (Applied Biosystems), and a temperature profile of 50°C for 2 min, 95°C for 10 min, followed by 40 cycles at 95°C for 15 s, 60°C for 60 s, 95°C for 30 s and 60°C for 15 s.
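As an illustration of the standard-curve method named above, the sketch below fits a dilution series and inverts the curve for an unknown sample; the dilution points and Cq values are placeholders, not the study's calibration data:

```python
import numpy as np

def fit_standard_curve(log10_copies, cq):
    """Fit Cq = slope * log10(copies) + intercept on a dilution series."""
    slope, intercept = np.polyfit(log10_copies, cq, deg=1)
    efficiency = 10 ** (-1 / slope) - 1  # ~1.0 for a perfect reaction
    return slope, intercept, efficiency

def copies_from_cq(cq_sample, slope, intercept):
    """Invert the standard curve to estimate absolute copy number."""
    return 10 ** ((cq_sample - intercept) / slope)

# Placeholder 10-fold dilution series (10^6 down to 10^2 copies).
log10_copies = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
cq = np.array([16.1, 19.4, 22.8, 26.1, 29.5])
slope, intercept, eff = fit_standard_curve(log10_copies, cq)
unknown = copies_from_cq(24.0, slope, intercept)
print(f"efficiency ~ {eff:.2f}; sample at Cq 24 ~ {unknown:.0f} copies")
```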
Bioinformatics and statistical analyses
Sequences were analysed with the standard UPARSE pipeline according to Edgar's instructions as previously described 5 (see online supplementary file 1).
Statistical comparison between groups was performed with the Mann-Whitney U and Kruskal-Wallis tests, and linear discriminant analysis effect size (LEfSe) was used to elucidate bacterial taxa associated with group 1 or group 2 patients. The false discovery rate was calculated to correct for multiple hypothesis testing. Principal component analysis and clustering analysis were computed on different distance matrices to document the presence of enterotype-like clusters in the airway CF microbiota. 7 These clusters were named pulmotypes.
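A hedged sketch of this comparison-plus-correction step (a Mann-Whitney U test per genus followed by Benjamini-Hochberg FDR correction) might look like this in Python; the abundance values are simulated and purely illustrative:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Simulated relative abundances (%): group 1 (n=13) vs group 2 (n=20).
genera = {
    "Porphyromonas": (rng.normal(6, 2, 13), rng.normal(2, 1, 20)),
    "Streptococcus": (rng.normal(30, 8, 13), rng.normal(29, 8, 20)),
    "Haemophilus":   (rng.normal(10, 5, 13), rng.normal(11, 5, 20)),
}
pvals = [mannwhitneyu(g1, g2, alternative="two-sided").pvalue
         for g1, g2 in genera.values()]
rejected, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
for name, p, q, sig in zip(genera, pvals, p_adj, rejected):
    print(f"{name}: p={p:.3g}, FDR q={q:.3g}, significant={sig}")
```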
Results
The cohort samples clustered into three pulmotypes (p=0.001, Kruskal-Wallis test) driven by differences in the relative abundance of three dominant genera, Streptococcus, Haemophilus and Staphylococcus, as well as other co-occurring genera (see online supplementary figure S1). Overall, 11 predominant genera (relative abundance ≥1%) were found, including Porphyromonas, whose abundance varied between 2.5% and 6.9% depending on the pulmotype. Random forest analysis revealed a significant connection between the relative abundances of Pseudomonas and Porphyromonas (figure 2A). Interestingly, in group 1 patients who remained uninfected by P. aeruginosa during the follow-up (figure 2B), Porphyromonas relative abundance at T0 was significantly higher than in group 2 patients (p<0.001, Mann-Whitney U test) (figure 2C).
As the Porphyromonas reads mainly corresponded to the P. catoniae species, we focused on this bacterial species. In order to check the distribution of P. catoniae according to the patient group, we quantified P. catoniae by qPCR in another set of patients from the PYOMUCO cohort. For group 1 patients, we did not observe any statistical difference in the P. catoniae population between the first (T0) and last sample (Tf) (p=0.41, t-test). Conversely, group 2 patients showed a significant drop in the P. catoniae population (p=0.039, t-test) (figure 2D). Then, we compared patients according to their P. catoniae population before P. aeruginosa colonisation. Group 1 had a significantly higher initial P. catoniae absolute quantity than group 2 (p=0.026). Finally, we tested the predictive power of P. catoniae. To do this, we analysed the susceptibility of patients with CF to acquire P. aeruginosa according to the presence or absence of P. catoniae in the penultimate sputum (Tx) (figure 2D). We observed that 40.7% of patients without P. catoniae developed a P. aeruginosa infection at the following visit (3 months later), while only 24% of patients positive for P. catoniae developed the infection (table 1).
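The ~1.7-fold figure quoted in the Discussion follows directly from these two proportions; a one-line check (using only the percentages reported above, not the raw table counts) is:

```python
# Risk of P. aeruginosa acquisition with vs without P. catoniae at Tx.
risk_without_catoniae = 0.407   # 40.7%, reported above
risk_with_catoniae = 0.240      # 24.0%, reported above
relative_risk = risk_without_catoniae / risk_with_catoniae
print(f"relative risk ~ {relative_risk:.2f}")  # ~1.70-fold
```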
Discussion
Bacteria from the Porphyromonas genus are anaerobic commensals of the core pulmonary microbiota in healthy people 8 and are also described as part of the CF pulmonary core microbiota. 5 6 9-12 Characterisation of the bronchoalveolar lavage microbiota in infants with asymptomatic CF retrieved Porphyromonas as one of the six most abundant taxa. 10 Looking in detail at the taxonomic affiliation of the Porphyromonas reads, the vast majority of them were affiliated to P. catoniae, which is in agreement with other culture-dependent and culture-independent studies. 11 12 Interestingly, in another study, the abundance of Porphyromonas was significantly lower in sputa of patients with chronic obstructive pulmonary disease as compared with healthy subjects. 8 Moreover, a decrease in P. catoniae abundance was observed during exacerbations, 13 and after antibiotic treatment, the abundance of P. catoniae returned to a baseline identical to that of the pre-exacerbation period. 13 These results also echo a previous observation in patients with CF under a CFTR potentiator drug. Indeed, a sustained increase in Porphyromonas relative abundance after initiation of ivacaftor was reported, which was positively correlated with the percentage of predicted FEV1. 6 The present study gave clues on the power of P. catoniae in predicting the risk of P. aeruginosa acquisition. Indeed, P. catoniae colonisation was associated with a lower risk of P. aeruginosa infection. Conversely, patients harbouring no P. catoniae within their airway microbiota showed a 1.7-fold risk of acquiring P. aeruginosa later.
Taken together, these results suggest that P. catoniae may be considered a favourable prognostic biomarker in CF. Further prospective studies on replication cohorts are needed to define the benefit provided by P. catoniae quantification in identifying patients with a higher risk of P. aeruginosa infection. In case of confirmation, we suggest performing molecular quantifications of both P. catoniae and P. aeruginosa as part of the CF diagnosis toolbox in order to enlarge the 'window of opportunity' in the management of P. aeruginosa infection.
To conclude, this study showed the crucial importance of microbiota data in the management of patients with CF. In the personalised and precision medicine era, microbiota-based studies could identify signatures that would be useful in predicting CF progression. The identification of new bacteria of interest opens the possibility of using them as prognostic biomarkers or as companion diagnostic tests. Further cohort studies are needed to validate these findings and to address the question of causality. The influence of the input microbiota on CF progression during early life also has to be investigated. In the not-too-distant future, the study of both biochemical and microbial signatures will constitute new approaches to understanding CF microbiology.
Funding This work was supported by a grant from the French Cystic Fibrosis Associations 'Vaincre la Mucoviscidose' and 'Grégory Lemarchal' (contract no. RC20170501971).
Competing interests None declared.
Patient consent for publication Not required.
Ethics approval Two review boards, the local Comité de Protection des Personnes VI-Ouest and the institutional review board of the Brest University Hospital Centre, approved the protocol.
Provenance and peer review Not commissioned; externally peer reviewed.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | 2019-03-31T13:32:38.326Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "662d72fa4584a9fcb9bf37147e9af62169a13d35",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopenrespres.bmj.com/content/bmjresp/6/1/e000374.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "662d72fa4584a9fcb9bf37147e9af62169a13d35",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240556025 | pes2o/s2orc | v3-fos-license | Are There Ovarian Responsive Indexes That Predict Cumulative Live Birth Rates in Women over 39 Years?
Objective: Ovarian response indexes have been proposed in assisted reproductive technology (ART) in order to optimize live birth rates (LBR), adjusting ovarian stimulation (OS), and minimizing risks. Gonadotropin doses are commonly adjusted according to ovarian reserve parameters, including antral follicle count (AFC), anti-Mullerian hormone (AMH), and basal follicle stimulating hormone (FSH) levels. The retrospective assessment of ovarian responses allows one to identify three primary indexes: (i) follicular output rate (FORT), the ratio of the number of pre-ovulatory follicles obtained at OS completion over AFC; (ii) follicle oocyte index (FOI), the ratio of oocytes retrieved over AFC; (iii) ovarian sensitivity index (OSI), the ratio of oocytes retrieved over the total gonadotropin dose administered. In recent publications, these indexes were reported to predict ART outcome. In the present study, we assessed the ability of these indexes to predict cumulative ART outcome in women ≥39 years. Materials and Methods: Retrospective cohort study. All patients ≥39 years who performed their first ART cycle with an antagonist protocol in our center between 01/2018 and 04/2020 were included. Patients with basal FSH > 20 IU/l, AMH < 0.1 ng/mL and severe male factors (azoospermia with testicular biopsy) were excluded. All patients received both recombinant FSH and human menopausal gonadotropin (hMG). Cumulative live birth rate (cLBR) was the primary outcome. Secondary outcomes included: the number of MII oocytes, cumulative implantation (cIR), and usable blastulation rates. Logistic regressions were performed to assess the predictive values of FORT, FOI, and OSI in cLBR and embryo culture success. For each parameter, the ability of the logistic regression models to predict embryo culture success was quantified by the area under the ROC curve (AUC). Only the significant findings related to FORT, FOI, and OSI were included in the multiple logistic regression model. Linear regression models were performed between cIR, cLB, FORT, FOI, and OSI. Each statistical model was adjusted for age. Concerning the OR for OSI, values were multiplied by 100 due to their very low magnitude. Results: 429 patients met the inclusion criteria. Usable blastocysts were obtained in 298 patients after ART treatment. Age-adjusted OSI was significantly associated with cLBR [OR = 17.58, 95% CI (5.48–56.40); AUC = 0.707, 95% CI (0.651–0.758)] and cIR (beta = 30.22 (SE: 7.88), p < 0.001, R2 = 0.060). Both FOI (OR = 6.33, 95% CI (3.27–12.25); AUC = 0.725, 95% CI (0.675–0.771); R2 = 0.090, p < 0.001) and OSI (OSI*100; OR = 1808.93, 95% CI (159.24–19,335.13); AUC = 0.790, 95% CI (0.747–0.833); R2 = 0.156, p < 0.001) were independently associated with embryo culture success when age-adjusted. OSI showed a better performance in explaining successful embryo culture than FOI (R2 = 0.156 vs. R2 = 0.090, p < 0.001). In the age-adjusted linear regression model, FOI (R2 = 0.159, p < 0.001), OSI (R2 = 0.606, p < 0.001), and FORT (r2 = 0.030, p < 0.001) were predictive of the number of MII oocytes collected. Furthermore, for OSI (r2 = 0.759, p < 0.001) and FOI (r2 = 0.297, p < 0.001), the correlation with the number of metaphase II oocytes collected was significantly higher in the non-linear regression model. Conclusions: Our findings suggest that the best index, among those analyzed, to predict cIR and cLBR, is OSI. Both OSI and FOI predict embryo culture success, but OSI is more accurate.
OSI, FOI, and FORT are significantly related to the number of MII oocytes obtained.
Introduction
The ovarian response to stimulation is one of the most studied parameters in assisted reproductive technologies (ART), in order to optimize outcomes while minimizing risks. Indeed, the magnitude of the response to ovarian stimulation (OS) has a direct impact on the number of oocytes harvested, which is one of the primary factors affecting the ART yield, and in turn pregnancy rates [1].
Classically, the number of oocytes retrieved is taken as the main marker of ovarian responsiveness to gonadotropin (Gn) stimulation. When looking at fresh embryo transfers, the retrieval of 15-18 oocytes was found to be associated with optimal IVF outcome. Secondary outcome measures include the total dose of gonadotrophins administered, duration of stimulation, and peak serum E2 levels [2].
While AMH and AFC provide a good estimate of the number of oocytes harvested, their prediction of live birth rates is limited [4]. These biomarkers represent a "static" snapshot of the individual ovarian reserve; they do not reflect the "dynamic" nature of follicular growth in response to exogenous COS (controlled ovarian stimulation). There is a strong individual variability in the response to stimulation, linked to both extrinsic (gonadotropin dose) and intrinsic factors (FSH receptor polymorphisms) [5] and the individual rhythm of follicular maturation waves [6]. The latter can lead to an unexpected ovarian response to OS. The observation that the total number of oocytes retrieved does not always accurately reflect the ovarian potential has sparked research of other markers of ovarian response. With this in mind, the attention has been focused on qualitative markers of ovarian response. These include follicular output rate (FORT), follicle-to-oocyte index (FOI), and ovarian sensitivity index (OSI), which may better reflect the dynamic nature of follicular growth in response to exogenous gonadotrophins [7].
FORT, defined as the ratio of the number of pre-ovulatory follicles obtained after OS completion over the pool of AFC, was introduced to quantify the ovary's follicular competence. To ease the interpretation of the relationship between follicle responsiveness to COS and IVF outcome, Gallot et al. categorized FORT into low, medium, and high groups. The three FORT groups were arbitrarily chosen according to whether FORT values were under the 33rd percentile (<42%, low FORT group), between the 33rd and the 67th percentile (42-58%), or above the 67th percentile (>58%, high FORT group) of the distribution. FORT was significantly higher in women who achieved a clinical pregnancy when compared with those who did not (54.4 ± 1.3 vs. 47.2 ± 1.2%, respectively, p < 0.001) [8].
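As a minimal sketch, the tercile grouping described above reduces to two cut-offs; the illustrative function below simply encodes the 42%/58% thresholds quoted from Gallot et al., assuming FORT is expressed as a percentage:

```python
def fort_group(fort_percent: float) -> str:
    """Classify a FORT value (in %) using the published tercile cut-offs."""
    if fort_percent < 42:
        return "low"
    if fort_percent <= 58:
        return "medium"
    return "high"

print(fort_group(54.4))  # mean FORT of women achieving clinical pregnancy
```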
Both the absolute number of oocytes retrieved and the total gonadotrophin dose are important measures of ovarian responsiveness; therefore, their ratio, OSI, should be a better representation of ovarian responsiveness. It is significantly related to the main biomarkers of the ovarian reserve (AMH and AFC) and is more closely linked with clinical pregnancy than the number of retrieved oocytes. It has also been proposed to define poor, normal, and high response patterns in COS [9,10].
FOI is the ratio between the total number of oocytes collected at the pick-up and the number of antral follicles available (AFC). This parameter was proposed as an alternative approach to address ovarian resistance to gonadotrophin stimulation. Low FOI values imply that only a fraction of the available antral follicles was exploited during OS, suggesting that there might be therapeutic opportunities to change the fate of these women in a subsequent OS. Naturally, technical aspects related to oocyte retrieval and triggering of final oocyte maturation can influence FOI results.
FORT, FOI, and OSI are considered to be positively related to the outcomes of IVF [8]. Until now, there have been few reports relating these indices to cumulative ART outcomes, in particular the cumulative pregnancy rate. Many of the existing studies analyse at most one or two of these indices and relate them only to the number of oocytes recovered.
This retrospective analysis was carried out with the aim of testing the possible relationship between the FORT, FOI, and OSI indexes and the reproductive outcome as reflected by the cumulative live birth, implantation, and usable blastulation rates.
Subjects
This study retrospectively investigated patients ≥39 years old who underwent their first IVF autologous cycle at Foch's Assisted Reproductive Technology Center in Suresnes (France) between January 2018 and April 2020. We included all causes of infertility. Exclusion criteria were: basal FSH ≥ 20 IU/l, AMH ≤ 0.1 ng/mL, severe male factors (azoospermia with testicular biopsy), and BMI ≥ 35.
For patients who obtained embryos after oocyte retrieval and IVF, only those who underwent blastocyst transfer were selected. Cumulative live birth rate (cLBR) was the primary outcome. Secondary outcomes included: the number of MII oocytes, cumulative implantation (cIR) and usable blastulation rates.
OS and Embryo Transfer Protocol
All patients underwent a GnRH-antagonist protocol and received both recombinant FSH and human menopausal gonadotropin (hMG), which is routinely given in a three-to-one ratio. The daily dose ranged between 225 and 600 IU and was individually adjusted according to age, basal FSH, AMH, and AFC. The GnRH antagonist was always administered from the sixth day of OS. Ovarian response was regularly monitored by transvaginal ultrasound (US) examination and serum estradiol measurement. Ovulation was triggered as soon as three or more pre-ovulatory follicles (≥18 mm in diameter) were observed and E2 levels were >1000 pg/mL. Triggering used either a combination of human chorionic gonadotropin (hCG) and GnRH agonist or GnRH agonist alone, according to the patient's hyperstimulation risk. Patients with an E2 level higher than 3000 pg/mL and >20 follicles on the trigger day received the GnRH agonist trigger. Oocytes were retrieved 35 h after triggering by transvaginal ultrasound-guided aspiration. Fertilization was achieved either by conventional IVF or intracytoplasmic sperm injection (ICSI), depending on semen parameters. The uterine cavity was routinely evaluated by hysteroscopy to exclude uterine pathologies that might compromise pregnancy potential.
For fresh embryo transfer, patients began the luteal phase support treatment: 100 mg/day of acetyl salicylic acid, 200 mg twice daily of cefixime for three days, from the evening of the egg retrieval, and vaginal progesterone 200 mg twice daily, subcutaneous progesterone 25 mg/day, oral estradiol two mg BID, from the day after the pick-up. For the frozen embryo transfer, the patients started taking oral estradiol at a dose of 2 mg twice daily and acetyl salicylic acid 100 mg/day from the first day of last period. Ultrasound examination of the endometrium was performed between the eighth and the tenth day of the cycle; if the endometrium was seven millimetres or thicker, the patient started taking vaginal and subcutaneous progesterone. The embryo transfer was carried out after five days of treatment. The luteal phase support treatment had to continue until the first pregnancy test (serum hCG assay), which was performed ten days after the embryo transfer, and up to twelve weeks of amenorrhea, in case of pregnancy. Clinical pregnancy was defined as the presence of a gestational sac observed at US scan at around seven weeks of amenorrhea.
FORT, FOI and OSI Calculation
Before starting ovarian stimulation, all patients underwent a vaginal ultrasound to determine the AFC, the number of all follicles measuring between three and eight millimetres in diameter. FORT was calculated by dividing the number of pre-ovulatory follicles (POF) obtained at the end of ovarian stimulation by the AFC. The pre-ovulatory follicle count was assessed on the last vaginal ultrasound check before the trigger. We considered follicles with a mean diameter of 17 mm or more as pre-ovulatory.
FOI was calculated as the ratio between the total number of oocytes picked up at the end of OS and the number of antral follicles available at the start of stimulation (AFC).
OSI was calculated as the number of oocytes retrieved divided by the total administered Gn dose. For the odds ratio (OR) of OSI, the value was multiplied by 100 because of its very small magnitude.
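For concreteness, the three indices can be computed as below. This is a minimal Python sketch written for this text; the function and variable names are our own illustrative assumptions, not code from the study.

```python
# Illustrative sketch of the three response indices as defined above.

def fort(preovulatory_follicles: int, afc: int) -> float:
    """Follicular Output Rate: pre-ovulatory follicles (>= 17 mm) / AFC."""
    return preovulatory_follicles / afc

def foi(oocytes_retrieved: int, afc: int) -> float:
    """Follicle-to-Oocyte Index: oocytes retrieved / AFC."""
    return oocytes_retrieved / afc

def osi(oocytes_retrieved: int, total_gonadotropin_iu: float) -> float:
    """Ovarian Sensitivity Index: oocytes retrieved / total Gn dose (IU)."""
    return oocytes_retrieved / total_gonadotropin_iu

# Example cycle: 8 pre-ovulatory follicles, AFC of 12, 9 oocytes, 2250 IU total.
print(fort(8, 12))          # ~0.67
print(foi(9, 12))           # 0.75
print(osi(9, 2250) * 100)   # 0.4  (x100 rescaling, as used for the OR)
```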
Statistical Analysis
Continuous variables are presented as median (25th-75th percentile) and were compared using the Mann-Whitney test, according to the distribution of the continuous variables. Categorical variables are presented as n (percentage) and were compared using the Chi-squared test or Fisher's exact test.
Age-adjusted logistic regression models were fitted for successful embryo culture with FORT, FOI, and OSI. A multivariable logistic regression model was fitted between successful embryo culture and the significant variables (p < 0.05) among FORT, FOI, and OSI. The ability of the logistic regression models (with odds ratio (OR) and 95% confidence interval (CI)) to predict successful embryo culture was quantified by the area under the ROC curve (AUC) with 95% CI, the adjusted coefficient of determination R², and the calculation of sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), and accuracy. Se, Sp, PPV, and NPV were calculated from a confusion matrix of the different models. Linear and age-adjusted linear regression models (with beta: regression coefficient with SE (standard error)) were fitted to investigate the relationship of cIR, cLBR, the number of metaphase II oocytes collected, and the fertilization rate with FORT, FOI, and OSI. Non-linear single-variable regression models were fitted to investigate higher correlations between FORT, FOI, and OSI and the different outcomes. Results reported for each age-adjusted model were the adjusted coefficient of determination, R², and the squared partial correlation coefficient, r², which were used to describe the contribution of FORT, FOI, and OSI for each parameter. Non-linear regression models were built manually, according to the shape of the distribution of each variable, so as to present the highest correlation with the studied parameters, restricting to one-degree polynomial equations with an X transformation (natural logarithm, exponential, square, square root, or reciprocal) and no Y transformation. Differences in correlation between models were assessed using Steiger's Z tests. AUCs of ROC curves were compared by the DeLong test (DeLong et al. 1988, Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach, Biometrics 44:837-845). Statistics were performed using SAS software (version 9.4; SAS Institute, Cary, NC, USA). A p value < 0.05 was considered to indicate statistical significance. The data presented here only involved a retrospective review of the centre's anonymized electronic research database, also used to report the centre's annual IVF outcomes to national registries.
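The study's statistics were computed in SAS 9.4; the following Python sketch merely illustrates the same kind of age-adjusted logistic model and ROC AUC on synthetic data. The DataFrame, column names, and data-generating step are all assumptions made for this illustration.

```python
# Illustrative Python analogue of an age-adjusted logistic regression for
# embryo-culture success, with ORs, their 95% CIs, and the ROC AUC.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.uniform(39, 43, 300),   # hypothetical ages (years)
    "osi": rng.uniform(0.1, 3.0, 300), # hypothetical OSI x100 values
})
# Synthetic outcome: higher OSI -> higher odds of a successful embryo culture.
logit_p = -1.0 + 1.2 * df["osi"] - 0.1 * (df["age"] - 39)
df["culture_ok"] = rng.random(300) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(df[["age", "osi"]])
model = sm.Logit(df["culture_ok"].astype(int), X).fit(disp=0)
print(np.exp(model.params))       # age-adjusted ORs
print(np.exp(model.conf_int()))   # 95% CIs of the ORs
pred = model.predict(X)
print("AUC:", roc_auc_score(df["culture_ok"], pred))
```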
The study was authorized by the local ethical committee Foch IRB: IRB00012437.
Moreover, the relationship between the number of metaphase II oocytes collected and FORT was significant both in the non-linear regression model (r² = 0.051, p < 0.001) and in the linear regression model (r² = 0.030, p < 0.001) (Figure 3), with no significant difference between the two models (p = 0.327).
In the age-adjusted linear regression model, the fertilization rate (N = 426 patients) was correlated only with OSI (R² = 0.027, p = 0.013), but not with FORT (p = 0.876) or FOI (p = 0.253) (Figure 4). No significant differences were observed between the non-linear and linear regression models for the relationship between the fertilization rate and OSI.
Discussion
Our findings suggest that the best index, among those analyzed, for predicting cIR and cLBR is OSI. Both OSI and FOI are significantly related to the number of MII oocytes obtained and predict embryo culture success. Traditionally, IVF success rates have been reported in terms of live births per fresh cycle or embryo transfer. Sunkara et al. [2] demonstrated that there is an initial increase in fresh LBR with the number of oocytes retrieved; LBR either reaches a plateau or may even decline when more than 15-20 oocytes are harvested. On the other hand, when all fresh and frozen embryos are considered, there is a significant positive association with ovarian response. The cumulative live birth rate (cLBR) is defined as the first live birth following the use of all fresh and frozen embryos derived from a single ovarian stimulation cycle, and appears to be a better measure of IVF treatment success. cLBR increases with the number of oocytes retrieved, suggesting that ovarian stimulation may have a minimal or no detrimental effect on oocyte/embryo quality [11].
This could erroneously suggest that stimulation with higher doses of gonadotropins, with the consequently higher number of oocytes obtained, gives greater chances of success. However, in the management of our patients, especially when planning treatment, we must never forget the two main risks they may face during stimulation: the risk of ovarian hyperstimulation syndrome (OHSS) and the thrombotic risk.
Before starting OS, the assessment of AFC and AMH levels allows prediction of the risk of a high ovarian response, defined as more than 15-20 oocytes retrieved. The Gn dose can be planned according to the assessed risk. The use of a GnRH antagonist protocol is beneficial to high-risk women, as it markedly decreases the incidence of OHSS without affecting the clinical pregnancy rate. Moderate and severe forms of hyperstimulation occur in 3-10% of all IVF cycles; the incidence can reach 20% among high-risk women [12]. The incidence of venous thromboembolic events in women undergoing in vitro fertilization is estimated at 0.08-0.11% of all treatment cycles, and the incidence of arterial thrombosis is significantly lower [13]. The development of these events is mainly attributed to OHSS. However, the number of retrieved oocytes may also be affected by a series of intrinsic factors, such as the polymorphic variability of FSH receptors and the individual rhythm and extent of the follicular maturation waves [14]. When, in the final treatment evaluation, we only consider the number of oocytes retrieved as an outcome of ovarian response, we do not consider the individual sensitivity of the ovary to the pharmacological stimulus. The number of oocytes retrieved at pick-up does not always represent a real expression of the potential of the ovary to respond to an exogenous stimulus. A very low dose of FSH is often used for COS in women with an expectedly high response, which may result in a low oocyte yield and therefore an erroneous classification of the patient as a poor responder. It is therefore essential to evaluate the dose of gonadotropins necessary to develop a certain number of follicles, and to recover as many oocytes as possible. The evidence of a significant negative effect of smoking on female fertility, and also on the clinical outcomes of ART, is overwhelming. In particular, there is evidence of a decreased clinical pregnancy rate among smokers, in addition to the strong implication of a negative effect on live birth rates, miscarriage rates, ectopic pregnancy rates, and fertilization rates [15]. Different studies have variously reported an increased gonadotropin requirement for ovarian stimulation, lower peak estradiol levels, fewer oocytes retrieved, and higher numbers of cancelled cycles [16]. Therefore, smoking could influence the OSI calculation more than all the other parameters. We evaluated smoking habit in our study population, but we did not find statistically significant differences with respect to whether an embryo culture was obtained.
In the last few years, scientific research in reproductive medicine has focused above all on identifying variables capable of predicting the IVF outcome. The objective is to further individualize ovarian stimulation protocols for optimizing results and reducing costs and complications. Identifying prognostic indices is very complex, as many variables may affect ART success, including the ovarian responsiveness, the number of oocytes available, and their competence. This is the first study that takes into consideration all three indices, FORT, FOI, and OSI, and correlates them with multiple aspects of ART outcome.
In our study, the FORT index did not prove to be statistically significant in predicting cLBR, cIR, and fertilization rate; it was only statistically significant in predicting the number of metaphase II oocytes collected. This is probably due to the index calculation, which is based on the measurement of the AFC and on the monitoring method used (ultrasound scans during stimulation), and which has some limitations: (a) it is based on the AFC, which is an operator-dependent examination; (b) the AFC can be affected by the presence of space-occupying structures within the ovary, such as a corpus luteum, a dominant follicle or, even worse, an ovarian cyst; in these conditions, the measurement is highly inaccurate, and this can lead to a loss of predictive value of the index [17]; (c) oocyte recovery also depends on the choice of trigger timing, which depends on the follicular diameter and, above all, on the experience/organization of the different ART centres.
The FOI index was statistically correlated only with the number of MII oocytes recovered and the success of embryo culture. This is probably partially attributable to some of the same limitations discussed for the FORT index, as it is also based on the AFC. However, we can affirm that a good correspondence between the follicular count and the number of oocytes retrieved at pick-up is certainly a sign of a good response to ovarian stimulation.
FOI is considered to be an indirect measure of the ovarian response to gonadotropins. A good ovarian response in quantitative terms (number of oocytes retrieved and percentage of MII oocytes) certainly correlates positively with the achievement of a successful embryo culture.
In our study, OSI is the best index: it is statistically predictive of all the outcome parameters analyzed (cLBR, cIR, success of embryo culture, and number of MII oocytes).
The OSI links the number of retrieved oocytes to the degree of hormonal stimulation, expressing how many units of exogenous gonadotropins are needed to obtain each oocyte [16]. This suggests that patients with a poorly responsive ovary, who need a high gonadotropin dose, are ab initio less likely to achieve pregnancy, as they may be pharmacologically forced to produce more oocytes, but of poorer quality. This emphasizes the independent information given by the total dose of FSH administered.
OSI can only be calculated after a first stimulation cycle, but it seems a promising marker capable of expressing the sensitivity of the ovary, without being conditioned by the stimulation protocol.
Huber et al. also studied a large population undergoing IVF and demonstrated that OSI has a normal distribution in the study population [18]. After applying the standard statistical procedures for normal populations, they defined poor, normal, or high responding patients based on the OSI value. The groups showed significant differences in all major outcomes. Furthermore, in accordance with our analysis, it emerged that OSI is a stronger predictor of live birth rates than the number of oocytes retrieved at pick-up [18][19][20].
Li and collaborators [19] evaluated the inter-cycle variability of the OSI value and the number of oocytes and confirmed that OSI has a higher intra-class correlation coefficient (ICC) between two stimulation cycles than the number of oocytes.
OSI is a patient-related marker: the sensitivity of the ovary to exogenous stimuli is an intrinsic characteristic that is little subject to change. It is therefore a valid marker of ovarian sensitivity that could possibly be introduced into the algorithms used for calculating the dose of gonadotropins, obviously in patients who have already undergone a first stimulation cycle. It should be noted, however, that the main disadvantage of OSI lies in its being operator-dependent.
Limitations
The main limitations of our study are related to its retrospective and monocentric character and to its sample size. It was very difficult to recover all the data necessary for the calculation of the FORT, FOI, and OSI indices; we had to exclude several patients for whom we were unable to recover all data. For further validation of the results, it would be desirable to design a prospective study with a larger study population and a longer observation period. We also had to exclude from our analysis some patients who had achieved pregnancy but had not yet given birth. Furthermore, a multicentric study would allow us to evaluate whether the use of different stimulation protocols or lower gonadotropin doses can lead to differences in results; in our centre, the minimum gonadotropin dose is 225 IU and the maximum dose is 600 IU. Finally, we do not have any information about the ploidy of the embryos, as it was not possible to perform PGT-A.
Conclusions
Our findings suggest that the best index, among those analyzed, for predicting cIR and cLBR is OSI. Both OSI and FOI predict embryo culture success, but OSI is more accurate. OSI, FOI, and FORT are significantly related to the number of MII oocytes obtained. Only OSI is correlated with the fertilization rate.
"year": 2022,
"sha1": "4d79e90b6ec1a5ea8eb65dc8ca1256a929eeaf73",
"oa_license": null,
"oa_url": "http://www.fertstert.org/article/S0015028221006804/pdf",
"oa_status": "BRONZE",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f512a33a85d570f80c11060478af7e4b736f5296",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Funnel Theorems for Spreading on Networks
We derive novel analytic tools for the discrete Bass model, which describes the diffusion of new products on networks. We prove that the probability that any two nodes adopt by time t is greater than or equal to the product of the probabilities that each of the two nodes adopts by time t. We introduce the notion of an "influential node", and use it to determine whether the above inequality is strict or an equality. We then use the above inequality to prove the "funnel inequality", which relates the adoption probability of a node to the product of its adoption probabilities on two sub-networks. We introduce the notion of a "funnel node", and use it to determine whether the funnel inequality is strict or an equality. The above analytic tools can be extended to epidemiological models on networks. We then use the funnel theorems to derive a new inequality for diffusion on circles and a new explicit expression for the adoption probabilities of nodes on the two-sided line, to prove that the adoption level on one-sided lines is strictly slower than on anisotropic two-sided lines, and to show that the adoption level on multi-dimensional Cartesian networks is bounded from below by that on one-dimensional networks.
Introduction. Diffusion of new products is a classical problem in marketing. The diffusion starts when the product is first introduced into the market, and progresses as more and more people adopt the product. The first mathematical model of diffusion of new products was introduced by Bass [1]. In this model, individuals adopt a new product because of external influences by mass media and internal influences (peer effect, word-of-mouth) by individuals who have already adopted the product. This seminal study inspired a huge body of theoretical and empirical research [15].
The Bass model, as well as most of this follow-up research, was formulated using compartmental models, which are typically given by deterministic ordinary differential equations. Such models implicitly assume that all individuals within the population are equally likely to influence each other, i.e., that the underlying social network is a homogeneous complete graph. In more recent years, research on diffusion of new products gradually shifted to discrete Bass models on networks, in which the adoption decision of each individual is stochastic. The discrete Bass model allows for heterogeneity among individuals, and for implementing a social network structure, whereby individuals are only influenced by adopters who are also their peers.
Initially, discrete Bass models on networks were studied numerically, using agent-based simulations (see, e.g., [10,11,12]). To analytically compute the adoption probabilities of nodes in discrete Bass models on networks, one has to start from the master (Kolmogorov) equations for the Bass model, which are 2^M − 1 coupled linear ODEs, where M is the number of nodes (see, e.g., [4, Section 3.1]). Therefore, in order to explicitly solve these equations, one needs to reduce the number of ODEs significantly.
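For a tiny network, these master equations can be written down and integrated directly. The sketch below is our own illustration, not code from the literature: it integrates the Kolmogorov equations over all 2^M adoption states for M = 3 using scipy; the network and parameter values are arbitrary assumptions.

```python
# Direct integration of the master equations of the discrete Bass model (M = 3).
import itertools
import numpy as np
from scipy.integrate import solve_ivp

p = np.array([0.1, 0.1, 0.1])                             # external rates p_j
q = np.array([[0, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0]])   # q[k][j]: edge k -> j
M = len(p)
states = [frozenset(s) for r in range(M + 1)
          for s in itertools.combinations(range(M), r)]
index = {s: i for i, s in enumerate(states)}

def rate(j, adopters):
    # Adoption rate of nonadopter j, given the current set of adopters.
    return p[j] + sum(q[k][j] for k in adopters)

def rhs(t, P):
    # Kolmogorov forward equations over the 2^M adoption states.
    dP = np.zeros_like(P)
    for s, i in index.items():
        for j in range(M):
            if j in s:   # inflow from the state in which j had not yet adopted
                dP[i] += rate(j, s - {j}) * P[index[s - {j}]]
            else:        # outflow: j adopts and the chain leaves state s
                dP[i] -= rate(j, s) * P[i]
    return dP

P0 = np.zeros(len(states))
P0[index[frozenset()]] = 1.0          # all nodes are nonadopters at t = 0
sol = solve_ivp(rhs, (0.0, 5.0), P0, t_eval=[5.0])
f1 = sum(sol.y[index[s], -1] for s in states if 1 in s)
print("Adoption probability of node 1 at t = 5:", f1)
```

Even at M = 3 the state space already has 2^3 = 8 states, which makes clear why explicit solutions require reducing the number of equations.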
At present, there are two analytic techniques for solving the master equations explicitly, without making any "mean-field" type approximation. The first is based on utilizing symmetries of the master equations, in order to reduce the number of equations. This approach was applied to homogeneous circles [3] and to homogeneous and inhomogeneous complete networks [5]. The second approach is based on the indifference principle [8]. This analytic tool simplifies the explicit calculation of adoption probabilities, by replacing the original network with a simpler one. The indifference principle has been used to compute the adoption probabilities of nodes on bounded and unbounded lines, on circles, and on percolated lines [6,8].
In this paper, we introduce a third technique - the "funnel theorems" (Section 4). Choose some node j, and divide the remaining nodes into two subsets of nodes, A and B (see Figure 1). The funnel theorems provide the relation between the adoption probability of node j in the original network and the product of the adoption probabilities of j on the two sub-networks {j, A} and {j, B}, which in many cases are easier to compute. The funnel relation is an equality if j is a vertex cut (see Figure 1A), or more generally if j is a funnel node (see Figure 1B), and is otherwise a strict inequality. To prove the funnel theorems, we first prove that the probability that any pair of nodes are both adopters is greater than or equal to the product of the adoption probabilities of the individual nodes (Section 3). In other words, the correlation between the adoptions of any pair of nodes at the same time is nonnegative. This inequality is not only of interest by itself, but is also an important analytic tool. For example, we recently used it in [7] to derive optimal lower and upper bounds for the adoption level on any network.
To illustrate the power of the funnel theorems, we apply them to circular and Cartesian networks. Thus, 1. We derive a novel inequality for diffusion on circles (Theorem 7). 2. We derive a new explicit expression for the adoption probability of nodes on isotropic and anisotropic two-sided lines (Theorem 8). 3. We prove that the adoption level on the one-sided line is strictly slower than on two-sided isotropic and anisotropic lines (Theorem 9). This improves on [8] by extending the proof to the anisotropic case. In addition, the new proof is considerably simpler. 4. We prove that the adoption level on infinite multi-dimensional one-sided and two-sided Cartesian networks is strictly higher than on the infinite line (Theorem 10).
Finally, we note that our results are also relevant to spreading of epidemics on networks. We discuss this in Section 8.1, and compare our results with those in the epidemiological literature.

The discrete Bass model. Consider a network with a set M := {1, . . . , M} of M nodes, and let X_j(t) denote the state of node j at time t, so that X_j(t) = 1 if j is an adopter and X_j(t) = 0 otherwise. Since all individuals are nonadopters at t = 0, X_j(0) = 0 for all j ∈ M. The adoption decision is irreversible, i.e., once a node adopts the product, it remains an adopter for all later times. The adoption of nodes is stochastic, as follows. Node j experiences external influences by mass media to adopt at the rate of p_j. The underlying social network is represented by a directed graph with positive weights, such that the weight of the edge from node k to node j is denoted by q_{k,j} > 0, and q_{k,j} = 0 if there is no edge from k to j. Thus, if k already adopted the product and q_{k,j} > 0, its rate of internal influence on j to adopt is q_{k,j}. Since a node does not influence itself to adopt, q_{j,j} = 0. Finally, internal and external influence rates are additive. Therefore, the adoption time τ_j of j is a random variable, which is exponentially distributed at the (time-dependent) rate of

λ_j(t) = p_j + Σ_{k≠j} q_{k,j} X_k(t). (1)

Thus, time is continuous, and the adoption rate of j increases whenever one of its peers becomes an adopter. The maximal rate of internal influences that can be exerted on j (which is when all its neighbors/peers are adopters) is denoted by

q_j := Σ_{k≠j} q_{k,j}. (2)

The underlying network of the discrete Bass model (1) is denoted by N = N(M, {p_j}, {q_{k,j}}). Our goal is to explicitly compute the adoption probabilities of nodes, f_j(t) := P(X_j(t) = 1), and to use them to compute the expected fraction of adopters (adoption level)

f(t) := (1/M) E[Σ_{j=1}^M X_j(t)],

where Σ_{j=1}^M X_j(t) is the number of adopters at time t. In most cases, it is easier to compute the corresponding nonadoption probabilities [S_j](t) := P(X_j(t) = 0) = 1 − f_j(t).
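Because the rates in (1) are constant between adoption events, the model can be simulated exactly with a Gillespie-type scheme. The sketch below is our own minimal illustration of such a simulation; the function name and example parameters are assumptions, not material from the paper.

```python
# Gillespie-type simulation of the discrete Bass model (1); p[j] is the
# external rate of node j and q[k][j] the internal rate along edge k -> j.
import random

def simulate_bass(p, q, t_max, seed=0):
    """Sample one realization; returns {node: adoption time}."""
    rng = random.Random(seed)
    M = len(p)
    adopted = [False] * M
    t, adoption_times = 0.0, {}
    while t < t_max and not all(adopted):
        # Current adoption rate of each nonadopter: p_j + sum over adopted peers.
        rates = [0.0 if adopted[j] else
                 p[j] + sum(q[k][j] for k in range(M) if adopted[k])
                 for j in range(M)]
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)          # waiting time to the next adoption
        if t >= t_max:
            break
        r, acc = rng.random() * total, 0.0   # choose which node adopts
        for j in range(M):
            acc += rates[j]
            if r <= acc:
                adopted[j] = True
                adoption_times[j] = t
                break
    return adoption_times

# Example: one-sided 3-node line 1 -> 2 -> 3 with p = 0.1 and q = 0.5.
q = [[0, 0.5, 0], [0, 0, 0.5], [0, 0, 0]]
print(simulate_bass([0.1, 0.1, 0.1], q, t_max=20.0))
```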
Dominance and indifference principles.
The dominance principle is useful for comparing the adoption probabilities of nodes in two networks. Let us begin with

Definition 1 (network dominance). Consider the discrete Bass model (1) on networks N^A and N^B, both with M nodes. We say that "N^A is dominated by N^B" and denote N^A ⪯ N^B, if p_j^A ≤ p_j^B and q_{k,j}^A ≤ q_{k,j}^B for all j ∈ M and for all k ≠ j.
We say that "N A is strongly dominated by N B " and denote N A ≺ N B , if at least one of these M 2 inequalities is strict.
Theorem 1 (dominance principle for nodes [8]). Consider the discrete Bass model (1) on networks N^A and N^B, both with M nodes. If N^A ⪯ N^B, then the adoption probability of any node in network N^A is lower than or equal to its adoption probability in network N^B, i.e., f_j^A(t) ≤ f_j^B(t) for all j ∈ M and t ≥ 0.

Let Ω ⊂ M denote a subset of the nodes, and let [S_Ω](t) := P(X_j(t) = 0 for all j ∈ Ω) denote the probability that all nodes in Ω did not adopt by time t. The indifference principle simplifies the explicit calculation of [S_Ω], by replacing the original network with a network with a modified edge structure, such that the value of [S_Ω] remains unchanged, but its explicit calculation is simpler. To do that, we need to be able to distinguish between edges that influence the nonadoption probability [S_Ω](t) and those that do not.

Definition 2 (influential and non-influential edges to Ω [8]). Consider a directed network with M nodes (if the network is undirected, replace each undirected edge by two directed edges). Let Ω ⊂ M be a subset of the nodes, and let Ω^c := M \ Ω be its complement.
A directed edge k → m is called "influential to Ω", if the following two conditions hold: 1. k ∈ Ω c , and 2. either m ∈ Ω, or there is a finite sequence of directed edges from node m to some node u ∈ Ω, which does not go through node k.
A directed edge k → m is called "non-influential to Ω" if one of the following three conditions holds: 1. k ∈ Ω, or 2. k ∈ Ω c , m ∈ Ω c , and there is no finite sequence of directed edges from node m to Ω, or 3. k ∈ Ω c , m ∈ Ω c , and all finite sequences of directed edges from node m to Ω go through the node k.
Thus, any edge is either "influential to Ω" or "non-influential to Ω".
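Conditions 1-2 of Definition 2 amount to a reachability test that avoids node k. A minimal sketch of such a test follows; it is our own illustration, with an assumed edge-set representation of the network.

```python
# Test whether the directed edge k -> m is influential to Omega (Definition 2).
from collections import deque

def is_influential(k, m, edges, omega):
    if k in omega:
        return False
    if m in omega:
        return True
    # Is there a directed path from m to Omega that does not pass through k?
    succ = {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)
    seen, queue = {m}, deque([m])
    while queue:
        node = queue.popleft()
        for nxt in succ.get(node, []):
            if nxt == k or nxt in seen:
                continue
            if nxt in omega:
                return True
            seen.add(nxt)
            queue.append(nxt)
    return False

# Edge 1 -> 2 on the path 1 -> 2 -> 3 is influential to Omega = {3}.
print(is_influential(1, 2, {(1, 2), (2, 3)}, {3}))  # True
```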
Theorem 2 (Indifference principle [8]). Consider the discrete Bass model (1), and let Ω ⊂ M be a subset of the nodes. Then [S_Ω](t) remains unchanged if we remove or add edges which are non-influential to Ω.
Strong dominance principle for nodes.
In Theorem 1 we saw that if N^A ⪯ N^B, then f_j^A(t) ≤ f_j^B(t) for any j ∈ M. In [8] it was also shown that if N^A ≺ N^B, then the adoption level in N^A is strictly lower than in N^B, since f_j^A(t) < f_j^B(t) for at least one node. In order to fully characterize the nodes for which this inequality is strict, we introduce the following definition.

Definition 3 (Influential node). Let Ω ⊂ M. We say that "node m is influential to Ω" if m ∈ Ω, or if m ∈ Ω^c and there is a finite sequence of directed edges from m to Ω.
Thus, a node m ∈ Ω^c is influential to Ω if and only if there is an edge emanating from m which is influential to Ω.
Lemma 1 (strong dominance principle for nodes). Consider the discrete Bass model (1) on two networks N^A and N^B, both with M nodes, such that N^A ≺ N^B. Then the adoption probability of node j in network N^A is strictly lower than its adoption probability in network N^B, i.e., f_j^A(t) < f_j^B(t) for t > 0, if and only if at least one of the following two conditions holds: 1. p_m^A < p_m^B and node m is influential to Ω = {j}; 2. q_{k,m}^A < q_{k,m}^B and the edge k → m is influential to Ω = {j}.

Let f_{i,j}(t) denote the probability that both i and j are adopters at time t. If i adopts, it may influence node j to adopt as well. Therefore, if we know that i is an adopter, this increases the likelihood that j is also an adopter, i.e., f_{i,j}(t) ≥ f_i(t) f_j(t). To prove this inequality, it is convenient to reformulate it using the nonadoption probability [S_{i,j}](t) := P(X_i(t) = 0, X_j(t) = 0) that both i and j are nonadopters.
Indeed, we have the following result:

Theorem 3. Consider the discrete Bass model (1). Then for any two nodes i, j ∈ M,

[S_{i,j}](t) ≥ [S_i](t) [S_j](t), t ≥ 0. (3)

Proof. By (1), the stochastic adoption of j ∈ M in the time interval (t, t + Δt) as Δt → 0 is given by the conditional probability

P(X_j(t + Δt) = 1 | X(t)) = (p_j + Σ_{k≠j} q_{k,j} X_k(t)) Δt + o(Δt) if X_j(t) = 0, (4)

where X(t) := (X_1(t), . . . , X_M(t)) is the state of the network at time t. Let Δt > 0, t_n := nΔt, and ω^n := (ω_1^n, . . . , ω_M^n) ∈ [0, 1]^M. We define the time-discrete realization of (4) as follows: X_j(t_n) = 1 if X_j(t_{n−1}) = 1 or ω_j^n ≤ (p_j + Σ_{k≠j} q_{k,j} X_k(t_{n−1})) Δt, and X_j(t_n) = 0 otherwise. (5)

Note that as N → ∞, Δt → 0 and t_N ≡ t. Therefore, to prove (3) it is sufficient to show that for any 0 < Δt ≪ 1,

P(X_i(t_N) = 0, X_j(t_N) = 0) ≥ P(X_i(t_N) = 0) P(X_j(t_N) = 0). (6)

To do that, we first note that these probabilities can be written as integrals over {ω_m^n} ∈ [0, 1]^{M×N} of the functions G_i G_j, G_i, and G_j, respectively, where G_i := 1 − X_i(t_N) and G_j := 1 − X_j(t_N) are viewed as functions of {ω_m^n}. We claim that the functions G_i and G_j are both non-decreasing in [0, 1]^{M×N} with respect to any ω_m^n, where m ∈ M and n = 1, . . . , N. Therefore, inequality (6) follows from Chebyshev's multidimensional integral inequality (Lemma 3).
To prove this claim, note that if we increase ω_{m_0}^{n_0} by a factor of β > 1, this is completely equivalent to decreasing p_{m_0}(t_{n_0}) and {q_{k,m_0}(t_{n_0})}_{k≠m_0} by a factor of β, since in the definition of X_j, ω_m^n only appears in the condition ω_m^n ≤ (p_m(t_n) + Σ_{k≠m} q_{k,m}(t_n) X_k(t_{n−1})) Δt. Therefore, from the proof of the dominance principle, see [8, eq. (3.4)], for any n we have that either X_i(t_n) and X_j(t_n) decrease, or they remain unchanged. Hence, either G_i and G_j increase or they remain unchanged. With this, the proof of inequality (3) concludes.
When does [S_{i,j}] = [S_i][S_j]?
The condition for inequality (3) to be strict makes use of the notion of an influential node (Definition 3).
Theorem 4.
Consider the discrete Bass model (1). 1. If there exists a node in M which is influential to i and to j, then

[S_{i,j}](t) > [S_i](t) [S_j](t), t > 0. (7)

2. If, however, there is no node which is influential to both i and j, then

[S_{i,j}](t) = [S_i](t) [S_j](t), t ≥ 0. (8)

Proof. By Chebyshev's integral inequality (Lemma 3), inequality (6) is an equality if and only if G_i and G_j depend on different coordinates. The function G_i depends on ω_k^n if and only if node k is influential to node i. Therefore, inequality (6) is an equality if and only if there is no node which is influential to both i and j. This proves (8). To prove (7), however, we also need to show that inequality (6) remains strict as Δt → 0.
To see that, assume that node m ∈ M is influential to i and to j. Denote by τ_m the random variable given by the adoption time of m. Let H(x) = P(τ_m ≤ x) denote the CDF of τ_m, and let h = H' denote its density. Then by the law of iterated expectations,

[S_{i,j}](t) = ∫_0^∞ P(X_i(t) = 0, X_j(t) = 0 | τ_m = x) h(x) dx. (9)

For a given τ_m ≥ 0, the adoption of any node in M \ {m} in the original network is identical to its adoption in the network with nodes M \ {m} and with weights p̃_j := p_j + q_{m,j} 1_{t≥τ_m} and q̃_{k,j} := q_{k,j} for k, j ∈ M \ {m}. Therefore, by inequality (3),

P(X_i(t) = 0, X_j(t) = 0 | τ_m) ≥ P(X_i(t) = 0 | τ_m) P(X_j(t) = 0 | τ_m). (10)

Combining (9) and (10), we have

[S_{i,j}](t) ≥ ∫_0^∞ P(X_i(t) = 0 | τ_m = x) P(X_j(t) = 0 | τ_m = x) h(x) dx.

Since node m ∈ M is influential to i and to j, and p̃_j = p_j + q_{m,j} 1_{t≥τ_m} is monotonically decreasing in τ_m, then by the dominance principle, the two conditional probabilities on the right-hand side are strictly monotonically increasing in τ_m for 0 ≤ τ_m ≤ t and are monotonically increasing for 0 ≤ τ_m < ∞. Therefore, by Chebyshev's integral inequality with weights (Lemma 2), [S_{i,j}](t) > [S_i](t) [S_j](t), as needed.
Since any node is influential to itself, we have the following corollary.

Corollary 1. Consider the discrete Bass model (1). If node i is influential to node j, then [S_{i,j}](t) > [S_i](t) [S_j](t) for t > 0.

Funnel theorems. Let {A, B, {j}} be a partition of M (see Figure 1). The adoption of node j in network N may be the result of three distinct direct influences: 1. Internal influences on j by edges that arrive from A. 2. Internal influences on j by edges that arrive from B.
External influences on j.
In order to identify the specific influence that led to the adoption of j, we define networks on which j can only adopt due to one of these three influences:

Definition 5. Let {A, B, {j}} be a partition of M. We define four different networks with respect to this partition: 1. Network N is the original network. 2. Network N^A is obtained from N by removing all influences on node j, except for directed edges from A to j. Thus, we cancel the external influences on node j by setting p_j = 0, and we remove all direct links from B to j by setting q_{m,j} = 0 for all m ∈ B. 3. Network N^B is defined similarly. 4. Network N^{p_j} is obtained from N by removing all internal influences on node j, but retaining the external influence p_j. Thus, we remove all direct links to j by setting q_{m,j} = 0 for all m ∈ A ∪ B.

We denote the states of node j in these four networks by X_j, X_j^A, X_j^B, and X_j^{p_j}, respectively. The funnel theorems below compare the nonadoption probability of node j in the original network with the product of the three nonadoption probabilities of j due to each of the three distinct influences.
Theorem 5 (funnel inequality). Consider the discrete Bass model (1) on network N, and let {A, B, {j}} be a partition of M. Then

P(X_j(t) = 0) ≥ P(X_j^A(t) = 0) P(X_j^B(t) = 0) P(X_j^{p_j}(t) = 0), (11)

where P(X_j^{p_j}(t) = 0) = e^{−p_j t}.

Proof. By the indifference principle, all edges that emanate from j are non-influential to j. Since this holds for all the four probabilities in (11), in what follows, we can assume that no edges emanate from j.
In principle, we need to compute the four probabilities in (11) using the four different networks from Definition 5. We can simplify the analysis, however, by considering only two networks (which are also "quite similar"), as follows. Given network N, we define network N^+ by "splitting" node j into three nodes j_A, j_B, and j_p, such that: 1. Node j_A inherits from j all the (one-sided) edges from A to j, i.e., q_{k,j_A}^+ = q_{k,j} for all k ∈ A, and has p_{j_A}^+ = 0. 2. Node j_B is defined similarly. 3. Node j_p inherits from j the external influences, p_{j_p}^+ = p_j, but has no incoming edges from A and B, i.e., q_{k,j_p}^+ ≡ 0 for all k ∈ A ∪ B.
4.
Since no edges emanate from j in network N, no edges emanate from nodes j_A, j_B, and j_p in network N^+. 5. The weights of all nodes but j, and of the edges between these nodes, are the same in both networks.

Let X_k^+(t) denote the state of node k in network N^+. By construction, the nonadoption probabilities of j_A, j_B, and j_p in N^+ equal those of j in N^A, N^B, and N^{p_j}, respectively; these identities are relations (12) and (13). In Appendix C we will prove that

P(X_{j_A}^+(t) = 0, X_{j_B}^+(t) = 0, X_{j_p}^+(t) = 0) = P(X_j(t) = 0), (14)

where X_k(t) denotes the state of node k in network N. Since j_p is an isolated node in N^+, its adoption is independent of that of j_A and j_B, and so

P(X_{j_A}^+ = 0, X_{j_B}^+ = 0, X_{j_p}^+ = 0) = P(X_{j_A}^+ = 0, X_{j_B}^+ = 0) P(X_{j_p}^+ = 0). (15)

By Lemma 3,

P(X_{j_A}^+ = 0, X_{j_B}^+ = 0) ≥ P(X_{j_A}^+ = 0) P(X_{j_B}^+ = 0). (16)

Combining relations (14), (15), and (16) gives (17). Substituting (13) in (17) proves (11).
In order to determine the conditions under which the funnel inequality becomes an equality, we introduce the notion of a funnel node: Definition 6 (funnel node). Let {A, B, j} be a partition of M. Node j is called a "funnel node of A and B in network N ", if there is no node in A ∪ B which is influential to j both in N A and in N B .
Recall also the following definition: Definition 7 (vertex cut (vertex separator)). Let {A, B, j} be a partition of M. Node j is called a "vertex cut" or "vertex separator" between A and B, if removing node j from the network makes the two sets A and B disconnected (see Figure 1A).
Any node which is a vertex cut is also a funnel node:

Lemma 5. If node j is a vertex cut between A and B, then j is a funnel node.

Proof. Let m ∈ A be an influential node to j. Then m cannot be an influential node to j in N^B, since in N^B we removed all edges from A to j, and so there is no sequence of edges (influential or not) from m to B.
The converse statement, however, is not true, i.e., there are networks in which j is a funnel node and A and B are directly connected. For example, this is the case if all edges between A and B are non-influential to j. Moreover, even if nodes a ∈ A and b ∈ B are connected by an influential edge a → b, j may still be a funnel node, provided that there is no influential edge that emanates from node a in network N^A (e.g. Figure 1B).

Theorem 6. • If j is a funnel node of A and B, then (11) becomes the funnel equality

P(X_j(t) = 0) = P(X_j^A(t) = 0) P(X_j^B(t) = 0) e^{−p_j t}. (18)

• If, however, j is not a funnel node of A and B, then inequality (11) is strict, i.e., P(X_j(t) = 0) > P(X_j^A(t) = 0) P(X_j^B(t) = 0) e^{−p_j t} for t > 0.

Proof. The inequality sign in the derivation of the funnel inequality (11) only comes from the use of Lemma 3 in obtaining (16). By Lemma 3, inequality (16) is a strict inequality if and only if there exists a node m in network N^+ which is influential to j_A and to j_B. Since no edges emanate from j_A, j_B, and j_p, then m ∈ A ∪ B.
Thus, the funnel inequality is strict if and only if there exists a node m ∈ A ∪ B in network N^+ which is influential to j_A and to j_B. This, however, is the case if and only if there exists a node m ∈ A ∪ B which is influential to j in network N^A and in N^B, i.e., if j is not a funnel node of A and B.
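As a sanity check of the funnel equality, one can estimate both sides by Monte Carlo on a small example. The sketch below is our own illustration, not code from the paper: it treats a three-node path whose middle node is a vertex cut between A and B, so by Lemma 5 and Theorem 6 the two printed values should agree up to sampling and discretization error.

```python
# Monte Carlo check of the funnel equality on the path A - j - B (j = node 1).
import math
import random

def nonadoption_prob(p, q, target, t, dt=0.02, runs=5000, seed=1):
    """Estimate P(X_target(t) = 0) via the time-discretized dynamics."""
    rng, M, surv = random.Random(seed), len(p), 0
    for _ in range(runs):
        x = [False] * M
        for _ in range(int(t / dt)):
            rates = [0.0 if x[j] else
                     p[j] + sum(q[k][j] for k in range(M) if x[k])
                     for j in range(M)]
            x = [x[j] or rng.random() < rates[j] * dt for j in range(M)]
        surv += not x[target]
    return surv / runs

p, t = [0.3, 0.3, 0.3], 2.0
q_full = [[0, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0]]  # undirected path
q_A = [[0, 0.5, 0], [0, 0, 0], [0, 0, 0]]           # network N^A: only A -> j
q_B = [[0, 0, 0], [0, 0, 0], [0, 0.5, 0]]           # network N^B: only B -> j
p_no_ext = [0.3, 0.0, 0.3]                          # p_j = 0 in N^A and N^B
lhs = nonadoption_prob(p, q_full, 1, t)
rhs = (nonadoption_prob(p_no_ext, q_A, 1, t)
       * nonadoption_prob(p_no_ext, q_B, 1, t)
       * math.exp(-p[1] * t))                        # factor e^{-p_j t}
print(lhs, rhs)  # should agree up to Monte Carlo and discretization error
```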
The expressions P(X_j^A(t) = 0) and P(X_j^B(t) = 0) in Theorem 6 are usually unknown, as they do not take into account the effect of p_j on j. Therefore, let us introduce two additional networks: On network N^{A,p_j}, j can adopt due to the combined influences of direct edges from A and of p_j, and on network N^{B,p_j}, j can adopt due to the combined influences of direct edges from B and of p_j.

Definition 8 (Networks N^{A,p_j} and N^{B,p_j}). Consider the discrete Bass model (1) with network N, and let {A, B, {j}} be a partition of M. We define two additional networks with respect to this partition: 1. N^{A,p_j} is obtained from N by removing all direct links from B to j, i.e., by setting q_{m,j} = 0 for all m ∈ B. Thus, we retain the direct edges from A to j and the external influence p_j. 2. N^{B,p_j} is defined similarly.

We can use the funnel equality to compute the combined influences from A and p_j:

Lemma 6. P(X_j^{A,p_j}(t) = 0) = P(X_j^A(t) = 0) e^{−p_j t} (19a), and similarly P(X_j^{B,p_j}(t) = 0) = P(X_j^B(t) = 0) e^{−p_j t} (19b).

Proof. In network N^{A,p_j}, node j is a funnel node of A and B, and so the funnel equality (18) applies, where X_j^A denotes the state of j in network N^A. Therefore, (19a) follows. The proof for (19b) is similar.
We can restate the funnel inequality and equality in terms of P(X_j^{A,p_j}(t) = 0) and P(X_j^{B,p_j}(t) = 0). This representation is useful when these two probabilities correspond to known expressions (see, e.g., the proof of Theorem 7). Indeed, combining Theorem 5 with Lemma 6 gives

P(X_j(t) = 0) ≥ e^{p_j t} P(X_j^{A,p_j}(t) = 0) P(X_j^{B,p_j}(t) = 0). (20)
• If j is a funnel node of A and B, then inequality (20) becomes an equality, i.e.,

P(X_j(t) = 0) = e^{p_j t} P(X_j^{A,p_j}(t) = 0) P(X_j^{B,p_j}(t) = 0). (21)

• If, however, j is not a funnel node of A and B, then inequality (20) is strict, i.e., P(X_j(t) = 0) > e^{p_j t} P(X_j^{A,p_j}(t) = 0) P(X_j^{B,p_j}(t) = 0) for t > 0.

Proof. This follows from Theorem 5, Theorem 6, and Lemma 6.
Circular networks.
We now present several applications of the funnel theorems. We begin with the discrete Bass model on a circle. This problem was previously analyzed in [3,8]. In this section, we use the funnel theorem to derive a novel inequality for diffusion on circles.
Theory review.
We begin with a short theory review. Let f_{1-sided circle}(t; p, q, M) denote the expected fraction of adopters in a homogeneous one-sided circle with M nodes, where each individual is only influenced by her left neighbor (see Figure 2A), i.e., p_j ≡ p and each node is influenced by its left neighbor at the rate q. Similarly, denote by f_{2-sided circle}(t; p, q_R, q_L, M) the expected fraction of adopters in a homogeneous two-sided circle with M nodes, where each node can be influenced by its left and right neighbors (see Figure 2B) at the corresponding rates q_R and q_L. If q = q_R + q_L, the expected fraction of adopters on one-sided and two-sided homogeneous circles is identical [3], i.e.,

f_{1-sided circle}(t; p, q, M) = f_{2-sided circle}(t; p, q_R, q_L, M). (24)

Therefore, we can drop the subscripts 1-sided and 2-sided. For a finite circle, the expected fraction of adopters is given by the explicit expression (25) [8, Lemma 4.1]. As M → ∞, this expression simplifies to [3]

lim_{M→∞} f_circle(t; p, q, M) = f_1D(t; p, q), (26)

where f_1D denotes the explicit expression for the adoption level on the infinite line.

The calculation of P(X_j^{A,p_j}(t) = 0) is as follows. In sub-network N^{A,p_j}, we removed the edge j ← j+1. As a result, all the clockwise edges {k ← k+1}_{k≠j} become non-influential. Hence, by the indifference principle, we can compute P(X_j^{A,p_j}(t) = 0) on the counter-clockwise one-sided circle with q_R = q_1.

6. Bounded lines. The discrete Bass model on a bounded line can be used to gain insight into the effects of boundaries on the diffusion. This problem was previously analyzed in [8]. In this section, we use the funnel theorems to obtain additional results.
Hence, by the indifference principle, we can compute P X A,p j j (t) = 0 on the counter-clockwise one-sided circle with q R = q 1 , i.e., 6. Bounded lines. The discrete Bass model on a bounded line can be used to gain insight into the effects of boundaries on the diffusion. This problem was previously analyzed in [8]. In this section, we use the funnel theorems to obtain additional results. Figure 3A), i.e., Let us denote the state of node j on the one-side line by X 1−sided j . The adoption probability of node j on the one-sided line (30) is Unlike the Bass model on the circle, there is no translation invariance on the line, and so f 1−sided j may depend on j. Indeed, as j increases, there are more nodes to its left that can adopt externally and then "infect" j through a sequence of internal adoptions. Therefore, we have since f j does not change if we remove the non-influential edge j → j + 1 .
Let N^A be the one-sided line (30) with j ≥ 2 nodes, and let N^B denote the network obtained from N^A when we delete the influential edge 1 → 2. By the dominance principle (Lemma 1),

f_j^{N^B}(t) < f_j^{N^A}(t), t > 0. (32)

By (31) and (32), relation (33) follows: f_j^{1-sided}(t; p, q, M) is strictly monotonically increasing in j. Intuitively, fix any node on a one-sided circle with M nodes. There are M − 1 nodes to its left that can adopt externally and then lead the node to adopt through a chain of internal adoptions. As M increases, there are more nodes to its left. Hence, the adoption probability of the node increases. Finally, for future reference, we note that since lim_{M→∞} f_circle = f_1D, see (26), we have from Corollary 3 that

f_circle(t; p, q, M) < f_1D(t; p, q), t > 0. (34)

6.2. Bounded two-sided line. In [8], we obtained an explicit but cumbersome expression for the adoption probability of nodes on the two-sided line. In this section we use the funnel theorem to obtain a simpler expression.
Consider the discrete Bass model on a two-sided homogeneous anisotropic line with M nodes, where each node can be influenced by its left and right neighbors at the internal rates of q_L and q_R, respectively (see Figure 3B). (35)

Let us denote the adoption probability of node j in a two-sided line (35) with M nodes by f_j^{2-sided}(t; p, q_L, q_R, M), and the corresponding nonadoption probability of node j by [S_j^{2-sided}] := 1 − f_j^{2-sided}. In [9], it was shown that [S_j^{2-sided}] can be written explicitly using the nonadoption probabilities on the one-sided left-going and right-going lines, as follows. Let [S_j^L](t) denote the probability that X_j(t) = 0 when we discard the influences of all the right neighbors by setting q_R ≡ 0 in (35), so that the network becomes a left-going one-sided line. Similarly, we denote by [S_j^R](t) the nonadoption probability of node j on the right-going one-sided line, which is obtained by setting q_L ≡ 0 in (35).

Lemma 10 ([9]). [S_j^{2-sided}](t) = e^{pt} [S_j^L](t) [S_j^R](t), j = 1, . . . , M. (36)
Proof. We provide a simpler proof for this lemma, which makes use of the funnel equality. Let j be an interior node. Let A := {1, . . . , j − 1} and B := {j + 1, . . . , M}. Hence, {A, B, {j}} is a partition of the nodes. Since the sets A and B become disconnected if we remove node j, we have from Lemma 5 that j is a funnel node of A and B. Therefore, the funnel equality (21) applies. By the indifference principle, when we compute [S_j] on a two-sided line, we can delete all the non-influential edges that point away from node j. Therefore, P(X_j^{A,p_j}(t) = 0) = [S_j^L](t) and P(X_j^{B,p_j}(t) = 0) = [S_j^R](t). In addition, P(X_j^{p_j}(t) = 0) = e^{−pt}. Hence, (36) follows.

Let j be the left boundary node. Then on the left-going one-sided line, j is not influenced by any node, and so [S_j^L] = e^{−pt}. In addition, by the indifference principle, [S_j^{2-sided}] = [S_j^R]. Therefore, (36) follows. The proof for the right boundary point is identical.
6.3. Simpler expression for f_j^{2-sided}. An explicit expression for the adoption probability f_j^{2-sided} of nodes on a two-sided line with M nodes was previously obtained in [8] in the isotropic case q_L = q_R. A simpler expression, which is also valid in the anisotropic case q_L ≠ q_R, can be obtained using the funnel equality:

Theorem 8. Consider the discrete Bass model (1) on the two-sided line (35). Then the explicit expression (37) holds, where [S_circle] := 1 − f_circle and f_circle is given by (25).
Proof. By Lemma 10, [S_j^{2-sided}] = e^{pt} [S_j^L] [S_j^R]; expressing [S_j^L] and [S_j^R] through [S_circle] (Lemma 9) gives (37).

On the circle, one-sided and two-sided diffusion are equally fast, see (24). On finite lines, however, this is not the case. Indeed, in [8] we showed that one-sided diffusion is strictly slower than isotropic two-sided diffusion. The availability of the new explicit expression (37) for [S_j^{2-sided}] allows us to generalize this result to the anisotropic two-sided case (q_R ≠ q_L), with a much simpler proof:

Theorem 9. Let q = q_L + q_R. Then for any p, q_L, q_R > 0 and 2 ≤ M < ∞, the adoption level on the one-sided line (30) is strictly lower than on the two-sided line (35) for t > 0.

The key to proving this inequality is to show that it holds for any pair of nodes {k, M + 1 − k} which are symmetric about the midpoint, i.e., that

f_k^{1-sided} + f_{M+1−k}^{1-sided} < f_k^{2-sided} + f_{M+1−k}^{2-sided}, t > 0. (38)

This inequality was originally proved in [8]. That proof, however, was very long and technical. We now give a simpler proof of (38), which makes use of the new explicit expression (37), which was derived using the funnel inequality. Equation (38) can be rewritten as

[S_k^{2-sided}] + [S_{M+1−k}^{2-sided}] < [S_k^{1-sided}] + [S_{M+1−k}^{1-sided}]. (39)

Therefore, by (33) and (37), it suffices to prove for t > 0 that

s(q, k) > e^{pt} s(q_R, k) s(q_L, k) and s(q, M+1−k) > e^{pt} s(q_R, M+1−k) s(q_L, M+1−k).
These inequalities follow from the strict monotonicity of s(q, k) := [S_circle](t; p, q, k) in k; see Corollary 3. Therefore, we proved (39). The condition q_R, q_L > 0 ensures that the two-sided line does not trivially reduce to the one-sided line.
D-dimensional Cartesian networks. Consider an infinite D-dimensional homogeneous Cartesian network Z^D, where nodes are labeled by their D-dimensional coordinate vector j = (j_1, . . . , j_D) ∈ Z^D. For one-sided networks, each node can be influenced by its D nearest neighbors at the rate of q/D, and so the external and internal parameters are

p_j ≡ p and q_{j−e_i, j} = q/D for i = 1, . . . , D, (40a)

where e_i ∈ Z^D is the unit vector in the i-th coordinate. For two-sided Cartesian networks, each node can be influenced by its 2D nearest neighbors at the rate of q/(2D), and so the external and internal parameters are

p_j ≡ p and q_{j±e_i, j} = q/(2D) for i = 1, . . . , D. (40b)

Note that for both (40a) and (40b), the weights of the edges are normalized so that q_j ≡ q, see (2). We denote the fraction of adopters on one-sided and two-sided D-dimensional Cartesian networks by f_D^{1-sided} and f_D, respectively.
In [3], it was observed numerically that f_D is monotonically increasing in D, i.e., f_1D(t, p, q) < f_2D(t, p, q) < f_3D(t, p, q) < · · · for t > 0, and similarly for one-sided diffusion. So far, however, this result was only proved for small times [4, Lemma 14]. We can use the funnel theorem to provide a partial proof, namely, that f_D > f_1D for all D ≥ 2:¹

Theorem 10. For any D ≥ 2 and t, p, q > 0, f_D^{1-sided}(t, p, q) > f_1D(t, p, q) and f_D(t, p, q) > f_1D(t, p, q), where f_1D is given by (26).

¹ Intuitively, this is because the diffusion evolves as a random creation of external seeds, which expand into clusters.
The expansion rate of multi-dimensional clusters grows with the cluster size, whereas 1D clusters grow at a constant rate of q. See [3] for more details.
Proof. Let D ≥ 2, and let N_D denote an infinite D-dimensional one-sided or two-sided Cartesian network. Denote the origin node by 0 := (0, . . . , 0) ∈ Z^D. By translation invariance,

f_D(t) = P(X_0(t) = 1). (41)

Let network N_D^{rays} be obtained from N_D by removing all edges, except for those that lie on lines that go through the origin node 0 and also point towards 0 (see Figure 4). Hence, the origin node 0 in network N_D^{rays} is the intersection of D one-sided rays with edge weights q/D in the one-sided network, and the intersection of 2D one-sided rays with edge weights q/(2D) in the two-sided network. In Lemma 11 below, we will prove that

P(X_0^{rays}(t) = 1) ≥ f_1D(t, p, q). (42)

Since some of the edges that were removed in N_D^{rays} are influential to the origin node, then by the strong dominance principle for nodes (Lemma 1),

P(X_0(t) = 1) > P(X_0^{rays}(t) = 1), t > 0. (43)

Combining (41), (42), and (43) gives the result.
To finish the proof of Theorem 10, we prove

Lemma 11. Let node a_0^N be the intersection of N identical one-sided semi-infinite rays, such that the weight of all edges is q (see Figure 5). Then the nonadoption probability of a_0^N is given by (46).

Proof. We prove by induction on N, the number of rays. By Lemma 9, f_j^{1-sided}(t, p, q, M) = f_circle(t, p, q, j). Therefore, letting j → ∞ and using (29) gives the induction base N = 1 (see [8]). Thus, we assume that (46) holds for N rays, and prove that it also holds for N + 1 rays. Indeed, let A denote the first N rays, and let B denote the (N + 1)-th ray. Then node a_0^{N+1} is a funnel node of A and B. Therefore, by the funnel equality (21),

P(X_{a_0^{N+1}}(t) = 0) = e^{pt} P(X_{a_0^{N+1}}^{A,p}(t) = 0) P(X_{a_0^{N+1}}^{B,p}(t) = 0). (47)

By (12), the two probabilities on the right-hand side are given by (48). Plugging relations (48) into (47) and using (29) gives (46) for N + 1 rays, as desired.
Toroidal networks.
We can also consider the one-sided and two-sided discrete Bass models on finite D-dimensional toroidal networks, for which an analogous lower bound holds. Proof. The only difference from the proof in the infinite-domain case is that in the last stage of the proof of Lemma 11, we use inequality (27) instead of equality (29).

8. Discussion. The analytic tools developed in this study have numerous applications, as already demonstrated in Sections 5-7 and in [7]. Beyond its specific results, this study reveals the intricate relations between the inequality [S_{i,j}] ≥ [S_i][S_j], the funnel theorems, Chebyshev's integral inequalities, and the concepts of influential nodes and funnel nodes. While all of these results are new for the Bass model, some results appeared in some form in the study of epidemiological models, as we describe next.

8.1. Relation to epidemiological models. If we set p_j ≡ 0 in the discrete Bass model (1), we obtain the discrete Susceptible-Infected (SI) model on networks from epidemiology. Therefore, Theorems 3 and 4 (for the inequality [S_{i,j}] ≥ [S_i][S_j]), and all the funnel theorems, hold for the discrete SI model. In [2], Cator and Van Mieghem proved that [S_{i,j}] ≥ [S_i][S_j] in the SIS and SIR epidemiological models. In these models, infected individuals later recover, and recovered individuals can either become infected again (SIS) or are immune from getting infected again (SIR). That study, however, did not include the equivalent of our Theorem 4, namely, the conditions under which this inequality is strict and when it is actually an equality. Indeed, the role played by influential nodes is one of the methodological contributions of our study.
In [13], Kiss et al. derived the funnel equality (18) for the SIR model, for nodes that are vertex cuts. Our funnel theorems are more general in two aspects. First, we show that an equality holds not only when the node is a vertex cut, but also when the node is a funnel node which is not a vertex cut. Second, we show that when the node is not a funnel node, a strict funnel inequality holds, and we find the direction of the inequality.
Finally, we note that the relation between the inequality [S i,j ] ≥ [S i ][S j ] and the funnel theorems was not noted in the above studies.
Universal lower bounds.
In [3], Fibich and Gibori conjectured that the adoption level f (t) on any infinite network that satisfies p j ≡ p and q j ≡ q, see (2), is bounded from below by that on the infinite circle, i.e., f (t) ≥ f 1D (t), see (26). In [7], however, we proved that the optimal universal lower bound for f (t) for all finite and infinite networks that satisfy p j ≡ p and q j ≡ q is given by the adoption level on a two-node circle, i.e., f (t) ≥ f circle (t; p, q, M = 2). Since f circle (t; p, q, M = 2) < f 1D , see (34), this shows that f 1D is not a universal lower bound for f (t) for all networks. Theorem 10 shows, however, that f 1D is a universal lower bound, for all one-sided and two-sided infinite multi-dimensional Cartesian networks.
An equality holds if and only if (f(x) − f(y))(g(x) − g(y)) ≡ 0 almost everywhere in [a, b]^2, which is the case if and only if either f(x) or g(x) is constant.
Appendix C: Proof of (14). Let us fix t > 0. Let us consider the Bass model on networks N and N + with discrete times t n := n∆t, n = 0, 1, . . .
We also define the sub-realization {ω_{−j}^n}_{n=1}^∞, where ω_{−j}^n := {ω_k^n}_{k∈A∪B}. Since there are no edges emanating from j, j_A, j_B, and j_p, this sub-realization completely determines {X_k(t_n)} and {X_k^+(t_n)} for all k ≠ j and n ∈ N. Moreover, if we use the same {ω_{−j}^n}_{n=1}^∞ and Δt for both networks, then X_k(t_n) ≡ X_k^+(t_n) for k ∈ A ∪ B and n = 0, 1, . . .
To finish the proof of (49), we now show that the integrand ∏_{n=1}^N F_n of (51) approaches, uniformly in {ω_{−j}^n}_{n=1}^N, the integrand ∏_{n=1}^N F_n^+ of (52). Indeed, by (50),

F_n^+ = (1 − p_j Δt) (1 − Σ_{k∈A} q_{k,j} X_k^+(t_{n−1}) Δt) (1 − Σ_{k∈B} q_{k,j} X_k^+(t_{n−1}) Δt)
= (1 − p_j Δt) (1 − Σ_{k∈A} q_{k,j} X_k(t_{n−1}) Δt) (1 − Σ_{k∈B} q_{k,j} X_k(t_{n−1}) Δt)
= 1 − (p_j + Σ_{k∈A∪B} q_{k,j} X_k(t_{n−1})) Δt + A_n (Δt)^2
= (1 − (p_j + Σ_{k∈A∪B} q_{k,j} X_k(t_{n−1})) Δt) (1 + A_n (Δt)^2 / (1 − (p_j + Σ_{k∈A∪B} q_{k,j} X_k(t_{n−1})) Δt))
= F_n (1 + A_n (Δt)^2 / (1 − (p_j + Σ_{k∈A∪B} q_{k,j} X_k(t_{n−1})) Δt)),

where

A_n = p_j Σ_{k∈A∪B} q_{k,j} X_k(t_{n−1}) + (Σ_{k∈A} q_{k,j} X_k(t_{n−1})) (Σ_{k∈B} q_{k,j} X_k(t_{n−1})) (1 − p_j Δt).

Hence, by (51)-(53), to finish the proof of (49), we need to show that

lim_{Δt→0} ∏_{n=1}^N (1 + A_n (Δt)^2 / (1 − (p_j + Σ_{k∈A∪B} q_{k,j} X_k(t_{n−1})) Δt)) = 1,

uniformly in {ω_{−j}^n}_{n=1}^N. The three sums that appear in A_n are uniformly bounded:

0 ≤ Σ_{k∈A} q_{k,j} X_k(t_{n−1}), Σ_{k∈B} q_{k,j} X_k(t_{n−1}) ≤ Σ_{k∈A∪B} q_{k,j} X_k(t_{n−1}) ≤ Σ_{k≠j} q_{k,j} = q_j.
"year": 2023,
"sha1": "28782391da9b4859498214f3850b3a31735e4cce",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "28782391da9b4859498214f3850b3a31735e4cce",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Location of upper borders of cavities containing dust and gas under pressure in comets
The distance between the pre-impact surface of Comet 9P/Tempel 1 and the upper border of the largest cavity excavated during ejection of material after the collision of the impact module of the Deep Impact spacecraft with the comet is estimated to be about 5-6 metres if the diameter of the DI transient crater was about 150-200 m. The estimated distance was about 4 m if the diameter was 100 m. This result suggests that cavities containing dust and gas under pressure, located a few metres below the surfaces of comets, may be common.
INTRODUCTION
In 2005 the impact module of the Deep Impact (DI) spacecraft collided with Comet 9P/Tempel 1 (A'Hearn et al. 2005). Ipatov & A'Hearn (2011) analysed images of the cloud of material ejected after this collision. Based on analysis of the images captured during the first 13 minutes, they studied the process of ejection of material and concluded that, in addition to the normal ejection, there was a triggered outburst of small particles. Some excess ejection was observed beginning at 1 s. The outburst was considerable 8-60 s after the impact. It increased the duration of the ejection and the mean velocities of observed ejected particles (compared with the normal ejection). The mean velocities (~100 m s^-1) of small observed particles were almost constant with time elapsed, t_e, since the impact for 10 < t_e < 20 s. Ipatov & A'Hearn (2011) supposed that the outburst was caused by the ejection of material from cavities containing dust and gas under pressure. Velocities of such 'fast' outburst particles could be mainly ~100 m s^-1 (such velocities were obtained at various observations of the leading edge of the DI cloud). Holsapple & Housen (2007) supposed that ejected particles could be accelerated by the dust-gas interaction. Such acceleration could be important for time intervals of several hours. Ipatov & A'Hearn (2011) came to their conclusions based on analysis of variations in brightness in images made during the first 13 minutes and considering distances, R, from the place of ejection between 1 and 10 km. They concluded that particles could not increase their velocities by more than a few metres per second during not more than a few minutes, when the particles moved at R between 1 and 10 km. Ipatov and A'Hearn analysed the brightness of the DI cloud at such distances but did not consider the plume base at R < 1 km, in contrast to Richardson et al. (2007), who presented pictures of the evolution of the plume. In many DI images, the region corresponding to R < 1 km consisted mainly of saturated pixels (in 8-bit images received from the DI spacecraft, the brightness of most saturated pixels was 255). Ipatov & A'Hearn (2011) analysed the sky brightness mainly at the distances R at which most observed material did not fall back onto the comet. According to Holsapple & Housen (2007), only 1.4 per cent of ejected DI material (10^5 kg) did not return to the comet surface.
It is difficult to explain the time variations in the brightness of the DI cloud at 1 < R < 10 km without consideration of the triggered outburst with velocities of ~100 m s^-1. Several authors (e.g. Richardson et al. 2007) assume that the variation in brightness of the DI cloud was caused by variation in particle size distribution resulting from the striking of a layered target. It has also been concluded that sub-micron water particles were observed from 3 s to 45 min after impact and that the ejection of icy particles began when the crater depth reached 1 m. We suppose that the model of a layered target plays some role in the explanation of the variation of brightness of the DI cloud, but it cannot explain all details of such variation (for example, why at ~10 s there was simultaneously a jump by 50° in the direction from the place of ejection to the brightest pixel in an image of the DI cloud, an increase in the rate of ejection of small particles, and an increase in the brightness of the brightest pixel; why at the time of ejection t_e ~ 60 s there was a sharp decrease in the rate of ejection of small particles; why at time t ~ 60 s after the impact the direction from the place of ejection to the brightest pixel returned to the direction at 1 < t < 12 s; why the mean ejection velocities of observed particles were almost the same at t_e ~ 10-20 s; etc.). Holsapple & Housen (2007) concluded that conventional cratering cannot be the sole key to the observed plume of the DI event, and that a volatile subsurface could greatly enhance the amounts of ejected mass. Ipatov & A'Hearn (2011) noted that the additional ejection at t_e < 60 s was different for different directions. Together with the variation in the direction to the brightest pixel mentioned above, this suggests that at t_e ~ 10-60 s the additional ejection was not only the result of ejection from a volatile subsurface. Jorda et al. (2007) concluded that the diameters of particles that contributed most to the brightness of the DI cloud were smaller than 3 μm. The sizes of the 'fast' outburst particles could be mostly less than a few microns (although a few relatively large pieces of cavity borders could be ejected), and therefore their contribution to the total mass of ejected material would be much smaller than their contribution to the brightness of the DI cloud. At the beginning of the main outburst (at t_e ≈ 8 s), typical velocities of ejected particles were about 100 m s^-1. Holsapple & Housen (2007) concluded that only 370 kg of ejected material would be travelling at such a velocity. As typical sizes of outburst particles were smaller than those from the normal ejection, the total mass of outburst particles ejected at that time could be much smaller than 370 kg, although their contribution to the brightness of the DI cloud was noticeable.
Based on studies of the ejecta plume, Richardson et al. (2007) estimated the size of the DI transient crater. The crater was later imaged by the Stardust spacecraft; it is noted on the mission website that the crater is estimated to be 150 metres in diameter. In this image, one can see that the diameter of the brightest part of the ring zone of ejected material around the crater is about 90-100 m. The diameters of the inner and outer edges of the ring zone are ~60-70 m and ~130-140 m, respectively. The ring zone may correspond to ejected material, and the diameter of the excavation zone might be about 100 m. The crater size observed by the Stardust spacecraft could be different from the transient crater size just after its formation. This is because the sublimation process, as well as various modification processes, changed the crater size after its formation.
Based on the time of the beginning of excavation of the main cavity (t_e ≈ 8 s) obtained by Ipatov & A'Hearn (2011) and on the above estimates of the diameter of the DI crater, in Section 2 we estimate the distance d_cavDI between the pre-impact surface of Comet Tempel 1 and the upper border of the main excavated cavity. Such estimates enable a better understanding of the distances d_cav between the surfaces of comets and the upper borders of cavities containing dust and gas under pressure. Possible values of d_cav for several comets are discussed in Section 3.
CAVITIES CONTAINING DUST AND GAS UNDER PRESSURE IN COMET 9P/TEMPEL 1
It is considered (e.g. Croft 1980; Melosh 1989) that ejected material originates from an excavation cavity which has a geometry distinct from that of the transient crater. The excavation cavity and the transient crater have the same diameter d_tc, but the depth, d_he, of the excavation cavity is ~0.1 d_tc, or about one-third of the transient crater depth and, in the case of simple bowl-shaped craters, about one-half of the depth of the final apparent crater. For example, d_he ~ 10 m for d_tc = 100 m. For some craters, Croft (1980) considered the ratio d_he/d_tc to be in the range [0.09, 0.17].
For theoretical models (e.g. Holsapple & Housen 2007), during most of the crater formation time (except for the initial and final stages), the diameter, d_c, of a crater at time t_e elapsed since the impact is proportional to t_e^γ, where γ is about 0.25-0.4. No energy dissipation corresponds to γ = 0.4. Porous material has a greater dissipation; γ = 0.29 for dry soils with a porosity of 30-35 per cent, and γ ranges from 0.25 to 0.29 for highly porous materials. In their table 4, Holsapple & Housen (2007) considered models at 0.29 ≤ γ ≤ 0.355. As comets are porous, we can probably assume that 0.25 < γ ≤ 0.3.
According to fig. 12 in Holsapple (1993), the diameter, d_c, of a crater grows faster than t_e^γ in the initial stage (reaching the value denoted as d_cav1), but hardly grows (and can slightly decrease) in the final stage. The duration t_1 of the initial stage is usually less than 0.1 T_e, where T_e is the time between the beginning of ejection and the end of the intermediate stage (at this stage, d_c is proportional to t_e^γ). In the mentioned figure, for water the duration t_1 is even less than 10^-3 T_e. As noted by Richardson (personal communication 2011), as a result of significant impactor penetration into the porous target, the depth of the initial excavation can grow even faster than for analytical models. The depth d_cavDI of the DI crater at the time, t_eb, of the beginning of excavation of the main cavity is d_cavDI = d_cav1 + (d_he - d_cav1) × ((t_eb - t_1)/(T_e - t_1))^γ for t_1 < t_eb < T_e. It is difficult to estimate accurately the values of d_cav1 and t_1 for the DI crater.
Ipatov & A'Hearn (2011) concluded that the outburst and excavation of a large cavity began ~8 s after the DI collision. Supposing d_c to be proportional to t_e^γ also at the initial stage, we can estimate the lower limit of the depth, d_cavDI, of the DI crater at the time, t_eb, of the beginning of excavation of the main cavity as d_cavmin = d_he × (t_eb/T_e)^γ. At d_he = 0.1 d_tc, the value of d_cavmin equals d_cmn = 0.1 d_tc × (t_eb/T_e)^γ. At t_eb = 8 s, the values of d_cmn for three values of γ are presented in Table 1. The real value of d_cavDI is greater by some value d_h1 than d_cmn because, during the short initial stage, the real growth of the crater is greater than that for the model used for calculation of d_cmn. We suppose that the difference d_h1 can be about 1 m. In Table 1 we consider that d_cavDI = 0.1 d_tc × (t_eb/T_e)^0.3 + d_h1, where d_h1 = 1 m.
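As a quick sanity check of these estimates, the scaling above can be evaluated directly. The short sketch below (not from the original paper) plugs the quoted values t_eb = 8 s and d_h1 = 1 m, together with the T_e values adopted in the next paragraph, into d_cmn = 0.1 d_tc (t_eb/T_e)^γ.

```python
import numpy as np

# Sketch: evaluates d_cmn = 0.1 * d_tc * (t_eb / T_e)**gamma and
# d_cavDI = d_cmn(gamma=0.3) + d_h1, using the values quoted in the text.
t_eb = 8.0    # s, beginning of excavation of the main cavity
d_h1 = 1.0    # m, assumed correction for the fast initial growth stage

# (d_tc [m], T_e [s]) pairs; T_e follows the upper estimates given in the text.
cases = [(100.0, 330.0), (150.0, 500.0), (200.0, 660.0)]

for d_tc, T_e in cases:
    for gamma in (0.25, 0.3, 0.4):
        d_cmn = 0.1 * d_tc * (t_eb / T_e) ** gamma
        print(f"d_tc={d_tc:5.0f} m, gamma={gamma:4.2f}: d_cmn = {d_cmn:4.1f} m")
    d_cavDI = 0.1 * d_tc * (t_eb / T_e) ** 0.3 + d_h1
    print(f"  -> d_cavDI ≈ {d_cavDI:.1f} m (gamma = 0.3, d_h1 = 1 m)")
```

For γ = 0.3 this reproduces the depths of about 4 m at d_tc = 100 m and about 5-6 m at d_tc = 150-200 m quoted later in the text.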
In table 1 of Holsapple & Housen (2007), the crater formation time, T_cr (which is slightly greater than T_e), is proportional to d_tc (strength scaling) or d_tc^{1/2} (gravity scaling), and T_cr = 288 s at d_tc = 88 m for sand-gravity scaling. At T_cr proportional to d_tc, the latter estimate of T_cr corresponds to 330 s at d_tc = 100 m. Supposing that T_cr = 330 s at d_tc = 100 m (these values are also presented in table 4 in Richardson et al. 2007) and considering T_cr to be proportional to d_tc^{1/2} (a lower estimate) or d_tc (an upper estimate), we obtain that T_cr is about 470-660 s at d_tc = 200 m, 400-500 s at d_tc = 150 m, and 185-230 s at d_tc = 50 m. For the estimates of the values of d_cmn presented in Table 1, we assumed that the values of T_e are equal to the upper values of the intervals of T_cr presented above. For T_e = 470 s and d_tc = 200 m, the values of d_cmn are greater by a factor of 2^{γ/2} (2^{γ/2} equals 1.09, 1.11, and 1.15 at γ equal to 0.25, 0.3, and 0.4, respectively) than the values of d_cmn at T_e = 660 s presented in Table 1. Therefore, the values of d_cmn obtained at the lower values of the above intervals of T_cr are almost the same as those in the table.
It is hoped that the values of T_e presented in Table 1 can be used. Nevertheless, below we present estimates of d_cavDI at much smaller values of T_e. The smaller the values of T_e, the greater the values of d_cavDI. If the values of T_e are not known, then it is possible to estimate the upper limit of d_cavDI using estimates of the lower limit of T_e. During the intermediate stage of crater formation (when the diameter d_c of a crater is proportional to t_e^γ), time usually increases by more than a factor of 10 (see fig. 12 in Holsapple 1993). Therefore, during the time interval [0.1 T_e, T_e], d_c increases by a factor of 10^γ, where 10^γ equals 2, 1.8, and 2.5 at γ equal to 0.3, 0.25, and 0.4, respectively. These estimates show that at t_eb = 8 s and T_e > 80 s (this inequality is fulfilled for the DI crater), the maximum value, d_cavmax, of d_cavDI does not exceed d_he/10^γ + d_h1, which is in the range [0.4 d_he + d_h1, 0.56 d_he + d_h1]; that is, d_cavmax ≤ 0.056 d_tc + d_h1 at d_he/d_tc = 0.1 (e.g. d_cavmax ≤ 0.05 d_tc + d_h1 for γ = 0.3). At T_e = 80 s and γ = 0.3, the value of 0.05 d_tc is greater by about 2 and 5 m than the values of d_cmn presented in Table 1 for d_tc = 100 m and d_tc = 200 m, respectively.
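The factor-of-10^γ argument can be checked in the same way; the lines below (again not from the original paper) simply evaluate the bound d_cavmax ≤ d_he/10^γ + d_h1 for the three values of γ used in the text.

```python
# Sketch: upper-bound argument of the text. Over [0.1*T_e, T_e] the diameter
# grows by 10**gamma, so for t_eb = 8 s and T_e > 80 s,
# d_cavDI <= d_he / 10**gamma + d_h1, with d_he = 0.1 * d_tc.
d_h1 = 1.0
for gamma in (0.25, 0.3, 0.4):
    factor = 1.0 / 10 ** gamma   # fraction of d_he reached by 0.1 * T_e
    print(f"gamma={gamma}: d_cavmax <= {factor:.2f}*d_he + d_h1"
          f" = {0.1 * factor:.3f}*d_tc + d_h1")
```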
Based on the values of d_cavDI presented in Table 1, it is concluded that the distance between the pre-impact surface of the comet and the upper border of the main excavated cavity is about 5-6 m for the estimates (150-200 m) of the diameter of the DI transient crater presented by Schultz et al. (2012). The estimated distance is 4 m for a diameter of 100 m. The excavated cavity could be located at some distance from the centre of the DI crater (i.e. not directly below its centre). Therefore, the distance d_cavDI between the pre-impact surface of the comet's nucleus and the upper border of the cavity could be smaller than the depth of the crater at the beginning of excavation of the cavity. On the other hand, as a result of cracks caused by the impact, the outburst from the cavity could begin before the excavation of its upper border, and consideration of cracks can increase the estimate of d_cavDI.
The largest cavity excavated after the DI collision could be relatively deep because a considerable excess ejection lasted for ~50 s (at 8 < t_e < 60 s). This ejection probably came from the same cavity because the direction from the place of ejection to the brightest pixel in images made at 12 < t < 60 s was quite different from the direction at t < 12 s and t > 60 s, and one of the 'rays of ejection' (i.e. rays of brighter material in the DI cloud with a vertex at the place of ejection) disappeared at t ≈ 60 s. The existence of 'rays of ejection' in DI images made at t ≈ 13 min argues in favour of the ejection of particles from cavities at t_e ≈ 10 min. The ejection of slower-moving particles from a 'fresh' surface of the DI crater could continue for more than 10 min.
For a small cavity (or cavities) excavated at t_e ≈ 1 s, the depth of the crater could be estimated as d_cmn/8^γ + d_h1 (where the values of d_cmn are presented in Table 1, and 8^γ is about 2) and could be ~2-3 m.
CAVITIES IN OTHER COMETS
The distance between the pre-impact surface of Comet 9P/Tempel 1 and the upper border of the largest excavated cavity (about 4-6 m) and the sizes of particles inside the cavities (a few microns) are in good agreement with the results obtained by Kossacki & Szutowicz (2011). These authors made calculations for several models of the explosion of Comet 17P/Holmes. They concluded that the nonuniform crystallization of amorphous water ice itself is probably not sufficient for an explosion, which could instead be caused by a rapid sublimation of the CO ice leading to a rise of gas pressure above the tensile strength of the nucleus. In their models, the initial sublimation front of the CO ice was located at a depth of 4 m, 10 m, or 20 m, and calculations were finished when the CO pressure exceeded the threshold value of 10 kPa. It was shown that the pressure of CO vapour can rise to this value only when the nucleus is composed of very fine grains, a few microns in radius.
The estimates of the locations of cavities in Comet 9P/Tempel 1 presented in Section 2 and the studies of the initial sublimation front of the CO ice in Comet 17P/Holmes discussed in the previous paragraph show that the upper borders of relatively large cavities containing dust and gas under pressure can be located at a depth of 4-20 m below the surfaces of different comets. After some time, gas under pressure can make its way from a cavity to the surface of a comet, and the gas formed later will follow the same path at a relatively low pressure. Therefore, probably, the more time a comet has spent close to the Sun, the greater the distances from the surface of the comet to the upper borders of cavities containing dust and gas under considerable pressure.
It is usually considered that the main sources of gas pressure are water ice sublimation, the sublimation of a more volatile ice such as CO or CO2 at a lower temperature than that required for water ice, and the crystallization of amorphous ice in the interior of a porous nucleus. Other potential mechanisms of outbursts that have been discussed include the polymerization of hydrogen cyanide (HCN), thermal stresses, the annealing of amorphous water ice, and meteoritic impacts (see e.g. Gronkowski & Sacharczuk 2010; Ivanova et al. 2011; more references are given in Ipatov 2012). The porous structure of comets provides enough space for sublimation and argues in favour of the existence of cavities.
The projection of the velocity of the leading edge of the DI cloud (onto the plane perpendicular to the line of sight) was about 100-200 m s^-1 (see references in Ipatov & A'Hearn 2011) and is typical for outburst particles ejected from comets. According to Feldman et al. (2007), in the 2005 June 14 natural outburst from Comet Tempel 1, velocities of ejection were 60-145 m s^-1. Sarugaku et al. (2010) found that the dust cloud caused by the outburst from Comet 217P/LINEAR expanded at a velocity of 120-140 m s^-1. Velocities of outburst particles ejected from Comet 29P/Schwassmann-Wachmann 1 were about 250±80 m s^-1 (Trigo-Rodriguez et al. 2010). The similarity of the velocities of particles ejected in triggered and natural outbursts shows that these outbursts could be caused by similar internal processes in comets.
CONCLUSIONS
The upper border of the largest cavity excavated during the ejection of material after the collision of the impact module of the Deep Impact spacecraft with Comet 9P/Tempel 1 could be located at a depth of about 5-6 metres below the pre-impact surface of the comet if the diameter of the DI transient crater was about 150-200 m, as suggested by Schultz et al. (2012). The estimated depth is 4 m if the diameter was 100 m. The largest cavity excavated after the DI collision could be relatively deep because a considerable excess ejection lasted for ~50 s.
These estimates of the depth are in accordance with the depth (4-20 m) of the initial sublimation front of the CO ice in the models of the explosion of Comet 17P/Holmes considered by Kossacki & Szutowicz (2011). Our studies suggest that cavities containing dust and gas under pressure and located a few metres below the surfaces of comets can be frequent. | 2019-04-13T03:00:47.055Z | 2012-05-27T00:00:00.000 | {
"year": 2012,
"sha1": "75042de232779ae239a348ceea55bee50bddd920",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/423/4/3474/4905512/mnras0423-3474.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "5cc40cfdac89c589673d4f04b33981e23ee938d0",
"s2fieldsofstudy": [
"Geology",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
68498218 | pes2o/s2orc | v3-fos-license | Current status of new modes of mechanical ventilation
Over the past 10 years, a number of different modes of mechanical ventilation have been introduced, in addition to changes in the philosophy by which we apply mechanical ventilation. Of primary concern today is the prevention of ventilator-induced lung injury. Along with this concern has come a change in the level of carbon dioxide considered to be acceptable in critically ill patients (permissive hypercapnia) and the introduction of adjunct therapies (tracheal gas insufflation [TGI]) designed to reduce carbon dioxide. In addition, the focus of ventilator delivery has moved from volume to pressure. Pressure support and pressure control have become the standards for ventilatory modes.
VENTILATOR-INDUCED LUNG INJURY
Mechanical ventilation is a nonphysiological process. Pressure, volume and fraction of inspired oxygen beyond the levels that the lung normally tolerates are frequently used. As a result, lung injury may be caused or extended by the process of mechanical ventilation. Lung injury may be manifest in two forms: gross barotrauma or parenchymal injury similar to acute respiratory distress syndrome (ARDS) (Table 1). Three conditions must usually be present for gross barotrauma to develop: disease; high transpulmonary pressure; and overdistension (1). The precise pressures and volumes having a high likelihood for the development of barotrauma are unknown. However, because the maximum transpulmonary pressure gradient developed by healthy individuals is about 35 to 40 cm H2O, it seems reasonable to expect that the probability of barotrauma will increase if pressure is applied above this level (2).
Numerous animal studies (eg, in rats, sheep, dogs and pigs) have demonstrated parenchymal damage after relatively short periods of mechanical ventilation when peak airway pressures are maintained at about 45 cm H2O (3,4). An important finding of these studies was that the extent of the damage was decreased if positive end-expiratory pressure (PEEP) was maintained above the inflection point on the compliance curve of the lung (3) or if the thorax was strapped (preventing hyperinflation because of decreased chest wall compliance) (4). These data have led most authorities on mechanical ventilation to recommend limiting end-inspiratory plateau pressure and thus the resulting inflation volume (5). The term 'volutrauma' has been used to describe the lung injury induced by mechanical ventilation to emphasize that it is local overdistension that causes lung injury and not pressure per se (2). If local overdistension is limited by strapping of the thorax (or any other mechanism that decreases chest wall compliance), no injury develops despite high alveolar pressure. From a practical perspective, most have indicated that peak alveolar pressure (end-inspiratory plateau pressure) should be kept below 35 cm H2O (5).
PERMISSIVE HYPERCAPNIA
Permissive hypercapnia is the deliberate limitation of ventilatory support to avoid regional or global overdistension, allowing PaCO2 to rise to levels greater than normal (50 to 100 mmHg) (6). Allowing PaCO2 to rise to these levels should be considered when the only alternative is a potentially dangerous increase in peak alveolar pressure. The potential adverse effects of elevated PaCO2 are listed in Table 2 (6). Most of the more important clinical problems occur at PaCO2 levels above 150 mmHg. However, even small increases in PaCO2 increase cerebral blood flow, and permissive hypercapnia is generally contraindicated when intracranial pressure is increased (eg, acute head injury). Elevated PaCO2 also stimulates ventilation, but patients are usually sedated and paralyzed in the settings where permissive hypercapnia is maintained.
Permissive hypercapnia may adversely affect the oxygenation status of some patients. Elevated PaCO2 and low pH shift the oxyhemoglobin dissociation curve to the right, decreasing the affinity of hemoglobin for oxygen and decreasing oxygen loading in the lungs, but facilitating the unloading of oxygen at the tissues. In addition, as illustrated by the alveolar gas equation, an increase in alveolar PCO2 results in a decrease in alveolar PO2. For each PaCO2 rise of 1 mmHg, PaO2 decreases by about 1 mmHg. Whenever permissive hypercapnia is used, optimal efforts to maximize oxygenation should be attempted.
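A quick numeric check of this point is sketched below, using the simplified alveolar gas equation PAO2 = FiO2 × (Pb − PH2O) − PaCO2/R with standard textbook values (Pb = 760 mmHg, PH2O = 47 mmHg, R = 0.8); with R = 0.8 the drop is closer to 1.25 mmHg per mmHg of PaCO2, consistent with the text's "about 1 mmHg" approximation.

```python
def alveolar_po2(fio2, paco2, pb=760.0, ph2o=47.0, r=0.8):
    """Simplified alveolar gas equation: PAO2 = FiO2*(Pb - PH2O) - PaCO2/R."""
    return fio2 * (pb - ph2o) - paco2 / r

for paco2 in (40, 60, 80):
    print(f"PaCO2 {paco2} mmHg -> PAO2 {alveolar_po2(0.21, paco2):.0f} mmHg (room air)")
```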
The effect of carbon dioxide on the cardiovascular system is more difficult to predict because carbon dioxide elicits competing responses from the cardiovascular system (7). Carbon dioxide directly stimulates or depresses some parts of the cardiovascular system, but opposite effects can occur via stimulation of the autonomic nervous system. It is thus difficult to predict the precise response of the cardiovascular system to permissive hypercapnia in any given patient (7). However, clinically, an increase in PCO2 normally causes pulmonary hypertension. Dosages of pharmaceutical agents affecting the cardiovascular and autonomic nervous systems may need to be adjusted in the presence of permissive hypercapnia, but this is due to the resulting acidosis and not to elevated PCO2 (6).
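Because the clinically limiting variable is the resulting acidosis rather than the PCO2 itself, a rough sense of the pH cost of hypercapnia can be sketched with the Henderson-Hasselbalch equation; the bicarbonate values below are illustrative, not patient data.

```python
import math

def ph(paco2_mmhg, hco3_meq_l):
    """Henderson-Hasselbalch for the bicarbonate buffer (pK 6.1, CO2 solubility 0.03)."""
    return 6.1 + math.log10(hco3_meq_l / (0.03 * paco2_mmhg))

# Acute rise in PaCO2 with no renal compensation versus a compensated patient.
print(f"PaCO2 80, HCO3 24 (acute):       pH = {ph(80, 24):.2f}")
print(f"PaCO2 80, HCO3 35 (compensated): pH = {ph(80, 35):.2f}")
```

The compensated case lands near the pH 7.20 to 7.25 range described as tolerable below, which is why a gradual rise in PaCO2 (allowing renal compensation) is better tolerated than an abrupt one.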
The primary factor limiting the use of permissive hypercapnia is pH. Patients without primary cardiovascular disease or renal failure usually tolerate a pH of 7.20 to 7.25, and younger patients may tolerate an even lower pH (6). The specific acceptable minimal pH needs to be determined on an individual patient basis. Allowing PCO2 to rise gradually from the onset of ventilation allows gradual renal compensation without severe acidosis. Abrupt changes in ventilator strategies that result in rapid and marked elevation of PaCO2 are more poorly tolerated. Whether alkalizing agents should be administered to manage acidosis induced by permissive hypercapnia is debatable. In the setting of cardiac arrest, sodium bicarbonate use has been questioned because of the resulting increased intracellular acidosis (8). Its use in permissive hypercapnia, however, has not been extensively studied. One can expect a short term increase in carbon dioxide load when sodium bicarbonate is administered, which is exhaled over time if the level of ventilation is held constant. However, whether the use of alkalizing agents has any effect on the overall tolerance of permissive hypercapnia is not known.
TGI: TGI is an adjunct to mechanical ventilation used in settings of elevated PaCO2 (9). A secondary flow of gas (4 to 12 L/min) is injected distal to the tip of the endotracheal tube but proximal to the carina through a small bore catheter. TGI is proposed to lower PaCO2 by reducing dead space ventilation via washout of carbon dioxide from the large airways at end-expiration, injection of part or all of the tidal volume (VT) at the trachea and enhanced gas mixing by the high velocity gas flow injected (10). Application can be either continuous or during expiration only. Preliminary data indicate that PaCO2 is decreased in direct proportion to TGI flow and that TGI is more effective the greater the baseline PaCO2 (10). Of concern is that TGI elevates peak alveolar pressures, increases VT and causes auto-PEEP (11). As a result, it appears that expiratory phase TGI or volume-adjusted TGI would be the safest approach to TGI (11). With volume-adjusted TGI, VT during volume-controlled ventilation is decreased by the TGI volume delivered during the inspiratory phase. Although TGI is promising, it must be considered experimental; problems with humidification, system overpressure, and the ability to monitor changes in peak alveolar pressure and auto-PEEP must be solved before TGI can be recommended for general clinical use.
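The volume-adjusted TGI bookkeeping described above amounts to subtracting the TGI volume delivered during inspiration from the set VT; a minimal sketch follows, with flow and timing values chosen for illustration only.

```python
# Sketch of volume-adjusted TGI: during volume-controlled ventilation the set
# VT is reduced by the TGI volume delivered in the inspiratory phase.
tgi_flow_l_min = 6.0      # continuous TGI flow (text range: 4-12 L/min)
t_insp_s = 1.0            # inspiratory time (assumed)
vt_target_ml = 500.0      # desired tidal volume (assumed)

tgi_insp_volume_ml = tgi_flow_l_min * 1000.0 / 60.0 * t_insp_s   # 100 ml here
vt_ventilator_ml = vt_target_ml - tgi_insp_volume_ml
print(f"TGI adds {tgi_insp_volume_ml:.0f} ml during inspiration;"
      f" set ventilator VT to {vt_ventilator_ml:.0f} ml")
```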
PRESSURE- VERSUS VOLUME-TARGETED VENTILATION
There are distinct advantages as well as disadvantages to both pressure-targeted and volume-targeted ventilation (Table 3). The decision to employ one or the other approach is generally based on personal bias and on which of the advantages and disadvantages are considered most important. Review of the literature, with a focus on well-defined, controlled studies, indicates that there are no differences in physiological effects, development of barotrauma or acute lung injury, or outcome between pressure and volume ventilation regardless of the inspiratory:expiratory (I:E) ratio used (12,13). This is particularly true when pressure ventilation is contrasted with volume ventilation with a decelerating flow waveform and an end-inspiratory plateau (14).
Pressure-targeted ventilation -advantages and disadvantages:
The major advantage of pressure-targeted ventilation is that peak inspiratory and alveolar pressures are maintained at a constant level. This may decrease the likelihood of localized overdistension with associated barotrauma and acute lung injury. In addition, pressure ventilation is able to respond on a breath-to-breath basis to changes in ventilatory demand, thus increasing patient-ventilator synchrony and reducing patient effort. The major disadvantage is that VT varies as impedance changes, increasing the likelihood of blood gas alterations and making it more difficult to identify major alterations in impedance rapidly.
Volume-targeted ventilation -advantages and disadvantages:
The major advantage of volume-targeted ventilation is the delivery of a constant VT. This ensures a consistent level of alveolar ventilation and results in easily identifiable changes in peak inspiratory pressure as impedance to ventilation changes. However, with volume ventilation, peak alveolar pressure may change dramatically as impedance changes, potentially increasing the risk of ventilator-induced lung injury. In addition, volume ventilation is unable to respond to changes in patient demand. As a result, patient-ventilator dyssynchrony and increased patient effort can be anticipated with volume-targeted ventilation (a simple numerical contrast of the two targets is sketched after this section). Combined pressure/volume modes: A number of manufacturers have developed modes (pressure augmentation, volume support, pressure-regulated volume control) of ventilation that combine the beneficial aspects of both pressure and volume ventilation and limit the disadvantages of each. Preliminary data indicate that these approaches are successful in marrying the two targets (15,16). As a result, based on the current literature, one must question whether standard volume ventilation is ever indicated. In both the assisted and the controlled ventilated patient, pressure-targeted or combined pressure- and volume-targeted approaches appear to be better at preventing circumstances associated with ventilator-induced lung injury and at improving patient-ventilator synchrony.
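The numerical contrast mentioned above can be sketched with a single-compartment model (resistance ignored); the compliance and pressure values are illustrative.

```python
# One-compartment sketch contrasting the two targets as respiratory-system
# compliance falls (e.g. in worsening ARDS). Resistance is ignored.
compliances_ml_cmh2o = [50.0, 30.0, 20.0]
peep = 10.0

print("Pressure-targeted (driving pressure fixed at 15 cmH2O): VT varies")
for c in compliances_ml_cmh2o:
    print(f"  C={c:.0f}: VT = {15.0 * c:.0f} ml")

print("Volume-targeted (VT fixed at 450 ml): alveolar pressure varies")
for c in compliances_ml_cmh2o:
    print(f"  C={c:.0f}: plateau = {peep + 450.0 / c:.1f} cmH2O")
```

The same compliance drop that silently shrinks VT under pressure targeting silently raises plateau pressure toward the 35 cmH2O limit under volume targeting, which is exactly the trade-off described in the text.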
INVERSE RATIO VENTILATION
As discussed earlier, no differences have been reported between volume and pressure ventilation compared at normal or inverse I:E ratios (12,13). However, these studies have helped to focus attention on the methods available to increase mean airway pressure in order to improve oxygenation. This discussion has particular relevance in the ARDS patient, in whom oxygenation is a particular problem. Of primary concern is setting PEEP at a level that ensures recruitment of lung units (about 12 to 15 cm H2O). Once PEEP is established at this level, oxygenation is directly related to mean airway pressure. Extending inspiratory time is one method of increasing mean airway pressure without increasing peak alveolar pressure. The emphasis should not be on establishing a specific I:E ratio but on establishing the mean airway pressure that allows the oxygenation target to be met. Inspiratory time extension should be limited by the development of auto-PEEP (17). Once auto-PEEP starts to develop, increases in inspiratory time should stop and other approaches (set PEEP) to increasing mean airway pressure should be used. Auto-PEEP should be avoided because it results in a much less uniform increase in lung unit total PEEP and functional residual capacity than applied PEEP (16). Because auto-PEEP depends on local lung unit time constants, lung units that are most stiff have the least auto-PEEP, whereas lung units that are most compliant have the greatest increase in auto-PEEP (17).
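For a square-wave pressure breath, the effect of extending inspiratory time on mean airway pressure can be approximated as mPaw ≈ PEEP + (PIP − PEEP) × Ti/Ttot; the sketch below uses illustrative ventilator settings, not recommendations.

```python
# Sketch: square-wave approximation of mean airway pressure under pressure
# control, showing how extending inspiratory time raises mPaw without
# raising peak alveolar pressure.
pip, peep, rate = 30.0, 12.0, 20.0        # cmH2O, cmH2O, breaths/min
t_tot = 60.0 / rate
for i, e in ((1, 2), (1, 1), (2, 1)):     # I:E ratios, including inverse ratio
    ti = t_tot * i / (i + e)
    mpaw = peep + (pip - peep) * ti / t_tot
    print(f"I:E = {i}:{e} -> Ti = {ti:.1f} s, mPaw ≈ {mpaw:.1f} cmH2O")
```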
TABLE 2 Potential adverse effects of elevated PaCO2
• Shift in the oxyhemoglobin dissociation curve to the right
• Decreased alveolar PO2
• Both stimulation and depression of the cardiovascular system
• Stimulation of ventilation
• Dilation of the vascular bed
• Increased intracranial pressure
• Anesthesia (PaCO2 200 mmHg)
• Decreased renal blood flow (PaCO2 150 mmHg)
• Leakage of intracellular potassium (PaCO2 150 mmHg)
• Alteration of the action of pharmacological agents (a result of intracellular acidosis)
TABLE 3 Advantages and disadvantages of pressure- and volume-targeted ventilation
| 2019-03-06T14:18:19.666Z | 1996-01-01T00:00:00.000 | {
"year": 1996,
"sha1": "fece8907f0aeb68994a8a9dcd793a5c722e5efbc",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/crj/1996/248358.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fece8907f0aeb68994a8a9dcd793a5c722e5efbc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3937788 | pes2o/s2orc | v3-fos-license | TGFβ1-Induced Differentiation of Human Bone Marrow-Derived MSCs Is Mediated by Changes to the Actin Cytoskeleton
TGFβ is a potent regulator of several biological functions in many cell types, but its role in the differentiation of human bone marrow-derived skeletal stem cells (hMSCs) is currently poorly understood. In the present study, we demonstrate that a single dose of TGFβ1 prior to induction of osteogenic or adipogenic differentiation results in increased mineralized matrix or increased numbers of lipid-filled mature adipocytes, respectively. To identify the mechanisms underlying this TGFβ-mediated enhancement of lineage commitment, we compared the gene expression profiles of TGFβ1-treated hMSC cultures using DNA microarrays. In total, 1932 genes were upregulated, and 1298 genes were downregulated. Bioinformatics analysis revealed that TGFβ1 treatment was associated with an enrichment of genes in the skeletal and extracellular matrix categories and the regulation of the actin cytoskeleton. To investigate further, we examined the actin cytoskeleton following treatment with TGFβ1 and/or cytochalasin D. Interestingly, cytochalasin D treatment of hMSCs enhanced adipogenic differentiation but inhibited osteogenic differentiation. Global gene expression profiling revealed a significant enrichment of pathways related to osteogenesis and adipogenesis and of genes regulated by both TGFβ1 and cytochalasin D. Our study demonstrates that TGFβ1 enhances hMSC commitment to either the osteogenic or adipogenic lineages by reorganizing the actin cytoskeleton.
Introduction
Fat and bone tissues both originate from bone marrow progenitor cells called skeletal stem cells, also known as bone marrow-derived multipotent stromal cells or mesenchymal stem cells (MSCs). The formation of these tissues is regulated throughout an organism's lifetime by homeostatic mechanisms within the marrow cavity. It has been suggested that an imbalance between osteogenic and adipogenic lineage commitment and differentiation is responsible for age-related impairment of bone formation, and a number of therapeutic interventions targeting and activating MSCs, thus enhancing bone mass, have been proposed. Indeed, the identification of novel strategies to steer human skeletal (mesenchymal) stem cell differentiation towards the production of osteoblastic cells, thus increasing bone formation, is very topical in the bone biology field.
The transforming growth factor (TGF) superfamily consists of over 40 members, including activins, inhibins, bone morphogenetic proteins (BMPs), growth and differentiation factors (GDFs), and TGFβs [1]. TGF family members are multifunctional regulators of cell growth and differentiation, playing pivotal roles during embryonic development, organogenesis, and tissue homeostasis [2]. The cytokine TGFβ1 is among the most abundant in bone matrix [3] and is secreted by endothelial cells, epithelial cells, fibroblasts, smooth muscle cells, and most immune cells [4]. TGFβ1 is deposited in bone matrix as an inactive, latent complex with latency-associated protein (LAP), the binding of which masks the receptor domains of active TGFβ1. During bone formation, osteoclast-mediated bone resorption activates TGFβ1 by cleaving LAP and releasing it from bone matrix, thus creating a transient gradient of active TGFβ1 that attracts MSCs to bone remodeling sites, where they undergo osteoblastic differentiation [5]. Furthermore, TGFβ1 is known to regulate the proliferation and differentiation of osteoprogenitor cells [6].
Actin microfilaments are composed of polymers of actin, the most abundant cellular protein which also forms the thinnest part of the cytoskeleton, and are primarily responsible for skeletal structure [7]. Cellular actin exists in two forms, filamentous polymerized actin (F-actin) and globular/monomer depolymerized actin (G-actin), and transitions between these forms during highly dynamic intracellular polymerization and depolymerization processes [8]. In mammals, actin polymerization factors regulate actin polymerization and depolymerization [9]. While the stiffness of actin is lower than that of microtubules, actin molecules form a highly organized structural network, supported by a large number of interacting cross-linking proteins, which together confer a substantial amount of mechanical strength [10]. The cytoskeleton is known to be important for determining cell morphology and for mediating changes in adhesion and differentiation [11]. Indeed, during human MSC (hMSC) lineage commitment, cells undergo significant morphological changes and actin cytoskeletal reorganization which contribute to the determination of cellular fate [7,12].
In this study, we investigated the effect of TGFβ-induced actin cytoskeleton modifications on the potential of hMSCs to differentiate into osteogenic and adipogenic lineages, as well as the effect of the actin polymerization inhibitor cytochalasin D (CYD). Our data suggest that TGFβ-induced actin cytoskeleton reorganization is a prerequisite for hMSC differentiation into the osteogenic or adipogenic lineage.
TGFβ1 Treatment Enhanced the Osteogenic Differentiation of hMSCs. A single treatment with TGFβ1 (10 ng/ml, for 2 days) enhanced hMSC osteogenic differentiation, as shown by the increased mineralized matrix formation made evident by alizarin red S staining (Figures 1(a) and 1(b)). Conversely, when TGFβ1 signaling was blocked with the inhibitor SB-431542 (10 μM), significantly lower mineralized matrix formation was observed (Figures 1(a) and 1(b)). Consistent with this, higher expression of the osteoblastic genes alkaline phosphatase (ALP), runt-related transcription factor 2 (RUNX2), and osteocalcin (OCN) was observed in hMSCs undergoing osteogenic differentiation in the presence of TGFβ1, while treatment with the TGFβ1 inhibitor SB-431542 severely inhibited this expression (Figure 1(c)).
TGFβ1 Treatment Enhanced the Adipogenic Differentiation of hMSCs. Next, we examined the effect of treating hMSCs with a single dose of TGFβ1 (10 ng/ml, for 2 days) on adipogenic differentiation. We found that adipogenic differentiation was enhanced following TGFβ1 treatment, as shown by an increase in the number of lipid-filled adipocytes (Figures 1(d) and 1(e)). Similarly, the expression of several adipogenic gene markers, including lipoprotein lipase (LPL), peroxisome proliferator-activated receptor gamma 2 (PPARG-2), adipocyte protein 2 (aP2), and ADIPOQ, was upregulated following TGFβ1 treatment, while treatment with SB-431542 reversed these effects (Figure 1(f)).
TGFβ1 Stimulation Has No Effect on hMSC Viability or Proliferation. The effect of TGFβ1 on hMSC cell viability was assessed using alamarBlue assay reagent. No significant effect on viability was observed after 4 days of treatment (Figure 2(a)). To investigate the effect of TGFβ1 on cellular proliferation, we used the xCELLigence RTCA DP® cell proliferation assay system, which allows the continuous monitoring of cell numbers over time. As shown in Figure 2(b), there was no measurable difference in hMSC proliferation in the presence or absence of TGFβ1.
Molecular Phenotype of TGFβ1-Treated hMSCs.
To understand the molecular mechanisms underlying the TGFβ1-mediated regulation of hMSC differentiation, we compared global gene expression in TGFβ1-treated hMSCs and vehicle-treated control cells using microarray analysis. In total, 1932 gene transcripts were significantly upregulated, and 1298 were significantly downregulated following TGFβ1 treatment. Significant changes were defined as a fold change ≥ 2 and p < 0.05 and are listed in Supplementary Tables S1 and S2. Hierarchical clustering of differentially expressed genes revealed a clear distinction between TGFβ1-treated and control samples (Figure 3(a)). Next, we performed gene ontology analysis to identify the biological processes that were favored following TGFβ1 treatment. We found that the genes that were significantly altered in TGFβ1-treated MSCs were enriched within several skeletal and extracellular matrix categories, including extracellular matrix (53 genes), extracellular matrix organization (51 genes), and proteinaceous extracellular matrix (Supplementary Table S3). Furthermore, pathway analysis of significantly changed genes revealed the significant enrichment of several signaling pathways in TGFβ1-treated hMSCs. Among these, the most enriched pathways were "regulation of actin cytoskeleton," "MAPK signaling," "focal adhesion," "TGFβ1 signaling," "adipogenesis," "endochondral ossification," and "osteoblast signaling" (Figure 3(b)). Table 1 lists the genes within osteogenesis- and adipogenesis-related signaling pathways that were upregulated in TGFβ1-treated cells. A selected panel of genes known to be involved in cell differentiation and TGFβ signaling that were significantly changed in the microarray data was examined by qRT-PCR. In general, a good degree of concordance was observed between the microarray and qRT-PCR data (Figure 3(c): qRT-PCR validation of selected genes that were upregulated in the microarray data; n = 3, *p < 0.05, ***p < 0.001; cells treated with vehicle (DMSO) were used as controls).
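For readers reproducing this kind of analysis, the significance filter described above (fold change ≥ 2, p < 0.05) reduces to a simple table operation. The snippet below is a generic sketch with made-up values and assumed column names, not the study's actual pipeline or data.

```python
import pandas as pd

# Sketch of the differential-expression filter described in the text.
df = pd.DataFrame({
    "gene": ["RUNX2", "ALPL", "LPL", "GAPDH"],
    "fold_change": [2.8, 2.1, 3.5, 1.1],   # treated vs control (illustrative)
    "p_value": [0.004, 0.03, 0.001, 0.6],
})
up = df[(df.fold_change >= 2) & (df.p_value < 0.05)]
# The down-regulated criterion depends on the fold-change convention; a ratio
# convention would use fold_change <= 0.5 with the same p-value cutoff.
down = df[(df.fold_change <= 0.5) & (df.p_value < 0.05)]
print(f"{len(up)} upregulated, {len(down)} downregulated")
```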
Actin Microfilaments in MSCs Are Altered following Treatment with TGFβ1 or the Actin Polymerization Inhibitor CYD. Our molecular phenotyping analysis of TGFβ1-treated hMSCs revealed a significant enrichment of genes associated with cytoskeletal changes. Based on this, and on our previous observations that TGFβ1 treatment triggers significant morphological changes in hMSCs, we examined the effect of TGFβ1 on the cytoskeleton using transmission electron microscopy (TEM), which has the power to reveal structural changes in actin microfilaments. Actin microfilament polymerization was found to be inhibited in cells treated with either the potent actin polymerization inhibitor CYD or the TGFβ inhibitor SB-431542. In contrast, TGFβ1 treatment was associated with a prominent distribution of actin filaments, organized as bundles/aggregates, in the perinuclear area and at one cell pole (Figure 4). The ultrastructural characteristics of the cells under the various treatment conditions are summarized in Supplementary Table S4.
2.6. CYD Regulates Osteogenic and Adipogenic Differentiation in the Presence of TGFβ1. To confirm that TGFβ1 regulates actin cytoskeletal dynamics, hMSCs undergoing either osteogenic or adipogenic differentiation were treated with TGFβ1 in the absence or presence of the actin polymerization inhibitor CYD. CYD treatment significantly inhibited hMSC osteogenic differentiation in both the presence and absence of TGFβ1, as shown by reduced mineralization (Figure 5(a)). Similarly, expression of the osteogenic marker genes ALPL, RUNX2, and OCN was inhibited by CYD treatment, with and without TGFβ1 (Figure 5(b)). Conversely, CYD treatment enhanced hMSC adipogenic differentiation, as shown by a greatly increased number of lipid-filled mature adipocytes and the increased expression of the adipogenic marker genes LPL and PPARG-2. These effects were maintained when cells were treated concomitantly with TGFβ1 (Figures 5(c) and 5(d)).
Molecular Phenotype of CYD-Treated Cells. The data presented above suggest that CYD and TGFβ1 target similar molecular pathways during hMSC osteogenic and adipogenic differentiation. In order to investigate this further and to elucidate the molecular mechanisms underlying the CYD-mediated effects on hMSC differentiation, microarray analysis was performed to establish global gene expression profiles for CYD-treated and control cells. In total, 10,855 genes were significantly upregulated, and 2523 genes were significantly downregulated following CYD treatment. Genes were defined as significantly changed if they had a fold change ≥ 2 and p < 0.05 and are listed in Supplementary Tables S5 and S6. As was seen with TGFβ1 treatment, hierarchical clustering of the differentially expressed genes revealed a clear distinction between untreated and CYD-treated hMSCs (Figure 6(a)). Pathway analysis of these genes revealed several molecular pathways that were enriched upon CYD treatment (Figure 6(b)). Among the most significant were pathways involved in the regulation of the actin cytoskeleton, focal adhesion signaling, endochondral ossification, TGFβ1 signaling, regulation of the microtubule cytoskeleton, and MAPK signaling (Figure 6(b)). The genes associated with these pathways that were upregulated in CYD-treated cells are listed in Table 2. Forty-two genes involved in adipogenesis-related pathways were significantly enriched in CYD-treated cells (Table 3). Interestingly, 218 genes were both upregulated in TGFβ1-treated hMSCs and downregulated in CYD-treated hMSCs (Figure 6(c)), showing that the molecular signature of CYD treatment is the inverse of that seen with TGFβ1 treatment and suggesting that these genes may be involved in TGFβ-mediated cytoskeletal reorganization (Table 4).
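The overlap reported in Figure 6(c) is, computationally, a set intersection between the two differentially expressed gene lists; the sketch below illustrates the operation using a handful of the genes named later in the Discussion, not the full 218-gene list.

```python
# Sketch: genes upregulated by TGFb1 AND downregulated by CYD
# (the text reports 218 such genes; these short lists are illustrative).
up_tgfb1 = {"FGF1", "FGF2", "KRAS", "TGFB2", "PLAT", "EGR2"}
down_cyd = {"FGF1", "FGF2", "KRAS", "MEF2D", "IRS1"}
shared = up_tgfb1 & down_cyd
print(f"{len(shared)} genes regulated oppositely: {sorted(shared)}")
```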
Discussion
TGFβ is a potent regulator of various biological functions in many cell types, but its effects on hMSC differentiation are, to date, poorly understood. In the present study, we contribute to this understanding and demonstrate that TGFβ can enhance both osteoblastic and adipocytic lineage commitment by modulating changes to the actin cytoskeleton. TGFβ1 is known to regulate the proliferation and differentiation of osteoprogenitor cells [6, 13-15], and it reportedly stimulates bone matrix apposition and bone cell replication [16]. Several studies have demonstrated that TGFβ1 promotes bone formation in vitro by recruiting osteoblast progenitors and inducing bone matrix formation at early stages of differentiation. In addition to this direct regulation of bone formation, TGFβ1, along with BMPs, enhances RUNX2 expression at early differentiation stages [17]. This is consistent with our finding that TGFβ1 promoted osteogenesis and was associated with the upregulation of the osteogenic genes ALPL, RUNX2, and OCN.
Furthermore, we showed that TGFβ1 treatment enhanced the in vitro adipocytic differentiation of hMSCs. This is consistent with several previously reported studies demonstrating that TGFβ1 has a positive effect on adipogenic differentiation under specific culture conditions [18,19]; an early study of rat brown adipocytes showed an upregulation of lipogenic enzymes following TGFβ1 treatment [19].
Our results showed that TGFβ1 treatment did not affect MSC cell growth in vitro. Previously, conflicting results have been published; some studies reported that TGFβ1 regulated osteoprogenitor proliferation in vitro [13,20], whereas Yu et al. reported that TGFβ1 treatment strongly inhibited the proliferation of human lung epithelial cells [21]. The mitogenic effects of TGFβ on cells are reportedly variable; while progressive mitogenesis was stimulated in confluent cells following treatment with 0.15-15 ng/ml TGFβ, in sparse cultures 0.15 ng/ml TGFβ exhibited inhibitory effects. However, at all cell densities, 15 ng/ml TGFβ stimulated collagen synthesis, with this effect being most pronounced when DNA synthesis was declining [22]. Most of the published data on TGFβ has shown a mitogenic effect on osteoprogenitors [16,[23][24][25][26], but relatively few studies have examined the growth inhibitory effect of this cytokine on osteoblast-like cells [27,28]. It is likely that these contradictory observations reflect the fact that the effect TGFβ has on cellular proliferation is dependent upon TGFβ concentration, culture conditions including cell density, the cell model system (tumorigenic versus nontumorigenic), the differentiation stage of the target cell population, and/or the presence of other growth factors.
On the other hand, adipocytic differentiation is associated with the morphological change from fibroblast-like cells to spherical cells filled with fat droplets [33]. These morphological alterations are also associated with cytoskeletal changes and actin reorganization, which take place in the early lineage commitment stage, prior to the upregulation of many adipocyte-specific gene markers [34]. The differentiation of hMSCs into the adipocytic lineage in vitro is known to be influenced by the cytoskeletal tension that results from actin reorganization [32]. Furthermore, TGFβ1-induced Ca2+ signaling is known to regulate osteoblast adhesion through enhanced α5 integrin expression, the formation of focal contacts, and the mediation of cytoskeleton reorganization [35,36]. Additionally, the TGFβ1-mediated stimulation of DNA synthesis in mouse osteoblastic cells is reportedly associated with morphological changes and is accompanied by the enhanced synthesis and polymerization of cytoskeletal proteins [37]. Consistent with this, our data suggest that TGFβ1 enhances hMSC lineage commitment by regulating the morphology of the actin cytoskeleton, focal adhesion, and endochondral ossification, via the TGFβ1 and MAPK signaling pathways.
Also consistent with our results are reports that CYD-mediated reductions in actin polymerization stimulate adipogenesis but inhibit osteogenesis [30], suggesting that cytoskeletal modification is a prerequisite for cell fate determination. Our gene expression profiling revealed that the genes FGF1, FGF2, and KRAS, which commonly regulate actin cytoskeleton reorganization, were upregulated and downregulated in TGFβ1- and CYD-treated cells, respectively, suggesting that they are involved in the actin polymerization-mediated differentiation of MSCs.
We showed that during osteogenesis, TGFβ1 treatment reorganized the cytoskeleton, but this reorganization, and thus osteogenesis, could be disturbed by CYD treatment. Conversely, treatment with either TGFβ1 or CYD promoted adipogenesis. This observation can potentially be explained by considering that TGFβ1 and CYD promote the formation of different cytoskeleton patterns, both of which support adipogenesis. Alternatively, it is possible that cytoskeletal reorganization leading to adipogenesis can be promoted by both TGFβ1-dependent and -independent mechanisms, and that CYD-mediated cytoskeletal reorganization cannot override the TGFβ1-independent mechanism. We propose a model wherein TGFβ1 regulates cytoskeletal organization by modulating actin cytoskeleton-related genes, leading to enhanced hMSC differentiation into both osteoblasts and adipocytes (Figure 7). We propose that CYD enhances adipogenesis and inhibits osteogenesis by regulating the expression of a number of key candidate genes, including FGF2, TGFβ2, Plat, EGR2, MEF2D, and IRS1. These genes were modulated by both TGFβ1 and CYD and are thus heavily implicated in the determination of hMSC fate. In summary, our study provides novel molecular insights into the role of the intracellular TGFβ signaling pathway in bone and bone marrow adipose tissue formation. This signaling involves the reorganization of the actin cytoskeleton in order to control the lineage-specific differentiation of hMSCs.
Cell Culture. An hMSC-TERT cell line was created previously to serve as a model of human primary MSCs by overexpressing human telomerase reverse transcriptase (hTERT) in normal human bone marrow MSCs [38]. This cell line has been extensively characterized and exhibits a cellular and molecular phenotype similar to that of primary MSCs [39]. For the current experiments, we used a previously characterized subline derived from hMSC-TERT cells, termed hMSC-TERT-CL1 [40]. For ease, this cell line is referred to as "hMSC" for the remainder of the manuscript. Cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 4500 mg/l D-glucose, 4 mM L-glutamine, 110 mg/l sodium pyruvate, 10% fetal bovine serum (FBS), 1× penicillin/streptomycin (pen/strep), and nonessential amino acids. All reagents were purchased from Gibco, USA.
OsteoImage Mineralization Assay. The formation of mineralized matrix in vitro was quantified using an OsteoImage mineralization assay kit according to the manufacturer's instructions (Cat. number PA-1503; Lonza, USA). Briefly, culture medium was removed, and cells were washed once with PBS and then fixed with 70% cold ethanol for 20 minutes. Next, diluted staining reagent was added at a level recommended by the manufacturer, and plates were incubated in the dark for 30 minutes at room temperature. The cells were then washed, and staining was quantified using a fluorescence plate reader (Molecular Devices Co., Sunnyvale, CA, USA) with excitation and emission wavelengths of 492 and 520 nm, respectively.
alamarBlue Cell Viability Assay. Briefly, cells were cultured in 96-well plates in 100 μl of the appropriate medium before 10 μl alamarBlue substrate was added at the indicated time points. Plates were then incubated in the dark at 37°C for 1 hour. AlamarBlue fluorescence was then detected using a Synergy II microplate reader (Bio-Tek Inc.) with excitation and emission wavelengths of 530 nm and 590 nm, respectively.
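Plate-reader outputs from assays like these are typically normalized to controls after blank subtraction; the sketch below shows one common convention (percent of vehicle-treated control), with illustrative numbers rather than the study's data.

```python
# Sketch: percent viability from alamarBlue fluorescence, normalized to
# vehicle-treated controls after blank subtraction (a common convention).
f_blank, f_control = 1200.0, 18000.0
f_treated = [17500.0, 17900.0, 18300.0]   # treated replicate wells (illustrative)

for f in f_treated:
    viability = 100.0 * (f - f_blank) / (f_control - f_blank)
    print(f"viability ≈ {viability:.0f}%")
```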
RTCA Cell Proliferation Assay. An xCELLigence RTCA (real-time cell analysis) DP system (ACEA Biosciences Inc., San Diego, CA) was used to measure the rate of cellular proliferation according to the manufacturer's protocol. Briefly, 100 μl DMEM supplemented with 10% FBS was loaded into each well of an E-plate 16 chamber slide, which was then placed inside the humidified incubator of the RTCA DP analyzer for 1 hour at 37°C to allow the membrane surface and medium to equilibrate. After 1 hour, background measurements were performed. Next, 5000 cells/100 μl DMEM + 10% FBS were added per well, and measurements were recorded at 15-minute intervals for various total durations, depending on the experimental setup.
Transmission Electron Microscopy (TEM). For TEM, cells were trypsinized, washed with PBS, pelleted, and then fixed in 2.5% glutaraldehyde (Cat. number 16500; Electron Microscopy Sciences) in 0.1 M phosphate buffer (pH 7.2) at 4°C for 4 hours. Next, the cells were washed in 0.1 M phosphate buffer (pH 7.2) 3 times for 30 minutes each and then treated with 1% osmium tetroxide (OsO4) in 0.1 M phosphate buffer (pH 7.2) for 2 hours. Cells were then dehydrated in increasing concentrations of ethanol (10%, 30%, 50%, 70%, 90%, and 100%) for 15 minutes each, before being resuspended in acetone and incubated for 15 minutes. The resulting cell suspension was then aliquoted into BEEM® embedding capsules and infiltrated first with a 2:1 acetone:resin mixture for 1 hour and second with a 1:2 acetone:resin mixture for 1 hour. Following infiltration, the BEEM capsules were centrifuged at 2500 rpm for 5 minutes and embedded in pure resin for 2 hours. The resin was then polymerized by baking in an oven at 70°C for 12 hours. Semithin sections (0.5 μm thickness) were prepared and stained with 1% toluidine blue. Ultrathin sections (70 nm thickness) were prepared and mounted on copper grids, then stained first with uranyl acetate (saturated ethanol solution) for 30 minutes, rinsed with double distilled water, and then stained with Reynold's lead citrate solution for 5 minutes before a final rinse with distilled water. The contrasted ultrathin sections were examined and photographed under a JEOL 1010 transmission electron microscope (JEOL, Tokyo, Japan).
Statistical Analysis. All results are presented as the mean ± SD of at least 3 independent experiments. Differences between groups were assessed using Student's t-test, and p values < 0.05 were considered statistically significant.
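In code, the stated analysis corresponds to a two-sided independent-samples t-test; the snippet below is a minimal sketch with illustrative replicate values.

```python
from scipy import stats

# Sketch: two-sided Student's t-test between groups, significance at p < 0.05.
control = [1.00, 0.95, 1.05]
treated = [1.80, 2.10, 1.95]
t, p = stats.ttest_ind(control, treated)
print(f"t = {t:.2f}, p = {p:.4f}, significant = {p < 0.05}")
```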
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2018-04-03T00:05:53.290Z | 2018-02-15T00:00:00.000 | {
"year": 2018,
"sha1": "017f55ae7848aff33f6403b84e287a116cf94544",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/sci/2018/6913594.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8434370294146afe0d14c83c9bf4dcb0219af229",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
59945867 | pes2o/s2orc | v3-fos-license | Stabilization of Continuous-Time Random Switching Systems via a Fault-Tolerant Controller
This paper focuses on the stabilization problem of continuous-time random switching systems via a fault-tolerant controller, where the dwell time of each subsystem consists of a fixed part and a random part. It is known from traditional design methods that the computational complexity of the LMIs related to the number of fault combinations is very large, particularly when the system dimension or the number of subsystems is large. In order to reduce the number of fault combinations used, new sufficient LMI conditions for designing such a controller are established by a robust approach; these conditions are fault-free and can be solved directly. Moreover, fault-tolerant stabilization realized by a mode-independent controller is considered and is suitable for practical cases in which mode information is unavailable. Finally, a numerical example is used to demonstrate the effectiveness and superiority of the proposed methods.
On the other hand, any system is subject to faults in practice. It is necessary and meaningful to study the related problems of systems experiencing faults. The fault-tolerant control problem [27-29] is that, when faults of the actuator, sensor, or internal components occur, the closed-loop system should remain stable and retain desirable characteristics. It is known that classical fault-tolerant control methods may be divided into two categories: passive and active fault-tolerant control methods. The passive fault-tolerant control method uses a fixed controller which ensures that the closed-loop system is insensitive to some specific faults. In other words, it can maintain system stability. Therefore, this strategy is similar to the robust control technique [30,31]. On the contrary, the active fault-tolerant control method needs to reconstruct the controller design and reschedule the control law. In other words, according to the desired characteristics, a new control system must be designed after the faults have occurred [32,33]. Compared with the active fault-tolerant control method, the passive fault-tolerant control method does not need real-time fault information or online adjustment of the controller structure. In this sense, the passive fault-tolerant control method is relatively simple to implement. When the underlying system is a Markovian jump system, some results on fault-tolerant control were given in [34-36]. By investigating such references, it is seen that the problems considered and the methods studied in those references and in this paper are quite different. Firstly, the considered system is different from the traditional Markovian jump system, in that the dwell time considered here consists of a fixed part and a random part. Secondly, the fault considered here is described by a binary structured uncertainty rather than simply by a vector as in some existing references. In fact, this structure describes the fault more clearly. Compared with traditional methods of dealing with faults, the method to be presented is less conservative and more widely applicable. Thirdly, because the fault of the controller is described by a binary structured uncertainty, the number of fault combinations will be very large, with large computational complexity. Moreover, because fixed and random dwell times are contained simultaneously, how to express the existence conditions for such a fault-tolerant controller within the LMI framework, and in a concise form, must also be studied. It is said that the abovementioned problems are not only important in theory but also have practical applications. For example, from [37], it was shown that the helicopter system can be modeled as a Markovian jump system, whose dynamic characteristics are clearly described by a Markov chain with three different states according to airspeeds of 135 (nominal value), 60, and 170 knots. Moreover, the helicopter system is also subject to various faults in practice, due to internal component faults or changes in the external environment. In order to guarantee that it still works when these faults occur, a better and necessary scheme is to design an effective fault-tolerant controller. On the other hand, though the switching of the helicopter among such three modes satisfies a Markov process, it is more reasonable that each subsystem is likely to hold for a period of time. In other words, there will be a fixed and random dwell time in each
subsystem. Finally, it is also important that the desired control method have low computational complexity and be easily implemented. Based on these explanations, it is meaningful to design a fault-tolerant controller for Markovian jump systems with a forced dwell time, and doing so also has practical significance. To the best of our knowledge, very few results are available on designing fault-tolerant controllers for random switching systems. All these facts motivate the current research.
In this paper, the stabilization problem of continuous-time random switching systems closed by a fault-tolerant controller will be studied, whose conditions are presented in terms of LMIs and without any fault. The main contributions of this paper are summarized as follows: (1) a kind of fault-tolerant controller is proposed to stabilize a continuous-time random switching system which contains fixed and random dwell times simultaneously, whose conditions are obtained by exploiting a robust method; (2) the sufficient conditions for the desired controller are presented in LMI form and are fault-free, so they can be solved directly; (3) because the results involve no faults, the computational complexity is smaller than that of results obtained by traditional methods; (4) because the given conditions are LMIs, the existence conditions for a fault-tolerant controller without any mode information are obtained easily.
Notation. R^n denotes the n-dimensional Euclidean space; R^{n×m} is the set of all n × m real matrices. ‖·‖ refers to the Euclidean vector norm or spectral matrix norm. Ω is the sample space, F is the σ-algebra of subsets of the sample space, and P is the probability measure on F. In symmetric block matrices, we use "*" as an ellipsis for the terms induced by symmetry, diag{···} for a block-diagonal matrix, and (M)^⋆ ≜ M + M^T.
Problem Formulation
Consider a class of switched linear systems defined on a complete probability space (Ω, F, P) and described as
ẋ(t) = A(σ(t))x(t) + B(σ(t))u(t), (1)
where x(t) ∈ R^n is the system state vector, u(t) ∈ R^m indicates the control input vector, and σ(t) ∈ S ≜ {1, 2, ..., N} represents the switching signal and determines the current system operation mode. For any σ(t) = i ∈ S, A(σ(t)) = A_i and B(σ(t)) = B_i are known matrices of compatible dimensions. Time instant t_k represents the switching of the system from the current operation mode to another operation mode. The parameter T_i > 0 represents a fixed dwell time of system (1) in mode i. If the system stays in the interval [t_k, t_k + T_i), where there is no switching, it surely follows that
Pr{σ(t + h) = i | σ(t) = i} = 1, t, t + h ∈ [t_k, t_k + T_i),
where h represents a very short amount of time and o(h) satisfies lim_{h→0+} o(h)/h = 0. For the time interval [t_k, t_k + T_i), if t ≥ t_k + T_i, the mode switching follows the mode transition probabilities with TRM Π ≜ (π_ij) ∈ R^{N×N} given by
Pr{σ(t + h) = j | σ(t) = i} = π_ij h + o(h) if j ≠ i, and 1 + π_ii h + o(h) if j = i,
where h > 0, π_ij ≥ 0 if i ≠ j, and π_ii = −∑_{j≠i} π_ij. In this paper, the designed state feedback controller may have faults; it is referred to as a fault-tolerant controller (FTC) and described by
u(t) = Δ(t)K(σ(t))x(t), (4)
where K(σ(t)) is the control gain to be determined. The parameter Δ(t) is a diagonal matrix used to describe whether controller faults happen or not. Its form is defined as
Δ(t) = diag{δ_1(t), δ_2(t), ..., δ_m(t)} ∈ Λ, δ_k(t) ∈ {0, 1}.
In particular, we clearly find that Δ(t) = I if there are no faults. It is seen that there are 2^m possible combinations representing the controller faults. Equivalently, Λ has 2^m elements.
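To make the dwell-time structure concrete, the sketch below simulates a switching signal of this type for a two-mode example; the rate matrix Π and the fixed dwell times T_i are assumed values, and after the fixed dwell the holding time is drawn from the exponential distribution implied by the TRM.

```python
import numpy as np

# Sketch (assumed parameters) of the dwell-time structure of system (1): after
# switching into mode i, the system must stay for the fixed time T_i; past
# t_k + T_i it leaves after an exponential holding time implied by the TRM Pi,
# jumping to mode j with probability pi_ij / (-pi_ii).
rng = np.random.default_rng(0)
Pi = np.array([[-2.0, 2.0],
               [1.0, -1.0]])      # example transition rate matrix
T = [0.5, 0.3]                    # fixed dwell times T_i (illustrative)

def simulate(t_end=5.0, mode=0):
    t, schedule = 0.0, []
    while t < t_end:
        schedule.append((round(t, 3), mode))
        t += T[mode] + rng.exponential(1.0 / -Pi[mode, mode])  # fixed + random
        others = [j for j in range(len(T)) if j != mode]
        rates = np.delete(Pi[mode], mode)
        mode = int(rng.choice(others, p=rates / rates.sum()))
    return schedule

print(simulate())   # list of (switch time, mode) pairs
```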
Remark 1. It is worth mentioning that the fault of controller (4) is described by a binary structured uncertainty. Compared with some existing references [27,31,32,34-36], where the fault is modeled as a vector, this formulation gives a better description and has a wider scope of application. However, it is also seen that 2^m possible combinations are included to represent the controller faults. This will make the computational complexity very large, in particular when the underlying system is a switching system with N operation modes. Thus, how to reduce the complexity and make the obtained results concise and easily solvable are necessary and meaningful problems.
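The size of the fault set Λ is easy to enumerate; for example, with m = 3 control channels the 2^m = 8 diagonal patterns can be listed as below.

```python
from itertools import product

# Enumerating the fault set Lambda of Remark 1: with m control channels,
# Delta = diag(delta_1, ..., delta_m), delta_k in {0, 1}, gives 2**m cases.
m = 3
Lambda = list(product((0, 1), repeat=m))
print(len(Lambda), "fault combinations, e.g.", Lambda[:4])
```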
Main Results
Theorem 4. For the given system (1), there exists an FTC (4) such that the resulting closed-loop system (6) is stochastically stable if, for a given scalar greater than 1 and a given positive scalar, there exist matrices (one of them positive definite) such that conditions (10)-(13) hold. The gain of controller (4) is then computed from these matrices.

Proof. Replacing the nominal gain with its faulty counterpart in (7), it is clear from Lemma 2 that the resulting closed-loop system is stochastically stable if the corresponding condition holds. Based on Lemma 3, conditions (10), (11), and (12) imply the respective conditions that follow. Because the fault matrix Δ is diagonal and structured, the corresponding cross terms vanish. Based on (20) and (22), one obtains the required inequality. From condition (13), an implication follows; by the representation in (15), (25) is equivalent to the stated form. Then condition (19) is guaranteed by an inequality that is in turn implied by condition (12). This completes the proof.
Remark 5. As for the stability problem, [4] first presented a necessary and sufficient condition. However, when system synthesis problems such as stabilization are considered, few results are available. The main reason is that some nonlinear terms are inevitably encountered, which cannot be handled easily and directly. In the proof of Theorem 4, the nonlinear terms encountered in condition (7) have been handled suitably. Although a large number of fault combinations, as well as fixed and random dwell times, are involved, sufficient existence conditions for the fault-tolerant controller (4) are given in terms of LMIs, which are more general than those in [38]. Moreover, the established conditions are fault-free. In other words, instead of involving 2^m combinations, only the two special cases of no fault and total fault are taken into account, which are fewer than those in [39-41]. Based on these facts, the conditions given in this theorem have small computational complexity and can be solved directly and easily.
From Theorem 4, it is seen that the desired controller (4) is mode-dependent and requires its operation mode to be available online. It is well known that this assumption is restrictive in some practical applications. In order to deal with the general case, a mode-independent controller is usually designed, described by (33), where the common control gain is to be determined.

Theorem 6. For the given system (1), there exists a mode-independent FTC (33) such that the resulting system is stochastically stable if, for a given scalar greater than 1 and a given positive scalar, there exist matrices (one of them positive definite) satisfying (13) and conditions (34)-(36), where the other variables are given in Theorem 4. The gain of controller (33) is then computed from these matrices.

Proof. Similar to the proof of Theorem 4, it is concluded that conditions (34)-(36) imply the respective conditions that follow. Without loss of generality, we only consider (43) in detail. By using Lemma 3 and the Schur complement lemma, it follows that the corresponding condition is guaranteed by condition (43). Moreover, it is concluded from (36) that the relevant matrix is nonsingular. Based on representation (38), the claimed inequality is implied by pre- and postmultiplying both sides of (45) with the corresponding matrix and its transpose, respectively. The next step is similar to the proof of (32). On the other hand, as for conditions (39) and (41), they imply conditions (20) and (22) by pre- and postmultiplying both sides with the corresponding fault-dependent matrices, respectively. The remaining steps are similar to those in Theorem 4. This completes the proof.
Remark 7. In order to deal with the mode-independent control problem, a simple and direct way is to select a common Lyapunov function for all modes. Although the gain of the controller or filter can then be obtained without any mode information, the choice of a mode-independent Lyapunov function usually brings larger conservatism, which may make the design of a mode-independent controller fail. In this case, a better way is to satisfy the requirements of a mode-independent controller and a mode-dependent Lyapunov function simultaneously. From Theorem 6, it is seen that these requirements can be satisfied by combining a mode-dependent Lyapunov matrix with a common slack matrix. In other words, the conservatism of the obtained results can be reduced, while the goal of mode-independent control is still achieved.
Numerical Examples
Example 1. Consider a continuous-time random switching system of form (1) with σ(t) ∈ S = {1, 2}, whose parameters are described as follows, together with the given transition rate matrix. Without loss of generality, the fixed dwell times of the two subsystems are assumed to be T_1 = 0.1 and T_2 = 0.2, respectively. For this example, one can design a stabilizing controller by solving a set of LMIs. In this section, we compare two types of controller: the standard mode-dependent controller and the mode-dependent fault-tolerant controller (denoted by the subscript "FTC"). In particular, the existence conditions of these two controllers are both LMIs, described in conditions (20), (22), (23), and (25). The gains of the standard mode-dependent controller are similar to those in [38]. After applying the above controllers, respectively, the stability of the resulting closed-loop systems is summarized in Table 1 (stability analysis of the system closed by the two types of controllers, over the four fault combinations of Δ, numbered 1-4). Four types of fault combinations are contained in Δ; the table entries indicate whether the resulting closed-loop system is stable or unstable. Since the standard mode-dependent controller is designed without considering faults, the resulting closed-loop system is unstable for two of the fault combinations. On the contrary, the system closed by the designed mode-dependent fault-tolerant controller (FTC) is always stable. Moreover, although the existence conditions for the desired fault-tolerant controller are within the LMI framework, only the two special cases of complete fault and no fault are taken into account. With not all four fault combinations involved, as in [39-41], the computational complexity can be reduced, particularly when the system dimension or the number of subsystems is large. Even for a simple case with system dimension n = 2 and number of subsystems N = 2, there are 4 fault combinations, for which the number of LMIs would be 8. In other words, the number of fault combinations and the system dimension, in addition to the number of operation modes, have a very large influence on the computational complexity. Under the initial condition x_0 = [1 −1]^T, the state response of the resulting closed-loop system with the standard mode-dependent controller is given in Figure 1(b), while Figure 1(a) shows the simulation of the operation mode. From this simulation, it is seen that the controller fault has a negative effect on the system and can make it unstable. On the contrary, after applying the above mode-dependent fault-tolerant controller, one obtains the state response given in Figure 2. It is obvious that the resulting closed-loop system is stable even though there are faults in the desired controller. Moreover, when the system mode is unavailable online, the mode-independent fault-tolerant controller of Theorem 6 is applied. It is seen that the obtained controller is mode-independent, while the corresponding Lyapunov function is mode-dependent. Because of this, without selecting a common Lyapunov function, the solvable set of the mode-independent controller is larger. Thus, the results are less conservative than those obtained by mode-independent Lyapunov functions. Under the same fault combinations and the same initial condition, the state response of the resulting closed-loop system, given in Figure 3, is stable too. Based on these simulations, it is concluded that our methods based on a fault-tolerant controller are superior to those that do not consider faults.
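The example's system matrices and controller gains did not survive extraction, so the following Python sketch only illustrates the simulation setup described above: a two-mode random switching system with fixed dwell times T_1 = 0.1 and T_2 = 0.2, driven by a (possibly faulty) state-feedback law. All matrices, gains, and transition rates below are made-up placeholders, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder two-mode system (the paper's matrices were not recoverable).
A = [np.array([[0.0, 1.0], [-2.0, -0.5]]),
     np.array([[0.5, 1.0], [-1.0, 0.2]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.8]])]
K = [np.array([[-3.0, -2.0]]), np.array([[-4.0, -2.5]])]  # hypothetical gains

Pi = np.array([[-1.0, 1.0],   # placeholder transition rate matrix
               [2.0, -2.0]])
T_fixed = [0.1, 0.2]          # fixed dwell times, as in the example

dt, t_end = 1e-3, 10.0
x = np.array([1.0, -1.0])     # initial condition x_0 = [1 -1]^T from the example
mode, t_in_mode = 0, 0.0
# Actuator fault matrix: 1.0 = healthy, 0.0 = total fault.  With 0.0 the loop
# is effectively open, mirroring the unstable response of Figure 1(b).
delta = np.diag([0.0])

for _ in range(int(t_end / dt)):
    # Switching is only allowed after the fixed dwell time has elapsed.
    if t_in_mode >= T_fixed[mode]:
        p_leave = -Pi[mode, mode] * dt  # prob. of leaving the current mode in dt
        if rng.random() < p_leave:
            mode, t_in_mode = 1 - mode, 0.0
    u = delta @ (K[mode] @ x)           # (possibly faulty) state feedback
    x = x + dt * (A[mode] @ x + B[mode] @ u)  # forward-Euler state update
    t_in_mode += dt

print("final state:", x)
```

Setting `delta = np.diag([1.0])` restores the healthy feedback loop and gives a decaying state trajectory, which is the qualitative behaviour the fault-tolerant design is meant to guarantee for every fault combination.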
Conclusions
In this paper, the stabilization problem of continuous-time random switching systems has been solved by a fault-tolerant controller, where both fixed and random dwell times are included. Based on a robust method, sufficient conditions for both mode-dependent and mode-independent controllers are established in terms of LMIs, which are also fault-free. Because none of the results involve fault information, they have smaller computational complexity. Compared with those obtained by traditional approaches, the given conditions comprise fewer LMIs and can be solved easily and directly. Finally, an example has been used to demonstrate the effectiveness and superiority of the proposed methods.
Figure 1: Simulation of system closed by mode-dependent controller.
Figure 2: State response of system closed by mode-dependent FTC.
Figure 3: The curves of system closed by mode-independent FTC. | 2018-12-29T12:32:14.985Z | 2017-03-20T00:00:00.000 | {
"year": 2017,
"sha1": "3388ae8e1e3384b5c8c6470bc8cbeb82e64d06cc",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2017/4840859.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3388ae8e1e3384b5c8c6470bc8cbeb82e64d06cc",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
234026894 | pes2o/s2orc | v3-fos-license | Obstacles Faced by College Students in Solving Probability Word Problems
There are many difficulties that can be identified when students solve mathematics problems, especially probability word problems. This study was conducted to identify the major obstacles faced by matriculation college students while solving probability-of-an-event word problems. Seven college students were the sample for this case study. Clinical interviews were used for data collection. This technique was selected based on the researcher's observation of the participants as they answered the probability word problem task, which was given during the interview session. Semi-structured interviews were used to obtain in-depth information. Think-aloud analysis involved observing participants' verbal and nonverbal behaviour, together with the researcher's field notes. Participants were found to have difficulty interpreting probabilities. Three categories of difficulty were identified, namely not knowing the meaning of words, not knowing the nature of probability, and being unable to identify the goal of the probability word problem.
INTRODUCTION
Problem solving is a very important learning process in the mathematics curriculum. Based on the Malaysian Education Development Plan, problem solving is one of the 21st-century skills that form a focus of mathematics learning (Kementerian Pendidikan Malaysia, 2013). Therefore, problem-solving skills must be taught to students from a young age, as problem solving is closely related to word problems.
In mathematics, probability is among the disciplines of knowledge that dominate daily situations. This is because probability is a very important tool for predicting the outcome of future events. The probability of an event underlies daily life activities such as controlling the flow of traffic through a highway system, predicting the number of people of all ages involved in accidents, and estimating the spread of rumours (Batanero, Chernoff, Engel, Lee, & Sánchez, 2016). Probability is not about predicting whether a particular event will occur but about determining how that probability is distributed over possible events (Baltaci & Evran, 2016; Galavotti, 2015).
Learning the concept of probability and solving probability word problems present a challenge to students. This is because students need to master the concept of probability, the problem-solving process, and the understanding of the problem simultaneously when solving a probability word problem (Beitzel & Staley, 2015; Galavotti, 2015; Usry, Rosli, & Maat, 2016). Previous studies have largely focused on problem solving for particular probability topics, such as conditional probability, whether manually or using software (Beitzel & Staley, 2015; Gabriel, 2002; Gugga & Corter, 2014; Inzunza, 2006; Xing, 2016), joint-event probability (Beitzel, Staley, & DuBois, 2011; Zahner & Corter, 2010), and Bayes networks (Ong & Lim, 2014). The probability of an event seems simple when it involves the sample space, the probability of an event, and conditional probability. However, there is scarce empirical research on probabilities of events (Corter & Zahner, 2007).
Discussions about the difficulties or obstacles faced by college students while solving probability problems are also limited, as most studies have focused on students' skills and attitudes while solving problems (Zakaria & Yusoff, 2009; Yusof & Taib, 2006; Yusoff & Salleh, 2006). The still-unsatisfactory performance of college students raises a question, as these students learned the basics of probability at the secondary level (Danisman & Tanisli, 2017). There must be a reason why these college students still have problems coping with probability-of-an-event problems.
Thus, this study was conducted to identify the solution strategies and obstacles encountered by students while solving probability problems. It focuses on the difficulties students experience in reaching the correct solution to probability-of-an-event word problems. The presentation of this study answers only the question of what obstacles matriculation college students face while solving probability word problems.
METHODS
This study employed a case study design using the clinical interview technique. This technique, developed within radical constructivism, is direct observation in the context of one-to-one interaction, observing the behaviour of participants as they solve problems (von Glasersfeld, 2002). Seven participants were selected from a matriculation college in Peninsular Malaysia. A maximum-variation sampling technique was used to obtain data from a non-homogeneous study sample. This sampling plan best supports data collection and reaching data saturation. The instruments involved in the study include a semi-structured interview protocol, an observation protocol, and a probability word problem task. The task comprised three probability word problems, focusing on the subtopics Probability of Independent Events and Probability of an Event.
Clinical interview sessions were conducted after lectures, at the participants' leisure. Interview sessions were recorded so that each participant's behaviour, such as eyes moving repeatedly to the right and left, hands knocking on the table, sweating, and so on, could be observed and recorded for reference. All audio and visual data were transcribed verbatim. Data were coded and assigned to appropriate categories after the data-reduction process. The observational data from field-note entries were also coded. Constant comparative analysis techniques were used to identify emerging themes to answer the research question. Students' written answers were also analysed to confirm the difficulties they faced with the problems.
RESULTS AND DISCUSSION
This section presents the findings obtained from the clinical interviews. Based on the constant comparative analysis, one of the themes that emerged from this study is students' difficulty in interpreting probabilities. Three categories of obstacles that students face when solving probability word problems were identified, namely not knowing the meaning of words, not knowing the nature of probability, and being unable to identify the goal.
The first obstacle identified was that participants did not know the meaning of words. In this study, the participants did not know the meaning of some words, such as "subsequent", "given" and "perfect square number". Participants knew there was an underlying meaning to the words "subsequent" and "given", but they could not identify and understand it in detail, although they tried to read the translations of the questions (Figure 1).
Figure 1. Question of Probability
Participant A mentioned, "Perfect square number is a number that can be square root. The number should be an even number, right. No decimals". Participant B drew and labelled the tree diagram incorrectly, as she did not understand the meaning of "given", whereby the labels of the subsequent branches depend on the label of the first branch of the tree.
When participants were unable to comprehend the implicit meaning of a word, they tended to ignore the information it conveyed. When the terms "subsequent" and "given" in the given word problem confused the participants, they misinterpreted the meaning as referring to future events instead of the next event that occurs.
The interview findings indicate that the participants understood the sentence "probability of event A occurring if event B also occurs" by relating the statement to the conditional probability formula. However, this study found that participants could not represent the statement "probability of A, given B, is 0.1" as the mathematical sentence "P(A│B) = 0.1". From the interviews, the researchers recognized that the participants did not know the shorthand phrasing ("probability A, given B is 0.1") used in the question.
The second obstacle is not knowing the nature of probability. The study also found that participants did not understand the concepts involving the laws of probability. The properties of probability involve the sample space as well as set notation. Participants listed the outcomes without set notation for questions involving listing a sample space. They were able to list all the desired numbers as a calculation path. They also knew how to find the probability of a desired event but did not record or represent the probability term with the symbol "P". Participants were also careless when carrying out the final step of the solution process: they failed to explain the final value obtained, leaving the calculation result without any statement (Figure 2 and Figure 3). The task questions were as follows:

A school has an enrolment of 1000 students. The students buy newspapers daily at the school cooperative store. Sales records show that 200 copies of The Star newspaper and 120 copies of The NST newspaper were sold on a particular day. It is known that 30 students bought both The Star and The NST newspapers then. Find the probability that a student did not buy either of the two newspapers.

Two fair dice are rolled at the same time. Find the probability of getting one even and one odd number.

Fatah Amin played "Wheel of Fortune" in a Math Fun Fair. The wheel is divided into 40 equal sectors, numbered 1 to 40. The wheel is spun and allowed to come to rest so that a pointer points within a numbered sector. Find the probability that he gets a number which is either a perfect square or has digits that sum to 7.

You are leaving for holiday! Since it's March, the weather can still be unpredictable in Langkawi. There is a 30% chance that it will rain on the day you are scheduled to depart. So you call KLIA airport and ask them for some information. They tell you that the probability that a flight will take off on time GIVEN that it rains is 0.10. They also tell you that the probability that a flight will take off on time GIVEN that it doesn't rain is 0.80. What is the probability that it rains, given that the flight takes off on time?
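As a worked illustration of the final task above (and of the shorthand P(A|B) that participants struggled with), applying the standard Bayes rule with R denoting rain and T denoting an on-time take-off gives:

```latex
P(R \mid T)
  = \frac{P(T \mid R)\,P(R)}{P(T \mid R)\,P(R) + P(T \mid \neg R)\,P(\neg R)}
  = \frac{0.10 \times 0.30}{0.10 \times 0.30 + 0.80 \times 0.70}
  = \frac{0.03}{0.59} \approx 0.051.
```

Identifying P(R) = 0.30, P(T|R) = 0.10, and P(T|¬R) = 0.80 from the problem text is precisely the goal-identification step that the participants found difficult.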
You are leaving for holiday! Since it's March, the weather can still be unpredictable in Langkawi. There is a 30% chance that it will rain on the day you are scheduled to depart. So you call KLIA airport and ask them for some information. They tell you that the probability that a flight will take off on time GIVEN that it rains is 0.10. They also tell you that the probability that a flight will take off on time GIVEN that it doesn't rain is 0.80. What is the probability that is rain, given that the flight takes off on time? Previous studies posit that the examiner was unable to identify the student's concept of probability if a solution is given without any statement (Bobek & Corter, 2010;Gugga & Corter, 2014;Xing, 2016). Charles, Lester, and O'Daffer (2005) also suggest that systematic mathematical problem solving and having certain procedures give a good impression on a student, and also teachers are able to evaluate the solution process smoothly. Thus, students need to be proficient with other numerical properties such as the use of probability symbols, set notations in the list of outcomes and probability values between one and zero (Batanero et al., 2016).
The third obstacle participants faced was being unable to identify the goal of the problem. The participants were found to have difficulty interpreting probability events. In the interviews, some participants were confused by the problem text. Even though students read the question repeatedly, they still failed to identify the goal of the problem. From the observations, participants who had difficulty interpreting probability events read the question slowly and repeatedly, even in front of their teacher. When students are unable to interpret probability events, they assume the related questions are too complicated to solve because they are unable to obtain information from the problem. This situation hinders students from proceeding to the next stage of problem solving. Therefore, they could not solve the questions even when they attempted solutions.
Similarly, Arum, Kusmayadi and Pramudya (2018) discussed the difficulty of understanding probability problems. Their study found that students who could not identify the goal of a problem showed that they did not understand the problem.
Based on the observations and the researcher's interpretation, participants assumed that probability terms are the same as general mathematical terms and vice versa. Inzunza (2006) stated that the difficulty students face in interpreting and using correct probability terms disrupts the problem-solving process.
CONCLUSION
Many studies in the field of mathematical problem solving have focused on students' skills while solving problems. Thus, this study is expected to address the lack of empirical studies identifying the weaknesses or difficulties students face while solving problems. Such studies provide information on the difficulties students face in learning and teaching probability, and contribute ideas to instructors for developing their pedagogical techniques. Instructors can curate methods and approaches to address students' difficulties in the mathematical problem-solving process before, during, or even after learning and teaching sessions. These findings can help solve some of the problems faced by college students. Students can use procedural methods in their solution strategies without neglecting the probability statement. They must master the nature of probability through more practice; from such practice, they will learn to relate word problems to mathematical formulas and vice versa.
Moreover, understanding probability word problems, using correct terms, and choosing the right solution strategies can help students perform well.
"year": 2021,
"sha1": "346941134622d107a12b2b0e0899503c99e65cf3",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.22342/jpm.15.1.12801.83-90",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bfc54008cc166e7bbe67dc7f8a4ac7098a71b8c4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
234041042 | pes2o/s2orc | v3-fos-license | New Challenges for Sustainable Organizations in Light of Agenda 2030 for Sustainability
Sustainability is one of humanity’s most daunting issues at present [...]
Introduction to the Special Issue
Sustainability is one of humanity's most daunting issues at present. Increasing population, escalation of anthropogenic activities, industrialization, modern agricultural practices that pollute water, air, and soil around the world, and ever-increasing greenhouse gas emissions mean that sustainability is now in doubt [1].
In response to these critical concerns, the world has come up with several initiatives including Agenda 2030. Agenda 2030 is a commitment to eradicate poverty and achieve sustainable development worldwide, ensuring that no one is left behind by 2030. Its adoption was a landmark achievement, providing a shared vision towards sustainable development for all. Its 17 Sustainable Development Goals (SDGs) and 169 targets aim to end the plethora of development problems and deliver a better universe [2].
The SDGs are principally linked to country level implementation, but they are the most inclusive and all-embracing participatory global development policy [3][4][5][6][7]. To actualize Agenda 2030, it will be necessary to involve governments and parliaments, the United Nations system and other international institutions, local authorities, indigenous peoples, civil society, business and the private sector, the scientific and academic community, and all the peoples [2]. Agenda 2030 is a partnering-centered global development agenda aimed at developed and developing countries and cannot be achieved without the contributions of the private sector and other participating constituencies [7].
Organizations have important roles to play in SDGs delivery [5][6][7][8][9]. Relatedly, corporate sustainability has become a critical area of debate in academia and practice [10,11]. Recent research shows that organizations are responding by paying more attention to sustainability issues, including accountability, and by embedding environmental plans into their corporate strategy. However, properly integrating corporate strategy with the SDGs and improving corporate sustainability practice entails unlocking new knowledge of corporate environmental sustainability know-how.
The purpose of the Special Issue entitled "New Challenges for Sustainable Organizations in Light of Agenda 2030 for Sustainability" is to explore new findings and approaches associated with sustainable culture in light of Agenda 2030 for sustainability, thus extending and developing previous academic and managerial knowledge. It encourages submissions investigating, but not limited to, the development and application of innovative and sustainable territorial and organizational models both in profit and in nonprofit organizations. It also welcomes articles that address ethical, legal, technical, territorial, and organizational aspects to support sustainability both inside and outside the organization. Finally, it favors studies that are novel and applicable, capture best practices, and reflect the state-of-the-art.
Form and Contents of the Thematic Issue
The content of this Special Issue focuses on recent advances associated with sustainable culture in light of Agenda 2030 for sustainability which can be explored at three levels of analysis: (1) European Union (EU) countries/regions; (2) National municipalities; (3) Organizations (for-profit and non-profit; private and public).
At the first level of analysis, Postiglione et al. [12] discuss the regional equality within countries and across the EU regions by studying σ-convergence and the conditional β-convergence process. The authors highlight the impact of the interdependencies that occur at the regional level between EU regions and point out that the speed of convergence is slowing down in the European Union. This is especially true following the 2008 economic crisis and the entrance of Eastern regions. Hence, they propose paying additional attention to regions in Eastern Europe in order to ensure cohesion and reduce regional disparities.
At the second level of analysis, Mastronardi and Romagnoli [13] investigate a specific type of Italian municipality, i.e., municipalities distant from the main service supply hubs and thus defined as "inner areas". The authors discuss the role of a new entrepreneurial mode, called community-based cooperatives (CBCs), in contributing to sustainable development in those areas. According to the authors, this objective is achieved in a variety of ways: the strengthening of community wellbeing (social sustainability); the enhancement of endogenous potential (economic sustainability); the recovery of degraded or abandoned natural resources (environmental sustainability); and the creation of fruitful partnerships. Still at the municipal level, Mastronardi and Cavallo [14] open a discussion about economic inequalities. More specifically, the authors point out that inner areas show lower inequality levels with respect to densely populated urban centers. In those areas, the agricultural sector plays a fundamental role. Hence, Basile and Cavallo [15] focus on the nexus between rural identity and the perceptive components of authenticity in order to understand how the territory's changes and fruition positively influence sustainable development.
At the organization level, four contributions investigate four different sectors. Bonacci et al. [16] focus on the healthcare sector by showing that a good organizational climate becomes a necessary (though not sufficient) condition to create an expert, structured, and balanced workforce (organizational innovation), capable of achieving great performances (working excellence) aligned with the organization's interests and objectives. Poponi et al. [17] deal with spin-offs, typically defined as "Science Based" companies, and their role in the transition from the classic model of linear economics to the innovative model of circular economics. Cappa et al. [18] deal with cultural heritage organizations. The authors propose a visitor-sensing framework where visitors can contribute to generate new scientific knowledge concerning their behavior and preferences, by which museum managers can re-design the cultural offerings of their institutions in ways that generate major economic and social impacts. Finally, Myeong and Jung [19] discuss the potential benefit of blockchain technology in the field of public administration. By enhancing the level of security and transparency, blockchain technology could help to provide future sustainable administrative services (e.g., e-voting systems, individually oriented social welfare services, more transparent recruitment and procurement processes, etc.). | 2021-05-10T00:02:57.735Z | 2021-02-05T00:00:00.000 | {
"year": 2021,
"sha1": "d647b1293b8c8972c298cb02b82e93d7c4750dfc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/4/1717/pdf?version=1612524615",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "ae7d18f3e1a37266a83d9f8e415df73dbc6b9e22",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
4369110 | pes2o/s2orc | v3-fos-license | Protective effect of Atriplex suberecta extract against oxidative and apoptotic hepatotoxicity
Atriplex suberecta I. Verd is a known phytomedicinal species of Atriplex; however, studies into its bioactivity remain inconclusive. The in vitro and in vivo antioxidative and hepatoprotective potential of A. suberecta ethanol extract (ASEE) was assessed in the present study. 1,1-Diphenyl-2-picrylhydrazyl radical scavenging and β-carotene bleaching assays revealed that ASEE possesses free radical scavenging and anti-lipid peroxidative activities. These results were supported by the in vitro protection of HepG2 hepatoblastoma cells via abating 2,7-dichlorofluorescein-activated oxidative and apoptotic molecules (caspase-3/-7). In carbon tetrachloride-treated rats, the oral administration of ASEE significantly normalized serum biomarkers of liver function (serum glutamate oxaloacetate transaminase, serum glutamate pyruvate transaminase, alkaline phosphatase, γ-glutamyl transferase and bilirubin) and the lipid profile (total cholesterol, high-density lipoprotein, low-density lipoprotein, triglycerides and malondialdehyde), including tissue non-protein sulfhydryl and total protein levels. These results were also supported by liver histopathology, which demonstrated that the therapeutic effect of ASEE was comparable to silymarin. Furthermore, phytochemical analysis of ASEE revealed the presence of flavonoids, alkaloids, tannins and saponins. Rutin, an antioxidant flavonoid, was identified using a validated high-performance thin-layer chromatography method. In conclusion, this is the first report on the therapeutic potential of A. suberecta against chemical-induced oxidative stress and liver damage.
Introduction
The liver is a vital organ that serves a role in the metabolism of endogenous and exogenous substances where physiological imbalances cause cellular oxidative stress and the formation of toxic free radicals (1). Increased accumulation of intracellular reactive oxygen and nitrogen species together with decreased antioxidant defense results in hepatotoxicity that may progress to liver dysfunction, carcinoma and failure (1). Therefore, developing preventive therapeutic strategies against hepatic oxidative stress and toxicity remains an important issue. Plants contain a number of bioactive secondary metabolites, including flavonoids, polyphenols, alkaloids, saponins and terpenoids, which possess radical scavenging and hepatoprotective activities (2)(3)(4)(5)(6).
The genus Atriplex (subfamily Chenopodiaceae), commonly known as lagoon or sprawling saltbush, is widely distributed in arid and semi-arid regions, including the Middle East (7). Globally, ~400 species of Atriplex herbs and shrubs have been recognized (7,8). Of these, the protein-rich shoots of A. halimus are an important fodder for sheep, goats and camels (9). In addition, the protein-rich leaves of A. lampa have been proposed as a potential dietary supplement for animals and humans (10). In traditional medicine, A. halimus decoction has been used to treat syphilis (11) and its leaves have been used to treat heart disease, diabetes and rheumatism in the Arabian Peninsula (12). In addition, methanol and hexane extracts of the aerial parts of A. halimus have been demonstrated to have antimicrobial activity (13). A previous study, in which phytochemical analysis was performed on the aerial parts of A. halimus, revealed the presence of myricetin, quercetin, isorhamnetin glycosides, phenolic acids and esters (14). Recently, triterpenoids isolated from A. laciniata demonstrated antibacterial, antioxidant and antiurease activities (15), including anticholinesterase effects against Alzheimer's and other neurological disorders (16). The fungicidal effects of A. semibaccata, A. portulacoides and A. inflata have been previously reported (17), and the molluscicidal and larvicidal activities of A. inflata have also been identified (18). Furthermore, Gođevac et al (19) revealed that flavonoid glycosides isolated from the aerial parts of A. littoralis exhibited protection against in vitro biochemical and cytogenetic damage to human lymphocytes (19).
In Saudi Arabia, of the 10 reported species of saltbush, A. coriacea, A. dimorphostegia, A. farinosa, A. glauca, A. halimus, A. leucoclada and A. tatarica are native, whereas A. canescens, A. semibaccata and A. suberecta were introduced and naturalized (20). A. suberecta I. Verd is a herb with thin and narrow leaves, separate male and female flowers and capsulated fruits (20,21). Compared with other species, there have been few phytochemical and bioactivity studies on A. suberecta. To the best of our knowledge, the only previous study, into A. suberecta leaf protein concentrate, suggested that its nutritional value was due to its high lysine content (21). The aim of the present study was to investigate the in vitro and in vivo antioxidative and hepatoprotective potential of A. suberecta ethanol extract (ASEE), including standardization and validation by chromatography.
Materials and methods
Collection of plant material and extract preparation. The clean and healthy aerial shoots of Atriplex suberecta I. Verd were collected from Jazan (Saudi Arabia) and authenticated (voucher specimen no. 16386) by a plant taxonomist at the College of Pharmacy, King Saud University (Riyadh, Saudi Arabia). Briefly, the air-dried leaf powder (300 g) was soaked in 70% ethanol (Merck KGaA, Darmstadt, Germany) for 2 days at room temperature and filtered (Whatman ® Filter paper, grade 1; Sigma-Aldrich; Merck KGaA, Darmstadt, Germany). The extraction process was repeated twice with the same solvent, followed by evaporation using a rotary evaporator (BÜCHI Labortechnik AG, Flawil, Switzerland) under reduced pressure at 40˚C. The obtained semi-solid ASEE (31.5 g) was stored at -20˚C prior to use.
Free-radical scavenging activity of ASEE. The free-radical scavenging ability of ASEE against 1,1-diphenyl-2-picrylhydrazyl (DPPH) was evaluated quantitatively as described previously (23) with minor modifications. In brief, 100 µl of different concentrations (31.25, 62.5, 125 and 250 µg/ml) of the ASEE was mixed with 40 µl DPPH (0.2 mM in methanol) in a 96-well microplate. The control was prepared using the solvent (methanol) only in addition to the same amount of DPPH reagent to remove any inherent solvent effect. Ascorbic acid was used as the standard. Following 30 min incubation at 25˚C the decrease in absorbance (Abs) was measured at λ=517 nm using a microplate reader (ELx800; BioTek Instruments, Inc., Winooski, VT, USA). The experiment was performed in triplicate and the radical scavenging activity was calculated from the following equation: Percentage radical scavenging activity = [1 -(Abs sample /Abs control )] x 100.
Lipid peroxidation assay. The lipid peroxidation activity of ASEE was evaluated using the β-carotene bleaching method as previously described (24) with minor modifications. Briefly, 0.25 mg β-carotene was dissolved in 0.5 ml chloroform and added to flasks containing 12.5 µg linoleic acid and 100 mg Tween-40. The chloroform was evaporated at 43˚C using a Savant™ Universal SpeedVac™ Vacuum system concentrator (Thermo Fisher Scientific, Inc.). The resultant mixture was immediately diluted to 25 ml with distilled water and agitated vigorously for 2-3 min to form an emulsion. A 200 µl aliquot of the emulsion was added to a 96-well plate containing 50 µl ASEE or 500 µg/ml gallic acid (standard). A control containing solvent (emulsion) was also prepared. The test was performed in triplicate and the plate was incubated at 50˚C for 2 h. The Abs was read at λ=470 nm at 30 min intervals using a microplate spectrophotometer. The antioxidant activity was estimated using two different methods; initially the kinetic curve was obtained by plotting Abs of each sample against time and then the antioxidant activity was expressed as percentage inhibition of lipid peroxidation using the following equation: Percentage inhibition = [(As120 -Ac120)/(Ac0 -Ac120)] x 100, where As120 and Ac120 are the Abs of the sample and control at 120 min, respectively, and Ac0 is the Abs of the control at 0 min.
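As a minimal sketch, the two percentage formulas quoted in the preceding assays translate directly into code; the function names and the sample absorbance readings below are illustrative, not values from the study.

```python
def dpph_scavenging(abs_sample: float, abs_control: float) -> float:
    """Percentage DPPH radical scavenging = [1 - (Abs_sample / Abs_control)] x 100."""
    return (1.0 - abs_sample / abs_control) * 100.0

def lipid_peroxidation_inhibition(as120: float, ac0: float, ac120: float) -> float:
    """Percentage inhibition of beta-carotene bleaching over 120 min:
    [(As120 - Ac120) / (Ac0 - Ac120)] x 100."""
    return (as120 - ac120) / (ac0 - ac120) * 100.0

# Illustrative absorbance readings (not measured values from the study):
print(dpph_scavenging(abs_sample=0.31, abs_control=0.82))               # ~62.2%
print(lipid_peroxidation_inhibition(as120=0.55, ac0=0.90, ac120=0.30))  # ~41.7%
```

The same normalize-against-control pattern underlies the MTT cell-survival and caspase-activity percentages described in the next subsections.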
In vitro hepatoprotection assay of ASEE. HepG2 cells were seeded in a 96-well flat-bottom plate (0.5x10 5 cells/well) and grown for 24 h as described above. Liver cytotoxicity was induced by DCFH (IC 50 100 µg/ml) treatment as previously described (25). ASEE was initially dissolved in DMSO (200 mg/ml) and further diluted in RPMI-1640 medium to prepare four doses (25, 50, 100 and 200 µg/ml) and an untreated control containing DMSO only (volume equivalent to 200 µg/ml ASEE). As determined previously (data not shown), the final concentration of DMSO used never exceeded >0.1% and therefore was tolerated by the cultured cells. The culture monolayers were replaced with culture medium containing DCFH (100 µg/ml) and a dose of ASEE, including the untreated as well as the DCFH only-treated controls. The treated cells were incubated for 48 h at 37˚C in a CO 2 incubator, followed by an MTT assay (TACS MTT Cell Proliferation Assay kit, Trevigen, Gaithersburg, MD, USA) according to the manufacturer's instructions. Briefly, MTT reagent (10 µl/well) was added to the cells and incubated for 3 h. The lysis buffer (100 µl/well) was gently added and further incubated for ~1.5 h. The Abs was recorded at λ=570 nm by a microplate reader, and data was analyzed using non-linear regression (Excel software 2010; Microsoft Corporation, Redmond, WA, USA) to determine the percentage cell survival as follows: Percentage cell survival = [(Abs sample -Abs blank )/(Abs control -Abs blank )] x 100.
Anti-apoptotic signaling assay of ASEE. To determine the anti-apoptotic effect of ASEE, caspase-3 and -7 activation was measured using an Apo-ONE ® homogenous caspase-3/-7 assay kit (Promega Corporation, Madison, WI, USA) following the manufacturer's protocol. Briefly, HepG2 toxicity was induced with DCFH (100 µg/ml) and treated with ASEE (25, 50, 100 and 200 µg/m) for 48 h as described above. Caspase-3/-7 reagent (100 µl/per well) was added and mixed by gently rocking the culture plate. Treated cultures were incubated for 5-6 h in the dark at room temperature and the Abs was measured at λ=570 nm. Non-linear regression analysis was performed to determine the percentage cell proliferation and caspase activity as follows: Percentage cell proliferation = [(Abs sample -Abs blank )/(Abs control -Abs blank )] x 100.
Animals and acute toxicity test. A total of 30 male Wistar rats (weight, 200-220 g; age, 8-9 weeks) received from the Experimental Animal Care Center, King Saud University (Riyadh, Saudi Arabia) were kept in polycarbonate cages in a sterile room under a controlled 12 h dark/light cycle at 25±2˚C with 50-60% humidity. The animals were provided standard rodent chow diet (Grain Silos & Flour Mills Org., Riyadh, Saudi Arabia) and water ad libitum. The animals were divided into five test groups (n=6/group/cage) that were each fed different doses of ASEE (50, 100, 250 and 500 mg/kg.bw), including a control group that was fed normal saline instead of ASEE. All ASEE treated rats, including the control group were observed continuously and uninterruptedly for 1 h and then at 30 min intervals for 4 h for any gross behavioral change and general motor activities, including writhing, convulsion, response to tail pinching, gnawing, pupil size and feeding behavior, and additionally monitored for up to 72 h for any mortality. No behavioral change was observed in the treated or control rats. The present study was approved by the Ethics Committee of the Experimental Animal Care Society (King Saud University, Riyadh, Saudi Arabia) and adhered to its guidelines.
Experimental design and treatment. Upon acclimatization to the laboratory conditions for 1 week, the rats (n=30) were randomized and assorted into five groups (GI-GV) with 6 rats in each group. The GI group was fed orally with normal saline (1 ml) and served as the untreated control; the GII group received carbon tetrachloride (CCl4) in liquid paraffin (1:1) only, 1.25 ml/kg intraperitoneally (IP). The GIII, GIV and GV groups also received CCl4; GIII and GIV were additionally treated with ASEE at 100 and 200 mg/kg, respectively, whereas GV was treated with silymarin (10 mg/kg), used as a comparison to the current experimental and clinical standard (23,26). All treatments were administered for 3 weeks.

Rat sacrifice, blood collection and liver tissue preparation. Following 3 weeks of treatment, rats (weight, 200-220 g) were anesthetized with pentobarbital sodium (50 mg/kg, intraperitoneally; Sigma-Aldrich; Merck KGaA) and sacrificed by cervical dislocation. Death was confirmed by monitoring the heartbeat, the absence of withdrawal to paw pinch and the non-response of pupils to light. While under anesthesia, the rats' blood was collected with a 23G needle via cardiac puncture; sera were separated at 1,000 x g for 10 min at 4˚C and stored at -20˚C until biochemical analysis. The livers were quickly removed and fixed in 10% neutral buffered formalin (NBF) for 48 h at room temperature. The fixed specimens were processed overnight for dehydration, clearing and paraffin impregnation using an automatic tissue processor (Sakura Finetek Europe B.V., The Netherlands) and cut into 4-µm-thick sections using a rotary microtome (RM2245; Leica Microsystems GmbH, Wetzlar, Germany). Co., Ltd., Horwich, Lancashire, UK). Very low-density lipoproteins (VLDL) and low-density lipoproteins (LDL) were calculated using the following two standard formulas: TG/5 and [Cholesterol − (VLDL + HDL)], respectively. The serum total protein (TP) was estimated using a kit (Crescent Diagnostics, Jeddah, Saudi Arabia) and the following equation: TP = (Abs_sample/Abs_standard) x concentration of standard.
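A minimal sketch of the two standard lipid formulas quoted above (VLDL = TG/5 and LDL = Cholesterol − (VLDL + HDL)); the serum values used are illustrative, not data from the study.

```python
def vldl(triglycerides: float) -> float:
    # VLDL estimated as TG/5 (standard Friedewald-style convention, mg/dl).
    return triglycerides / 5.0

def ldl(total_cholesterol: float, triglycerides: float, hdl: float) -> float:
    # LDL = total cholesterol - (VLDL + HDL).
    return total_cholesterol - (vldl(triglycerides) + hdl)

# Illustrative serum values in mg/dl (not from the study):
print(vldl(150.0))              # 30.0
print(ldl(200.0, 150.0, 45.0))  # 125.0
```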
Determination of tissue malondialdehyde (MDA) and non-protein sulfhydryl (NP-SH).
For tissue MDA the method reported by Utley et al (27) was followed. Briefly, the liver tissues were homogenized in 0.15 M KCl at 40˚C (Potter-Elvehjem type C homogenizer) to give a 10% w/v homogenate. The Abs of the solution was then read at λ=532 nm and the MDA content (nmol/g wet tissue) was calculated by reference to a standard curve of MDA solution. Hepatic NP-SH was measured according to the method of Sedlak and Lindsay (28). The tissues were homogenized in ice-cold 0.02 mM EDTA and the Abs (λ=412 nm) was measured following the addition of 5,5'dithio-bis(2-nitrobenzoic acid) against the control.
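Reading MDA off a standard curve amounts to a linear calibration; the following sketch assumes a simple least-squares fit, and the standard concentrations and absorbances are mock values, not the study's calibration data.

```python
import numpy as np

# Hypothetical MDA calibration standards: concentration (nmol) vs absorbance at 532 nm.
conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
absorbance = np.array([0.02, 0.11, 0.20, 0.39, 0.77])

slope, intercept = np.polyfit(conc, absorbance, 1)  # linear fit: Abs = m*conc + b

def mda_from_abs(a532: float) -> float:
    """Invert the calibration line to get MDA (nmol) from a sample absorbance."""
    return (a532 - intercept) / slope

print(round(mda_from_abs(0.30), 2))  # roughly 7.5 nmol under these mock standards
```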
Microscopy and histopathological evaluation. Morphological investigation of the cultured HepG2 cells was performed under a microscope to investigate any changes in the cells cultured with different concentrations of ASEE and DCFH at 24 and 48 h post-treatment. The sections of liver tissues fixed in NBF (for 48 h at room temperature) were stained with hematoxylin and eosin for 2-3 min at room temperature as previously described (29). Tissue sections were histopathologically examined under a light microscope (OMX1200C; Nikon Corporation, Tokyo, Japan) and images (at magnifications x200 and x400) were captured using a mounted digital camera.
Qualitative phytochemical screening of ASEE. Phytochemical screening tests for major secondary metabolites, including alkaloids, flavonoids, tannins and saponins were performed using standard procedures as described previously (30)(31)(32). Briefly, for alkaloids 0.5 mg ASEE was dissolved in 2% hydrochloric acid (Sigma-Aldrich, Merck KGaA) and filtered. Fresh Mayer's reagent (0.68 g mercuric chloride and 2.5 g potassium iodide; Sigma-Aldrich; Merck KGaA) prepared in distilled water (50 ml, final volume) was added to the 3 ml ASEE solution in a test tube. The formation of a yellow precipitate confirmed the presence of alkaloids. For flavonoids, 5 ml ASEE solution was treated with several drops of 20% sodium hydroxide (Sigma-Aldrich; Merck KGaA) in a test tube. The appearance of an intense yellow color that turned colorless following the addition of diluted hydrochloric acid was indicative of flavonoids. For tannins, 0.25 mg ASEE was dissolved in 10 ml water in a test tube and several drops of 5% ferric chloride (Sigma-Aldrich; Merck KGaA) were added. The development of a brown-green or blue-black color indicated the presence of tannins. For saponins, 0.5 mg ASEE was dissolved in 10 ml water in a test tube and agitated vigorously to form a thick persistent froth, which represented a positive result for saponin.
Standardization of ASEE by the validated high-performance thin-layer chromatography (HPTLC) method. The reverse phase (RP)-HPTLC method was used to standardize the 70% ethanol extract of A. subrecta as described previously (33). The chromatography was performed on a 10x10 cm precoated silica gel F254 RP-HPTLC plate using rutin as the standard reference. Several mobile phases were tried to obtain a good resolution and separation of the different compounds present in the ASEE. Based on observations, acetonitrile and water were selected in the ratio of 4:6 as a suitable mobile phase to perform the standardization of ASEE. The standard and the samples were applied on the HPTLC plate by an Automatic TLC Sampler-4 (CAMAG Chemie-Erzeugnisse & Adsorptionstechnik AG, Muttenz, Switzerland). The plate was developed under controlled condition in an Automated Developing Chamber-2 and scanned by TLC Scanner-3 (λ=363 nm) (both CAMAG Chemie-Erzeugnisse & Adsorptionstechnik AG).
Statistical analysis. Data are presented as the mean ± standard error of three (in vitro) and six (in vivo) determinations. The total variation present in a set of data was estimated by one-way analysis of variance followed by Dunnett's post hoc test. Excel 2010 (Microsoft Corporation, Redmond, WA, USA) was used to analyze the data. P<0.05 was considered to indicate a statistically significant difference.
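A minimal sketch of the stated analysis pipeline (one-way ANOVA followed by Dunnett's post hoc test), here using SciPy rather than Excel; the group data are random placeholders, and scipy.stats.dunnett requires SciPy >= 1.11.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(100, 10, size=6)  # n=6 per group, as in the study design
treat_a = rng.normal(120, 10, size=6)  # placeholder treatment groups
treat_b = rng.normal(110, 10, size=6)

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(control, treat_a, treat_b)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Dunnett's test compares each treatment against the control group.
res = stats.dunnett(treat_a, treat_b, control=control)
print("Dunnett p-values vs control:", res.pvalue)
```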
In vitro cytoprotective and anti-apoptotic effect of ASEE on HepG2 cells. Visual observation under a microscope revealed the marked cytotoxic effect of DCFH on the HepG2 cells, indicated by apoptosis or altered morphology compared with the untreated cells (data not shown). However, treatment with ASEE resulted in morphological recovery from DCFH toxicity at 24 and 48 h (data not shown). An MTT assay revealed attenuation of HepG2 cell toxicity by ASEE in a dose-dependent manner (Fig. 3A). Treatment with 50, 100 and 200 µg/ml ASEE significantly restored cell proliferation to 68, 76 and 110%, respectively, compared with the untreated cells (P<0.001; Fig. 3A). Furthermore, in the anti-apoptotic signaling assay, ASEE at doses of 50, 100 and 200 µg/ml significantly downregulated caspase-3/-7 activity to 76, 43 and 18%, respectively, compared with the DCFH-only group (P<0.001; Fig. 3B).
Normalization of liver biochemical markers by ASEE.
The acute toxicity test revealed that ASEE (500 mg/kg.bw) was tolerated and the animals survived healthily (data not shown). Furthermore, the therapeutic potential of ASEE (100 and 200 mg/kg.bw) was examined against CCl4-induced in vivo hepatotoxicity. In CCl4-only treated rats, serum SGOT, SGPT, GGT, ALP and bilirubin levels were significantly elevated compared with the control group (P<0.001; Table I), demonstrating significant hepatotoxicity. By contrast, the administration of ASEE (200 mg/kg) significantly normalized these parameters, in line with silymarin, compared with the control group (P<0.01 and P<0.001; Table I). The SGOT, SGPT, GGT, ALP and bilirubin levels in the ASEE (200 mg/kg) + CCl4 group were significantly reduced compared with the CCl4-only group (P<0.05, P<0.01 and P<0.001). In addition, in the CCl4-injured rats with altered serum lipid profiles, ASEE (200 mg/kg) treatment significantly reduced the cholesterol, TG and VLDL levels and improved the HDL level, comparable to the silymarin-treated group (P<0.01 and P<0.001; Table II). Furthermore, an increase in the MDA level and decreases in the tissue NP-SH and TP concentrations were observed in the CCl4-only group. ASEE (200 mg/kg) significantly normalized these parameters in the CCl4-injured rats (P<0.01 and P<0.001; Table III).

Histopathological improvement by ASEE. The rat liver histopathological analysis revealed CCl4-induced necrotic and fatty degenerative changes (panel GII; Fig. 4) compared with the control group (panel GI; Fig. 4). In the ASEE group (100 mg/kg.bw/day), a congested central vein with mild necrosis and fatty changes was observed (panel GIII; Fig. 4). In addition, the higher dose of ASEE (200 mg/kg.bw/day) normalized the hepatocyte lesions and resulted in full recovery (panel GIV; Fig. 4), comparable to that observed in the silymarin group (panel GV; Fig. 4). The histopathological data therefore confirmed the in vivo hepatoprotective efficacy of ASEE.
Phytochemical screening of ASEE. The qualitative phytochemical screening revealed the presence of flavonoids, alkaloids, tannins and saponins in ASEE (data not shown).
Chromatographic quantification of rutin in ASEE. ASEE was further standardized by a validated RP-HPTLC method using rutin as an antioxidant biomarker. Of the various solvent combinations tested, acetonitrile and water (4:6; v/v) was indicated as the optimal mobile phase for the estimation of rutin in ASEE ( Fig. 5A and B). A sharp and compact spot of rutin was identified at R f =0.67 (Fig. 5C), with clear separation along with different phytoconstituents of ASEE (Fig. 5D) at the optimized mobile phase volume (20 ml) and saturation time (20 min). The estimated content of rutin in ASEE was 1.94 µg/mg (dry weight).
Discussion
Cellular oxidative stress is a process in which reactive oxygen and nitrogen species, common toxic products of redox reactions, are increased (1). Oxidative stress is closely associated with the occurrence and development of various conditions, including cirrhosis and carcinoma, which are chronic liver diseases (34). The healthy body has a set of hepatic antioxidant enzymes to prevent and neutralize free radical-induced cellular damage (35). However, exposure to a hepatotoxic agent may cause the generation of free radicals to exceed the protective capacity of the antioxidant enzymes (34,35). The effectiveness of hepatoprotective agents is therefore dependent on their ability to attenuate harmful free radicals and to maintain normal liver functions (3,4,34). In the present study, the in vitro and in vivo antioxidative and hepatoprotective potential of ASEE was investigated. DPPH is a molecule containing a stable free radical which, upon receiving an electron from an antioxidant agent, undergoes reduction, with a decrease in the intensity of its purple solution and hence in its absorbance (23). As it is recommended to conduct more than one in vitro assay (36), in the present study the antioxidant activity of ASEE was also confirmed by β-carotene bleaching.
In the β-carotene bleaching method, linoleic acid-generated free radicals attack unsaturated β-carotene, causing it to undergo oxidation and lose its orange color. During the in vitro DPPH free radical scavenging and β-carotene-linoleic acid bleaching assays, ASEE demonstrated antioxidant activity close to the levels of ascorbic and gallic acids. Notably, flavonol glycosides from the aerial parts of A. halimus have been shown to have a clear DPPH radical scavenging ability (37). In addition, septanosides isolated from A. portulacoides have recently been highlighted for their in vitro antioxidant activity using DPPH, ABTS+, Fe3+ and catalase assays (38). DCFH is typically used to estimate in vitro oxidative stress generated by free radicals through the oxidation of DCFH into the fluorescent DCF (39). In addition, it is also used as a potent cytotoxic agent against an array of human cell lines (25). In the in vitro hepatoblastoma cell culture model used in the present study, ASEE promoted HepG2 cell proliferation and recovery from DCFH toxicity in a dose-dependent manner. Apoptotic cell death caused by reactive oxygen or nitrogen molecules is a well-known phenomenon (34,35). In the present study, the apoptotic-signaling assay revealed a dose-dependent inhibition of caspase-3/-7 activation by ASEE against DCFH-induced HepG2 cell death. In conclusion, ASEE exhibited promising antioxidative and cytoprotective action against chemical toxicity.
To further confirm the in vitro effects, the in vivo therapeutic potential of ASEE was examined in the CCl4-injured livers of Wistar rats. CCl4 is a common hepatotoxin used in the experimental study of liver diseases that induces free radical generation in liver tissues (23,26). Clinically, CCl4-induced acute hepatotoxicity manifests as jaundice and elevated levels of liver enzymes, followed by hepatic necrosis (40). In a previous study, A. lentiformis ethanol and n-butanol extracts were reported to have antioxidant activities, including normalization of liver functions by a significant increase in serum alanine transferase levels (41). In the present study, significant elevation of serum SGOT, SGPT, GGT, ALP, bilirubin and TP was observed in CCl4-treated rats, which indicates damage to the hepatic tissues. Treatment with ASEE demonstrated its therapeutic ability to normalize the serum biomarkers via attenuation of CCl4 toxicity, at a level comparable to treatment with silymarin. In addition, ASEE also normalized the serum cholesterol, TG, LDL and HDL levels in the CCl4-treated rats.
MDA is used as a marker of lipid peroxidation of the cell membrane, which may cause cell damage (42). The MDA level was reduced in ASEE-treated rats, suggesting its cytoprotective and curative activities against CCl4. In addition, the liver NP-SH level in CCl4-treated animals was significantly diminished compared with the control group, suggesting oxidative hepatocellular damage. The administration of ASEE or silymarin replenished NP-SH in the CCl4-treated animals, demonstrating its protective activity.
The histopathological changes observed in the liver tissues revealed that the administration of ASEE led to recovery from hepatic damage. This was evidenced by the presence of normal hepatic cords and the absence of necrosis, with less fatty infiltration, in CCl4-treated rats. These results indicate the in vivo hepatoprotective effects of ASEE through abatement of the chemical-induced oxidative and apoptotic pathways.
Furthermore, the antioxidative and hepatoprotective activities of ASEE may be attributed to the presence of antioxidant flavonoids, alkaloids, polyphenols and saponins, as confirmed by the qualitative phytochemical screening. The hepatoprotective activity of flavonoids is due to their ability to scavenge and reduce cellular free radicals. Rutin, a natural bioflavonoid distributed in a range of medicinal plants, is known for its pharmacological properties, including its strong antioxidant and anti-lipid peroxidative activities (43,44). Previously, the in vivo hepatoprotective efficacy of rutin in CCl4-treated BALB/cN mice has been reported (45). The identification of rutin in A. subrecta by the validated HPTLC method is in agreement with previous findings and supports its therapeutic attribution to the prevention and treatment of liver diseases. In conclusion, to the best of our knowledge this is the first investigation into the hepatoprotective effects of A. subrecta, and it has revealed promising antioxidative and hepatoprotective potential against chemical-induced in vitro and in vivo liver injury. These results were supported by the phytochemical analysis and the identification of rutin, a well-known antioxidant flavonoid, in the plant. Therefore, A. subrecta may be a valuable source of natural antioxidant or health-protective agents for managing oxidative stress-associated diseases. However, further investigation into its phytochemical properties and active principles, including an assessment of any other therapeutic contributions, is required. | 2018-03-26T00:32:27.740Z | 2018-03-02T00:00:00.000 | {
"year": 2018,
"sha1": "b83569e96528344f402315f078b76565f501d240",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2018.5919/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b83569e96528344f402315f078b76565f501d240",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
226311545 | pes2o/s2orc | v3-fos-license | Sequential Doping of Ladder-Type Conjugated Polymers for Thermally Stable n-Type Organic Conductors
Doping of organic semiconductors is a powerful tool to optimize the performance of various organic (opto)electronic and bioelectronic devices. Despite recent advances, the low thermal stability of the electronic properties of doped polymers still represents a significant obstacle to implementing these materials into practical applications. Hence, the development of conducting doped polymers with excellent long-term stability at elevated temperatures is highly desirable. Here, we report on the sequential doping of the ladder-type polymer poly(benzimidazobenzophenanthroline) (BBL) with a benzimidazole-based dopant (i.e., N-DMBI). By combining electrical, UV–vis/infrared, X-ray diffraction, and electron paramagnetic resonance measurements, we quantitatively characterized the conductivity, Seebeck coefficient, spin density, and microstructure of the sequentially doped polymer films as a function of the thermal annealing temperature. Importantly, we observed that the electrical conductivity of N-DMBI-doped BBL remains unchanged even after 20 h of heating at 190 °C. This finding is remarkable and of particular interest for organic thermoelectrics.
■ INTRODUCTION
Conjugated polymers have attracted a great deal of attention as a class of semiconducting materials that hold promise for the development of a wealth of traditional as well as unconventional low-cost and distributed technologies. 1−9 Their versatile chemical synthesis and inexpensive solution processability enable cost-efficient large-scale production of light, flexible, and even biocompatible electronic devices which would otherwise be difficult to realize using traditional inorganic semiconductors. 10−12 The electronic and electrical properties of π-conjugated polymers, and thus, the performance of the resulting (opto)electronic devices, depend strongly on the charge carrier concentration, which can be tuned by so-called electrical doping. 13 Both p-doping and n-doping are needed to optimize various electronic devices, including organic solar cells, field-effect transistors, and thermoelectric generators. 14−22 This is typically achieved via an electron or proton/hydride transfer between the dopant molecule and the polymer backbone, a process that increases the charge carrier density and hence improves the electrical properties. 23 Conjugated polymers and molecular dopants can either be coprocessed using a common solvent or sequentially processed by exposing the polymer film to the dopant vapors 24−26 or to the dopant dissolved in an orthogonal solvent. 27,28 The advantage of sequential doping over coprocessing is that the morphology of the doped films remains to a large extent undisturbed, 28 thus yielding electrical conductivities that are superior to those commonly reached with coprocessing methods. 29 Besides a high conductivity, the thermal stability of the electronic properties of doped polymers is a crucial parameter for many applications where high temperature operation is required, as in the case of solar cells and thermoelectrics. Although there exist many doped polymer systems that are stable upon mild thermal annealing, high temperature operation induces unfavorable diffusion and sublimation of the dopant molecules, 30 which eventually degrade the electrical properties and yield reduced device performance. 31−33 This is, for instance, the case of poly(3-hexylthiophene) (P3HT) p-doped with 2,3,5,6-tetrafluoro-7,7,8,8-tetracyanoquinodimethane (F4TCNQ), which shows poor thermal stability above 90°C because of the sublimation of F4TCNQ. 34 Possible strategies to improve the thermal stability of doped polymers are the use of conjugated polymeric dopants 35 or polyelectrolyte counterions 36 as well as the insertion of polar side chains in the polymer backbone to enhance the miscibility and compatibility of the dopant/polymer systems, 37,38 thus extending the operating temperature range. 39 However, the introduction of oligoethylene glycol side chains in the polymer backbone significantly lowers the polymer glass transition temperature, an effect which is known from solar cells to promote diffusivity of the acceptor molecule, 40 thus resulting in poor long-term stability. In this respect, the use of ladder-type conjugated polymers with a rigid and planar backbone might promote long-term stability at higher temperature. Poly(benzimidazobenzophenanthroline) (BBL) is the most notable example of electron-transporting ladder-type conducting polymers, showing an exceptionally high glass transition temperature (>500°C). 41 BBL can be chemically 42 as well as electrochemically 43 doped to a high extent, thus reaching conductivity values that are larger than 1 S/cm.
The high electrical conductivity derives from an extended charge carrier delocalization in the planar polymer backbone, which favors fast charge transport. 42 The main drawback is the limited solubility of BBL in common organic solvents, which impedes coprocessing with most of the typically used n-type dopants. For this reason, doping of BBL has most commonly been achieved by sequential processing via exposure to the volatile tetrakis(dimethylamino)ethylene (TDAE) dopant. However, because of the high vapor pressure of TDAE and its reactivity with molecular oxygen, 44 the electrical properties of vapor-doped BBL films are typically not stable under ambient conditions and degrade quickly at elevated temperatures. 35 Here, we report on the sequential doping of BBL with the air-stable benzimidazole derivative N-DMBI as a promising method for the development of thermally stable n-type organic conductors. We observe that the doping level and thermoelectric properties of N-DMBI sequentially doped BBL films can be tuned by simply varying the thermal annealing temperature. Although the electrical conductivity of N-DMBI sequentially doped BBL is on par with that achieved with TDAE vapor doping (i.e., 1.1 ± 0.3 S cm⁻¹), N-DMBI sequentially doped BBL films exhibit far superior thermal and air stability as compared to the TDAE vapor-doped counterpart. Insights into the effect of processing conditions on the thermoelectric properties were gained through a combination of electrical measurements (electrical conductivity and Seebeck coefficient), electron paramagnetic resonance (EPR), UV−vis−NIR, and Fourier transform infrared (FTIR) spectroscopies, grazing incidence wide-angle X-ray scattering (GIWAXS), fast scanning calorimetry (FSC), and thermogravimetric analysis (TGA). Alongside achieving greatly improved stability with N-DMBI sequential doping when compared to TDAE vapor-doped BBL, we also attain a better understanding of the doping process and the polymer−dopant interactions in BBL.
■ RESULTS AND DISCUSSION
Electrical Measurements. Initially, we attempted to carry out doping of BBL by coprocessing with N-DMBI. Because BBL is only soluble in highly acidic solvents, N-DMBI and BBL were dissolved and blended in methanesulfonic acid (MSA). After spin-coating, the films were dipped into deionized water to remove residual MSA and then thermally annealed at 110°C for 1 h. Very low conductivity values (<1 × 10⁻³ S cm⁻¹) were observed for all of the coprocessing-doped BBL films, regardless of the N-DMBI-to-BBL monomer unit molar ratio (3−100%). Because N-DMBI is a weak base and the n-doping process progresses via either hydrogen removal or a thermally activated hydride transfer process, 45 we ascribe the low conductivity of the coprocessed films to inefficient doping with N-DMBI in the strongly acidic MSA solution.
Because of the marginal solubility of BBL in conventional organic solvents, sequential doping can be conveniently performed by choosing a suitable orthogonal solvent for the molecular dopant. Here, N-DMBI was dissolved in chloroform and spin-coated on top of the annealed dry BBL films, as schematically illustrated in Figure 1c. The films were then thermally annealed at different temperatures, as specified below. Figure 2a shows the evolution of the electrical conductivity of sequentially doped BBL films after thermal annealing for 60 min at temperatures ranging from 70 to 250°C. When BBL films are thermally annealed at 70°C, the conductivity is about 2.8 (±0.9) × 10⁻³ S cm⁻¹, and it dramatically increases up to 1.1 ± 0.3 S cm⁻¹ for annealing temperatures around 210°C.
These values are comparable to those reported for TDAE vapor-doped and electrochemically reduced BBL films (1−2 S cm⁻¹), 42,46 suggesting a successful sequential doping process. Note also that the temperature range at which the conductivity reaches a maximum is compatible with most commercial, flexible plastic substrate materials such as polyether ether ketones, polyimides, and polyarylates. 47 At higher thermal annealing temperatures, the conductivity starts to decline, dropping to 0.7 ± 0.4 and 0.2 ± 0.03 S cm⁻¹ for T = 230 and 250°C, respectively (Figure 2a). We further compared the conductivities of samples annealed for different times (Figure S1a). The temperature-dependent conductivity has a similar trend for all annealing times (10, 60, and 180 min), except for temperatures starting at 210°C, above which the longer annealing times result in a more pronounced loss of conductivity. We attribute this to the diffusion and evaporation of N-DMBI, which becomes more pronounced above 215°C, as indicated by the mass loss observed by TGA (Figure S2). Variable-temperature conductivity measurements were performed on the N-DMBI sequentially doped BBL films annealed at 210°C for 60 min, which yielded the highest conductivity (Figure 2b). The activation energy for charge transport was determined to be 112 ± 18 meV, which is comparable to the value previously reported for TDAE vapor-doped BBL (∼120 meV). 42 We then measured the Seebeck coefficients (S) of the sequentially doped BBL films annealed at different temperatures in an inert environment. Figure 2c shows that the magnitude of S decreases monotonically upon increasing the thermal annealing temperature from 70 to 210°C, going from −212 ± 20 μV K⁻¹ at 70°C toward smaller absolute values at 210°C. The negative sign of S agrees with the n-type nature of the sequentially doped films. The Seebeck coefficient of BBL films annealed for different times follows a similar but inverted trend as the conductivity, with longer annealing times at temperatures higher than 210°C resulting in increased S (Figure S1b). Because the electrical conductivity and S are inversely interrelated, the power factor (PF = S²σ) of sequentially doped BBL reaches a maximum of 1.5 ± 0.3 μW m⁻¹ K⁻² at an intermediate thermal annealing temperature of 190°C (Figure 2d).
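To make the two figures of merit quoted above concrete, the sketch below (ours, not from the paper) fits an Arrhenius law, σ = σ₀ exp(−Ea/kBT), to synthetic variable-temperature conductivity data and then evaluates the power factor PF = S²σ; the paired values of S and σ are assumptions chosen only to land near the reported order of magnitude.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Synthetic conductivity-vs-temperature data obeying an Arrhenius law with
# an assumed activation energy of 112 meV (illustrative only).
T = np.array([220.0, 240.0, 260.0, 280.0, 300.0])   # K
sigma = 5.0 * np.exp(-0.112 / (k_B * T))            # S/cm

# ln(sigma) is linear in 1/T; the slope equals -Ea/k_B.
slope, _ = np.polyfit(1.0 / T, np.log(sigma), 1)
print(f"Ea = {-slope * k_B * 1e3:.0f} meV")         # -> 112 meV

# Power factor PF = S^2 * sigma, using hypothetical paired values.
S = -117e-6       # Seebeck coefficient in V/K (assumed)
sigma_SI = 110.0  # conductivity in S/m, i.e., 1.1 S/cm (assumed)
print(f"PF = {S**2 * sigma_SI * 1e6:.1f} uW m^-1 K^-2")
```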
Thermal Analysis. TGA of N-DMBI indicates mass loss above 200°C, with 5 wt % mass loss at 215°C during heating at 10°C min⁻¹ (Figure S2a), which we assign to evaporation of the dopant. A differential scanning calorimetry (DSC) first heating thermogram of neat N-DMBI solidified by casting from chloroform shows a distinct melting peak at Tm = 110°C (Figure S2b). Cooling and second heating thermograms indicate that N-DMBI does not recrystallize but instead displays a distinct glass transition temperature Tg = −3°C. We also carried out FSC to investigate the thermal behavior of N-DMBI/BBL bilayer films (see the Experimental Section for details). Repeated heating up to 195°C results in flat and reproducible FSC thermograms (Figure S3), which indicates that no new thermal transitions arise because of contact between the dopant and the polymer. Instead, heating up to 245°C leads to a change in the slope of the FSC thermograms, that is, a change in heat flow with temperature, which we attribute to mass loss caused by evaporation of the dopant (cf. Figure S3). Overall, our thermal analysis experiments allow us to rationalize the observed changes in conductivity upon annealing of N-DMBI/BBL films. We argue that annealing up to 200°C leads to a strong increase in conductivity (cf. Figure 2a) because of diffusion of the dopant into the underlying BBL layer, which can readily take place above the Tm of N-DMBI. At temperatures above 210°C, the dopant instead starts to evaporate, which correlates well with the optical and scattering analyses reported below (vide infra).
EPR Analysis. To shed light on the effect of thermal annealing on the charge density of sequentially doped BBL films, we performed EPR spectroscopy at room temperature. The samples were flame-sealed inside EPR quartz tubes filled with N2. As shown in Figure 3a, an EPR signal arising from polarons was observed in the sequentially doped BBL film annealed at 70°C. The extracted spin density continuously increases with increasing annealing temperature from 70 to 210°C (Figure 3b), which agrees with the increase in conductivity observed over the same temperature range. Interestingly, however, unlike the conductivity, the spin density continues to rise even at annealing temperatures above 210°C.
This further increase at T > 210°C is accompanied by a surge in the EPR linewidth (Figure 3c), which we ascribe to an increase in energetic disorder induced by the decomposition of dopant molecules into radicals. 48 These immobile radicals could act as Coulomb scattering centers and negatively impact the charge carrier mobility, thus contributing, together with the evaporation of the dopant molecules, to the reduction in electrical conductivity observed in Figure 2a.
Optical Absorption Spectroscopy. We then used UV−vis−NIR absorption spectroscopy to investigate the optical absorption of pristine and sequentially doped BBL films on glass slides upon thermal annealing at different temperatures (Figure S4). The spectrum of pristine BBL features an intense absorption peak at ∼580 nm, assigned to the π−π* transition. 42 After sequential doping with N-DMBI, a broad absorption band centered around 800 nm appears, indicating the formation of negative polarons. 42 Compared to pristine BBL, the absorption of sequentially doped BBL is slightly blue-shifted. To study the optical changes arising from sequential doping of BBL in more detail, we deposited BBL on CaF2 windows, which due to their wide transparency range enable transmission measurements with both UV−vis−NIR and FTIR spectroscopies on the same film. The UV−vis−NIR and differential UV−vis−NIR spectra in the range 500−2500 nm are presented in Figure 4, along with the differential FTIR spectra in the range 4000−900 cm⁻¹ (see Figures S5 and S6 in the Supporting Information for the combined and raw FTIR spectra, respectively). The differential spectra were calculated by subtracting the spectrum of the N-DMBI/BBL film annealed at 70°C from the spectra of the same film measured after annealing at higher temperatures.
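The differential-spectrum bookkeeping described above is simple to sketch; in the snippet below the toy spectra (a fixed π−π* band at 580 nm plus a growing polaron band at 800 nm) are invented stand-ins for measured data, so only the subtraction step reflects the actual procedure.

```python
import numpy as np

wl = np.linspace(500, 2500, 1001)  # wavelength grid in nm

def toy_spectrum(polaron_amp):
    """Invented absorbance: fixed pi-pi* band plus a variable polaron band."""
    return (0.8 * np.exp(-((wl - 580.0) / 60.0) ** 2)
            + polaron_amp * np.exp(-((wl - 800.0) / 150.0) ** 2))

# One spectrum per annealing temperature (amplitudes are assumptions).
spectra = {T: toy_spectrum(a) for T, a in [(70, 0.1), (150, 0.3), (210, 0.6)]}

# Differential spectra: higher-temperature spectra minus the 70 C reference.
reference = spectra[70]
differential = {T: A - reference for T, A in spectra.items() if T != 70}
```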
Three wide polaronic absorption bands are visible for the sequentially doped BBL after annealing, located at 800, 1330, and 3600 nm (Figure S5). After the N-DMBI layer is deposited, the π−π* absorption band of BBL gradually red-shifts with increasing annealing temperature (see Table S1 in the Supporting Information for the peak wavelengths of this band at different annealing temperatures). The gradual blue-shift back to the value of pristine BBL is consistent with our argument that N-DMBI first gradually diffuses into the BBL film, with the excess being expelled from the surface at the highest annealing temperatures (230 and 250°C). Because of the red shift, we present the differential spectra for samples annealed at higher temperatures by subtracting the spectrum of sequentially doped BBL annealed at 70°C in Figure 4. From the differential UV−vis−NIR spectra in Figure 4b, it can be seen that higher annealing temperatures increase the polaronic absorption bands until 210°C, after which the polaronic absorption begins to decrease. This decrease coincides with the above-discussed decomposition and evaporation of N-DMBI and indicates that the doping level of BBL decreases at the highest annealing temperatures.

Figure 4. (a) UV−vis−NIR spectra, (b) differential UV−vis−NIR spectra, (c) differential FTIR spectra, and (d) stacked differential FTIR spectra of the same sequentially doped BBL film annealed progressively for 10 min at the indicated temperature. The differential spectra in b, c, and d are calculated by subtracting the spectrum of the sequentially doped BBL film annealed at 70°C from the spectra measured after annealing at higher temperature. The spectrum labeled N-DMBI 70°C in (d) is an inverted and scaled spectrum of an N-DMBI film annealed at 70°C, shown for reference.

Vibrational Spectroscopy Characterization. The FTIR spectra in the range 4000−900 cm⁻¹ are shown in Figure S6, with the corresponding differential FTIR spectra shown in Figure 4c,d. The spectra in Figures 4d and S6b are stacked for clarity. The FTIR spectrum of sequentially doped BBL is a superposition of the vibrational absorption bands of BBL and N-DMBI (see Table S2 in the Supporting Information for vibrational band assignment). In the differential spectra in Figure 4, only small changes are observed when the sample is annealed at 110°C, which corresponds to the melting temperature of N-DMBI (Figure S2). Annealing the sample at 150°C makes the changes more visible, with the formation of two new absorption bands between 1350 and 1250 cm⁻¹ that we assign to polaronic absorption in BBL. At 190°C, a shift in the BBL C=O vibration at 1700 cm⁻¹ becomes visible, with the splitting of this band forming a new absorption at 1650 cm⁻¹ at higher annealing temperatures. We have previously shown that the C=O absorption splitting is the spectral fingerprint of polarons in the BBL structure. 49 Annealing at 190 and 210°C progressively decreases the N-DMBI absorption in the aromatic and methylamino C−H stretching region around 3000 cm⁻¹ and in the aromatic ring stretch between 1600 and 1500 cm⁻¹, along with an increase in the polaronic absorption bands. No further loss of N-DMBI is observed after annealing at higher temperatures, suggesting that only N-DMBI that has reacted with BBL remains in the film. In order to track the amount of N-DMBI in sequentially doped BBL, we integrated the N-DMBI peaks in the C−H vibration and aromatic ring stretching regions after annealing at various temperatures (Figure S7).
This shows that the amount of N-DMBI decreases drastically after annealing at 190°C, with only a small amount (∼10%) remaining after annealing the film at 210°C because of the evaporation of N-DMBI on top of the BBL film. This is in line with the observed decreases in conductivity and also matches the weight loss observed in the TGA measurements of N-DMBI (Figure S2). Furthermore, we see a progressive decrease in the polaronic absorption band at 1250 cm⁻¹. This is a clear indication that sequential doping of BBL with N-DMBI proceeds progressively when annealed up to 210°C, after which we observe a loss in N-DMBI content. Note that both UV−vis−NIR and FTIR measurements indicate a reduction in the polaron concentration for T > 210°C, suggesting that the radicals contributing to the EPR signal (cf. Figure 3) do not sit on BBL chains.
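The band-area tracking described above amounts to numerical integration over fixed wavenumber windows. The sketch below (not from the paper) applies a trapezoidal rule to a toy C−H band; the spectrum and the window limits are illustrative assumptions.

```python
import numpy as np

# Toy differential FTIR trace: wavenumber axis (cm^-1) and absorbance.
wn = np.linspace(900.0, 4000.0, 3101)
absorbance = np.exp(-((wn - 2950.0) / 40.0) ** 2)  # invented C-H band

def band_area(wn, a, lo, hi):
    """Trapezoidal area of a band over the window [lo, hi] in cm^-1."""
    mask = (wn >= lo) & (wn <= hi)
    return np.trapz(a[mask], wn[mask])

# Integrate an assumed C-H stretching window; ratios of such areas across
# annealing temperatures track the remaining dopant fraction.
print(f"C-H band area: {band_area(wn, absorbance, 2800.0, 3100.0):.1f}")
```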
ACS Applied Materials & Interfaces
Film Microstructure and Molecular Packing. GIWAXS was performed to investigate the microstructure evolution of sequentially doped BBL films annealed at various temperatures. The 2D-GIWAXS diffraction images are shown in Figure 5, with the corresponding line-cut plots in Figure S8 and the variation of the (100) and (010) d-spacings of the pristine and doped BBL films in Figure S9. Undoped BBL has a preferential edge-on orientation with a lamellar (100) peak in the qz plane at 0.79 Å⁻¹ (d-spacing = 7.92 Å) and an in-plane π−π stacking (010) peak at qxy = 1.83 Å⁻¹ (d-spacing = 3.43 Å). This is in good agreement with the previously reported diffraction pattern of undoped BBL thin films. 35,50 Sequential doping decreases the out-of-plane lamellar stacking of BBL (Figure S9a), whereas the π−π stacking is slightly increased (Figure S9b). Several peaks that are attributed to the diffraction of N-DMBI aggregates (Figure S10) are also visible for the samples annealed between 70 and 190°C (Figure 5). When annealed at 210°C, the N-DMBI peaks largely disappear, corresponding well with the FTIR data indicating a sharp decrease in the amount of N-DMBI because of evaporation, along with a sharp increase in the intrachain stacking and a gradual decrease in the interchain stacking. This suggests that N-DMBI is removed from the sequentially doped BBL films when annealed at T ≥ 210°C.
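As a quick check on the quoted peak positions, real-space periodicities follow from the scattering vector via d = 2π/q; the short sketch below (ours) reproduces the lamellar and π−π d-spacings from the q values given above.

```python
import math

def d_spacing(q: float) -> float:
    """Real-space d-spacing (angstrom) from a scattering vector q (1/angstrom)."""
    return 2.0 * math.pi / q

print(f"(100) lamellar: {d_spacing(0.79):.2f} A")  # ~7.95 A
print(f"(010) pi-pi:    {d_spacing(1.83):.2f} A")  # ~3.43 A
```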
Electrical Stability Measurements. Finally, we tested the thermal and ambient stability of BBL sequentially doped with N-DMBI and compared it to that of TDAE vapor-doped BBL. As shown in Figure 6a, the electrical conductivity of TDAE vapor-doped BBL films dropped by 2 orders of magnitude upon heating the samples at 190°C for 10 h. In contrast, the electrical conductivity of N-DMBI sequentially doped BBL remained unchanged even after 20 h of heating, revealing the remarkable thermal stability of these films. Furthermore, although both N-DMBI sequentially doped BBL and TDAE vapor-doped BBL films are stable in a N2 atmosphere, the former exhibits significantly higher stability in air, which we attribute to the higher ambient stability of N-DMBI as compared to TDAE. As shown in Figure 6b, after exposing the films to ambient conditions for 5 h, N-DMBI sequentially doped BBL shows conductivities 2 orders of magnitude higher than TDAE vapor-doped BBL. The conductivity of N-DMBI sequentially doped BBL did not decrease further after 24 h of exposure. Although N-DMBI is a relatively air-stable n-type dopant, 51 TDAE is inherently unstable under ambient conditions and undergoes chemiluminescence in the presence of oxygen. 44 We believe that this is at the origin of the improved ambient stability of N-DMBI-doped BBL. Our results show conclusively that N-DMBI sequentially doped BBL is more stable than TDAE vapor-doped BBL under ambient conditions and especially at high temperatures, even though the conductivities of both were similar before exposure.
■ CONCLUSIONS
In conclusion, we show that N-DMBI sequential doping of BBL is an effective way to obtain highly conductive films while achieving ambient and thermal stability far superior to those of state-of-the-art TDAE vapor-doped BBL. Sequential doping with N-DMBI is thermally activated, and the doping level can be reproducibly tuned by simply changing the annealing temperature. Optical spectroscopy measurements show that the polaron concentration in the doped films follows the same trend as the conductivity, and the doping level increases until the evaporation temperature of N-DMBI is reached. Our work offers feasible guidelines for developing efficient n-type organic electronic devices with improved stability.
■ EXPERIMENTAL SECTION
Thin-Film Preparation and Sequential Doping. All thin films were deposited on top of glass substrates, which were cleaned by sonicating in water, acetone, and isopropanol for 10 min each, followed by drying with nitrogen. BBL (purchased from Sigma-Aldrich) was dissolved in methanesulfonic acid (MSA) at a concentration of 7.5 mg/mL. To ensure full dissolution of BBL, the solution was stirred at 70°C for 2 h. The warm solution (70°C) was deposited onto the glass substrates by spin-coating at 500 rpm for 30 s. Immediately after spin-coating, the BBL films were dipped into deionized water to remove residual MSA. The obtained BBL films were dried first in an oven at 100°C to remove the water and then thermally annealed on a hot plate at 200°C in a glovebox filled with nitrogen for 1 h, yielding films with a thickness of 40 nm. To sequentially dope the BBL films, N-DMBI (purchased from Sigma-Aldrich) was first dissolved in CHCl3 at 10 mg/mL, and the solution was deposited on top of pristine BBL thin films by spin-coating at 1500 rpm. To tune the doping levels, the sequentially doped BBL films were annealed on a hot plate at various temperatures ranging from 70 to 250°C inside the glovebox.
Electrical Characterization. Electrical conductivity and Seebeck coefficient were measured inside a nitrogen-filled glovebox using a Keithley 4200-SCS. For the measurements reported in Figures 2a,c,d and S1b, Au electrodes with a Ti adhesion layer (Au/Ti = 25 nm/5 nm, L/W = 0.5 mm/15 mm) were deposited on top of glass substrates prior to polymer layer deposition. For the conductivity measurements in Figures 2b and S1a, we fabricated electrodes with a shorter channel length (L/W = 30 μm/1000 μm). The temperature gradient (ΔT) across the sample was applied with two Peltier modules, and the thermovoltage (ΔV) was measured between two separate electrodes (L/W = 0.5 mm/15 mm). S was calculated from the slope of ΔV measured at six different ΔT values. It is worth mentioning that we used an electrode configuration (aspect ratio of the electrodes We/Le = 30) which takes the effect of the contact geometry into consideration. 52 This configuration minimizes the error in determining S, resulting in smaller sample-to-sample variation.
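The Seebeck extraction described above reduces to a one-line linear regression; in this sketch the six (ΔT, ΔV) pairs are invented stand-ins for measured data, chosen so that the slope comes out near −116 μV K⁻¹.

```python
import numpy as np

# Hypothetical thermovoltage readings (V) at six applied temperature
# differences (K); the negative slope signals n-type transport.
dT = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
dV = np.array([-0.12, -0.23, -0.35, -0.47, -0.58, -0.70]) * 1e-3

S, offset = np.polyfit(dT, dV, 1)  # Seebeck coefficient = slope of dV vs dT
print(f"S = {S * 1e6:.0f} uV/K")
```

Fitting the slope rather than averaging single-point ratios ΔV/ΔT suppresses any constant voltage offset, which lands in the intercept instead.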
Spectroscopy Characterization. Optical UV−vis−NIR spectra of pristine BBL and sequentially doped BBL films were measured under a nitrogen atmosphere at room temperature with a PerkinElmer Lambda 900. FTIR spectra in the mid-IR region were measured inside an air-tight sample holder with a N2-purged Bruker Equinox 55 FTIR spectrometer in transmission mode between 4000 and 900 cm⁻¹ with a resolution of 4 cm⁻¹ and a zero-filling factor of 2, using 200-scan averaging.
Quantitative EPR experiments were carried out at the Swedish Interdisciplinary Magnetic Resonance Centre (SIMARC) at Linköping University, using a Bruker Elexsys E500 spectrometer operating at 9.8 GHz (X-band). All spectra were recorded in the dark at room temperature. Quantitative spin counting was calibrated with a standard sample. All EPR spectra were normalized using the effective detection volume of the samples.
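Quantitative spin counting of the kind mentioned above usually proceeds by double integration of the first-derivative CW-EPR line and comparison with a standard of known spin number; the sketch below works on a synthetic line, and the reference intensity and spin count are assumed placeholders.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic first-derivative EPR line: derivative of a Gaussian absorption
# centered at 3480 G (all numbers are illustrative).
field = np.linspace(3400.0, 3560.0, 801)          # magnetic field in G
gaussian = np.exp(-((field - 3480.0) / 6.0) ** 2)
derivative = np.gradient(gaussian, field)         # what the instrument records

# First integration recovers the absorption line; the second gives its area,
# which is proportional to the number of spins in the detection volume.
absorption = cumulative_trapezoid(derivative, field, initial=0.0)
double_integral = np.trapz(absorption, field)

# Calibrate against a standard sample of known spin number (assumed values).
I_ref, N_ref = 10.6, 1e15
print(f"Spins in sample: {double_integral / I_ref * N_ref:.2e}")
```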
Thermal Analysis. TGA was carried out under nitrogen flow between 25 and 500°C with a scan rate of 10°C/min using a Mettler Toledo TGA/DSC 3+ instrument. Differential scanning calorimetry (DSC) measurements were carried out under nitrogen flow between −50 and 150°C with a scan rate of 10°C/min using a Mettler Toledo DSC2 calorimeter. The sample weight for both TGA and DSC was 4 mg. FSC was carried out under nitrogen flow. Grazing Incidence Wide-Angle X-ray Scattering. GIWAXS was measured at Argonne National Laboratory at Beamline 8-ID-E at the Advanced Photon Source (APS). The substrate surface was aligned at an incident angle of 0.130−0.140° with regard to the incoming X-ray beam. The samples were irradiated with a 10.915 keV X-ray beam in air for 2 summed exposures of 3 s (altogether 6 s). The scattered beam was recorded with a Pilatus 1M detector located 228.165 mm away from the sample. Finally, the captured images were processed by employing the GIXGUI software. The background was subtracted by fitting the curves to an exponential decay, and the peaks were fitted to Gaussian functions. ■ ASSOCIATED CONTENT Supporting Information | 2020-11-13T14:06:23.594Z | 2020-11-12T00:00:00.000 | {
"year": 2020,
"sha1": "1ad010d0d16019476a5eda4f6a0a527a14671bca",
"oa_license": "CCBY",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsami.0c16254",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "39d6210883a4afbbd7c9d25ffdbfdd9faeb52f98",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
12195201 | pes2o/s2orc | v3-fos-license | On the road to standardization of D2 lymph node dissection in a European population of patients with gastric cancer
The amount of lymph node dissection (LD) required during the surgical treatment of gastric cancer has been quite controversial. In the 1970s and 1980s, Japanese surgeons developed a doctrine of aggressive preventive gastric cancer surgery that was based on extended (D2) LD volumes. The West has relatively lower incidence rates of gastric cancer, and in Europe and the United States the most common LD volume was D0-1. This eventually caused a scientific conflict between the Eastern and Western schools of surgical thought: Japanese surgeons determinedly used D2 LD in surgical practice, whereas European surgeons insisted on repetitive clinical trials in the European patient population. Today, however, one can observe the results of this complex evolution of views. The D2 LD is regarded as an unambiguous standard of gastric cancer surgical treatment in specialized European centers. Such a consensus of the Eastern and Western surgical schools became possible due to the longstanding scientific and practical search for methods that would help improve the results of gastric cancer surgeries using evidence-based medicine. Today, we can claim that D2 LD could improve the prognosis in European populations of patients with gastric cancer, but only when the surgical quality of LD execution is adequate.
Core tip: The amount of lymph node dissection required during the surgical treatment of gastric cancer has been quite controversial. We can now claim that D2 lymph node dissection improves the prognosis in European populations with gastric cancer, but only when the surgical quality of the lymph node dissection execution is adequate.
INTRODUCTION
Radical surgery for malignant tumors traditionally includes mandatory one-piece removal of regional lymph nodes (LNs). This approach was introduced over 100 years ago by an American surgeon, W.S. Halsted, and has been used to determine the extent of surgery in basic sites of neoplasia including tumors in the gastrointestinal tract. Despite its high clinical effectiveness and use as a standard treatment in Asia, extensive D2/D3 lymph node dissection (LD) has not been widely used in gastric cancer (GC) surgery in Europe and the Americas until recently.
Indeed until recently, European clinical recommendations for cancer treatment did not suggest D2 LD as a surgical standard of care [1] . The relevance of this issue is also evident when considering the surgical standard of Western randomized trials on multimodal treatment for GC. The MAGIC trial set the standard for combined treatment of GC in the European Union, and D2 LD was performed in only 42.5% of patients [2] . The US standard multimodal treatment for GC is based on the INT 0116 trial [3] in which an extended LD was performed in only 10% of patients. In a large-scale clinical trial on perioperative chemoradiotherapy effectiveness (the CRITICS trial; ongoing in Europe), the planned extension of LD is more limited than D2 [4] . Thus, the issue of standardization in lymphadenectomy extension for GC in Western countries remains relevant.
LYMPH NODE DISSECTION IN GASTRIC CANCER
Lymphatic efflux from the stomach travels through a complex multidirectional network [5]. Lymph from different sections of the stomach is drained into the paraaortal LN collector through one of four routes: (1) left subdiaphragmatic, via the LN in the circulation of the left lower diaphragmatic artery; (2) abdominal, via the LN along the left gastric, splenic, and common hepatic arteries and the celiac trunk; (3) upper mesenteric, which receives lymph from the subpyloric LNs and runs along the upper mesenteric artery; and (4) retropancreatic, which is associated with LNs of the hepatoduodenal ligament, upper mesenteric vessels and common hepatic artery. Both the left subdiaphragmatic and abdominal routes drain lymph from the upper third of the stomach. The lymphatic efflux from the gastric body drains primarily through the abdominal route, and lymph efflux from the distal stomach drains through the abdominal, upper mesenteric and retropancreatic routes [6]. Metastases to regional LNs are diagnosed in 37%-65% of patients with tumors in the gastric corpus, in 44%-80% of patients with tumors in the proximal stomach, and in 50%-59% of patients with tumors in the distal stomach [7,8]. The involvement of regional LNs depends directly on the depth of primary tumor invasion. In intra- and sub-epithelial tumors, regional lymphogenous metastases are diagnosed in 0%-5.5% and 19%-31% of patients, respectively [7,9]. In muscle or subserosal layer invasions, regional LN involvement increases to 30%-62%; in serous membrane tumors, regional LN metastases are found in 74% of patients, and in 90%-91% of cases with infiltration of adjacent organs [7]. The first one-piece tissue dissection of regional lymphogenous metastasis during the course of GC surgery was carried out in 1962 by Jinnai et al [10]. Since then, the concept of extended radical LD has become an essential stage in the strategy of GC surgical treatment in Japan. Research in the field of lymph node (LN) topography and extended clinical efficiency formed the basis of the first edition of "General Rules for the Gastric Cancer Study", which was published in the early 1960s under the auspices of the Japanese Research Society for Gastric Cancer [11]. The first English edition of these guidelines was published in Europe in 1995. Subsequently, research performed by the Japanese Gastric Cancer Association (JGCA) formed the basis for a second English edition based on the Japanese classification of gastric cancer by the JGCA [12] as well as the Japanese gastric cancer treatment guidelines [13]. These guidelines describe the following groups of stomach LNs (Table 1, Figure 1).

Table 1 The lymphatic system of the stomach [12]
№1 Right paracardial LNs
№2 Left paracardial LNs
№3 LNs along the lesser curvature
№4sa LNs along the short gastric vessels
№4sb LNs along the left gastroepiploic vessels
№4d LNs along the right gastroepiploic vessels
№5 Suprapyloric LNs
№6 Infrapyloric LNs
№7 LNs along the left gastric artery
№8a LNs along the common hepatic artery (anterosuperior group)
№9 LNs at the celiac trunk
№10 LNs at the splenic hilum
№11p LNs along the proximal splenic artery
№11d LNs along the distal splenic artery
№12a LNs in the hepatoduodenal ligament (along the hepatic artery)
№12b LNs in the hepatoduodenal ligament (along the bile duct)
№12p LNs in the hepatoduodenal ligament (behind the portal vein)
№13 Retro-pancreaticoduodenal LNs
№14a LNs along the superior mesenteric artery
№14v LNs along the superior mesenteric vein
№15 LNs along the middle colic vessels
№16 Para-aortic LNs
№17 LNs on the anterior surface of the pancreatic head
№18 LNs along the inferior margin of the pancreas
№19 Infradiaphragmatic LNs
№20 LNs in the esophageal hiatus of the diaphragm
LNs: Lymph nodes.

Figure 1 Topography of stomach lymph node groups [12].

According to the classification of gastric cancer by the JGCA (1998) [12], the stomach lymphatic system consists of three LN compartments. Each of these is a temporary barrier that prevents tumor cells from entering the lymphatic system. Grouping stomach lymph collectors into compartments created the basis for determining the gradation of category "N" at staging and a theoretical basis for the extension of LD according to tumor site, as reported in the following table (Table 2) [12]. The LN groups 12b, 12p and above are classified as N3; in the given classification, this is equivalent to distant metastases.

Of note, in the last version of the tumor-node-metastasis (TNM) classification introduced by the Union for International Cancer Control (UICC) [14], category "N" is determined not by the topography but rather by the number of affected regional LNs. Accordingly, in the last version of the JGCA guidelines (2011) [13], the extension of nodal dissection is defined according to the extension of gastric resection, as reported in the following figures.

Figure 2 Lymph node dissection levels in distal subtotal gastrectomy [13].
Figure 3 Lymph node dissection levels in gastrectomy [13].
Figure 4 Lymph node dissection levels in proximal subtotal gastrectomy [13].
LDs extended beyond these definitions are classified as D2+. Their effectiveness remains controversial; therefore, they are currently not recommended for routine use in clinical practice [13]. The gastric cancer classification by the JGCA (1998) has demonstrated its high efficiency in several clinical studies [5,15,16]. LN staging based on topography laid the grounds for the JGCA's classification. This is considered anatomical, in contrast to the rather mechanistic quantitative approach of the UICC classification, and allows for consideration of disease propagation and for more accurate prognosis. In support of this thesis, the correlated survival of patients with lesions of various LN groups has been studied: in patients with the same number of regional lymphogenous metastases, survival differed depending on the LN collectors in which the lesions were located [17]. Thus, the localization as well as the quantity of metastatically affected regional LNs has a probable prognostic value. According to Y. Noguchi [18], in N0, LN lesion groups 1-6 (N1 according to the JGCA), LN lesion groups 7-12 (N2), and LN groups 13-16 (N3), the 5-year survival rate was 85%, 60%, 25% and 11%, respectively.
A significant advantage of the second JGCA gastric cancer classification in terms of practical application is its direct link with the volume of LD based on the staging principle of lymphogenous metastasis. Of note, the Japanese classification uses the term "regional lymph node". This is defined not only by the lymph node topography, but also by the site of the primary tumor in the stomach; the UICC classification does not provide this differentiation.
Another obvious advantage of the classification offered by the JGCA [12] lies in the possibility of extrapolating data about the regional LN condition into the UICC classification. The reverse conversion is not possible; therefore, it is not possible to conduct a comparative analysis of retrospective studies in a different series. Western pathologists and surgeons criticize the Japanese GC classification mainly because of its complexity and also because precision mapping is laborious in practice. However, the Eastern and Western GC classifications are finally approaching each other. This tendency can be observed in the latest edition of the TNM UICC classification and the latest editions of the JGCA gastric cancer treatment guidelines [13,14].
EASTERN vs WESTERN POSITION
Results of a retrospective analysis of LD D2 were first published in Japan in 1970 by Mine et al [19] . The authors reported a slight increase in the survival rate among patients with pN0 and a probable increase in the 5-year survival rate from 10% to 21% in the group pN+. Similar results were reported in a study by Kodama et al [20] , who indicated an increase in the 5-year survival rate from 33% to 58% in the entire group of patients.
In the 1970s and 1980s, Japanese surgeons developed a doctrine of aggressive preventive GC surgery based on the extended (D2) and super-extended (D3) LD volumes [21]. Concurrently, in Europe and the United States, the most common LD volume was D0-1. Due to the relatively lower GC incidence rates in the West, European and American surgeons continued to reframe the ideology and master the techniques of extended interventions in GC cases until the end of the 1990s. This eventually caused a scientific conflict between the Eastern and Western schools of surgical thought. Japanese surgeons used D2 LD in surgical practice, whereas European surgeons insisted on repetitive clinical trials in the European patient population. They reasoned that certain biological differences were present in the "Eastern" type of GC [22]. One of the most significant publications from that time was a study of a European population of patients with GC by Pacelli et al [23]. The authors reported a probable increase in the 5-year survival rate from 30% (D1 LD) to 49% (D2, 3 LD) for patients with stage Ⅲ GC and from 50% to 65% in the entire group of patients. Similar results were obtained by a group of German surgeons supervised by Siewert et al [24] during the course of a prospective multicentric trial of nearly 2500 patients. A probable increase in the survival rate was reported in patients with stages Ⅱ-ⅢA GC. However, in patients with pN2 (TNM UICC) or with extensive tumor invasion of the gastric serosa, D2 LD was not associated with increased survival. Over time, researchers increasingly noted the low credibility of non-randomized studies. The results of the first randomized trials published by Dent et al [25] and Robertson et al [26] featured high rates of postoperative complications and mortality. However, the results did not provide high levels of credibility because of the small numbers of patients enrolled. The first large-scale randomized multicentric study of the efficacy of D2 LD in a population of European patients with GC was carried out in the 1990s. This study, known as the Dutch trial [27], involved 1078 randomized patients and was organized by the Dutch Gastric Cancer Group. At the same time, the British MRC (Medical Research Council) carried out its own trial [28] with 400 randomized patients. The first results of these studies were preliminarily published in 1997 at the Second International Gastric Cancer Congress (IGCC) in Munich. However, the necessity of compliance with the full volume of D2 LD dramatically increased the frequency of splenectomies (up to 37% in the Dutch study and up to 65% in the British) and resections of the pancreas (30% in the Dutch study and 56% in the British) in all groups. These studies showed a dramatic increase in the number of postoperative complications after D2 LD (from 25% after performing D0-1 in the control group up to 43% in the Dutch trial and from 28% to 46% in the British trial). They also showed an increase in the postoperative mortality rate (from 4% to 10% in the Dutch trial and from 6.5% to 13% in the British trial) [27,28]. In the Eastern Asian series, however, the rate of postoperative complications was 17%-21% [29,30].
The postoperative mortality rate after D2 LD in Eastern clinics was also significantly lower than in Europe: less than 2% in the Japanese nationwide registry [31] and less than 1% [30] or even zero [29] in specialized centers.
After a 5-year follow-up in the European randomized studies, the expected increase in survival in the D2 LD group was not achieved; the 5-year survival in the Dutch trial was 45% in the D1 LD group and 47% in the D2 LD group. In the British trial, it was 35% in the D1 LD group and 33% in the D2 LD group [32,33] (Figure 5).
Thus, the European oncology community preliminarily concluded that the extended LD volumes used in European GC patients were ineffective. This conclusion was based on evidence-based medicine and relied on the results of the two major Western randomized trials. However, a detailed analysis of these studies and all potential reasons for the lack of a positive result was presented at the 1999 IGCC in Seoul. The summary of this analysis was later published in the New England Journal of Medicine [34]. Despite a good design and detailed statistical analysis, the studies had some serious shortcomings that made the results ambiguous. These included the large number of participating surgical centers (about 80 clinics), which resulted in surgeons obtaining an insufficient amount of practical experience in the surgical procedures required for the study. For instance, some surgeons performed fewer than 5 D2 LD surgeries per year. This not only potentially affected the level of postoperative complications and mortality, but also led to a reduction in LN removal in the course of D2 LD and consequently to a reduction in radical surgeries [34].
There was also a lack of surgical standardization (there were no clear criteria for splenectomy or spleen-saving dissection of the 10th LN group, instrumental or manual anastomosis, etc.). Conversely, surgeons participating in the randomized trial in Taiwan performed a minimum of 80 D2 LD surgeries before the study began. The results of that study revealed a possible increase in survival rates when extended volumes of LD were performed [35]. The median number of LNs removed is an important indicator of LD quality. Significant geographic fluctuations of this indicator in the performance of D2 LD have now been established. There are diametrically polar indicators in the European randomized trials: in the British study, the median number of removed LNs was 17 [28]; in the Dutch study, the number was 30 [32]. There were 25-26 LNs removed in the Western retrospective studies [36,37] and 54 LNs removed in Japanese specialized centers [30]. The minimum adequate number of LNs to be removed in gastric cancer surgeries, according to the requirements of TNM UICC (2009) [14], is 15. This level of LD was provided in 86% [36] to 95% [37] of patients in the Western retrospective studies and in 100% of patients in the Japanese studies [30]. According to Siewert et al [24], the efficiency of LD execution can meet the standards of D2 only when a minimum of 26 LNs are removed.
The average frequency of metastatic lesions in LNs of group №10 (LNs of the splenic hilum) across various tumor sites in the stomach is 8.8%. Metastatic lesions in these LNs are likely to worsen the prognosis [38]. The application of splenectomy on principle, including for LN dissection of the 10th group, was not considered effective in patients with GC until recently. A small study conducted in Korea by Yu et al [39] demonstrated a tendency toward increased survival after splenectomy; however, this result was not statistically significant. A meta-analysis conducted in 2009 by Yang et al [40] also confirmed an increase in the 5-year survival rate of patients with GC after splenectomy. According to other authors [38], unless the tumor has invaded the spleen, splenectomy is necessary only in the case of LN lesions in group №4sa. Therefore, despite the fact that LN dissection of the 10th group is regulated by the JGCA guidelines (2011) [13], the role of splenectomy as a standard stage of D2 LD remains controversial. The answer to this question will likely be clarified soon after the publication of the results of a large randomized trial investigating the efficacy of splenectomy in Japanese patients with cancer of the upper third of the stomach (JCOG 0110, which began in Japan in 2002) [41]. Despite the previous pessimistic results, Hartgrink et al [42] conducted a second analysis of the "Dutch material" in 2001. They found a significant increase in survival in the D2 LD group, especially in patients with metastases in LNs of the first stage of metastasis (N1 by JGCA). After 15 years of observation of the patients in the Dutch trial, no significant difference in survival between the groups under observation was noted. However, when the most controversial group of patients with splenectomies and resection of the pancreatic gland was excluded from the analysis, the 15-year survival rate increased dramatically from 22% in D1 LD to 35% in D2 LD (p = 0.006) [43] (Figure 6).
In 2013, the results of a meta-analysis of 12 major European randomized controlled trials on D2 LD effectiveness were published. These clearly proved the thesis concerning an increased risk of postoperative complications with D2 LD and a possible increase in survival only in the group without splenectomy and resection of the pancreatic gland [44]. Therefore, in the latest European oncology guidelines, D2 LD is the standard surgical procedure, but only in highly specialized centers with extensive experience in such surgeries as well as in postoperative care [45].
According to the Japanese guidelines on the gastric cancer treatment issued by JGCA (2011) [13] , the algorithm of surgical treatment in patients with GC is as follows (Figure 7).
The amount of LD required during the surgical treatment of gastric cancer has been quite controversial. Today, however, in light of evidence-based medicine, one can observe the results of this complex evolution of views: D2 LD is considered an unambiguous standard of GC surgical treatment in specialized centers according to national recommendations in Germany [46], the United Kingdom [47] and Italy [48], as well as the mutual recommendations of the European Society of Medical Oncologists, Surgical Oncologists and Radiation Therapists (ESMO-ESSO-ESTRO) [45]. Such a consensus of the Eastern and Western surgical schools became possible due to the longstanding scientific and practical search for methods that would help improve the results of GC surgeries using evidence-based medicine [49]. In Western surgical terminology, D2 LD is now called a standard volume of intervention, whereas D2+ LD is an extended operation.
The debate on the effectiveness of extended (D2+ LD) interventions in GC cases remains open. A well-known clinical study conducted by Sasako et al [34] did not demonstrate an increase in survival after D2 + para-aortic LD for patients with resectable GC. However, many recent studies have demonstrated the possibility of increased survival after the application of extended LD in a selected group of patients with a high risk of metastasis in LNs of station №16 [50,51]. Furthermore, the effectiveness of laparoscopic D2 LD in GC cases remains undetermined. Today, clinical research is underway in the KLASS-2 trial, which aims to determine the effectiveness of such interventions. The impact of interventions with D1+, D2 and D2+ LD on the risk of intraperitoneal progression of GC after surgery [6] remains unknown.
CONCLUSION
The data show that D2 LD can improve the prognosis in European GC patients, but only when the surgical quality of LD execution is adequate. As part of the 10th IGCC in 2013 in Verona, Italy, the former president of the European Society of Surgical Oncology, Professor C. van de Velde, noted in his expert lecture that "the only way to improve the efficiency of surgical treatment of gastric cancer in Europe is to place patients in specialized surgical centers, provide training so that individual surgeons could specialize on the issue of LD D2 and an objective and permanent audit on quality of lymphadenectomy in each surgical center". | 2018-04-03T00:27:30.035Z | 2016-06-15T00:00:00.000 | {
"year": 2016,
"sha1": "2cdedd2d78da05a9fd62bd9933bbd1329b452e9f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4251/wjgo.v8.i6.489",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "91f2f9b6b3788e7c929c4bc8af26f481553d7039",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
145703365 | pes2o/s2orc | v3-fos-license | Belonging, connectedness and social exclusion
Promoting connectedness and/or a sense of belonging are strategies used in addressing social exclusion. While belonging and connectedness are often used interchangeably, this paper demonstrates that while they may be co-existent, it is equally possible to have one without the other. Hence, this paper contends that these two concepts need to be carefully distinguished by those planning and delivering social work services. Furthermore, consideration of both connectedness and belonging enables a more nuanced understanding of social exclusion which challenges the assumption that inclusion and exclusion are binary opposites, and that it is possible to be both included and excluded at the same time.
Understandings of social exclusion have broadened from originally being closely associated with economic factors to recognise a wider range of restrictions which limit participation in social or cultural arenas that the majority of the population takes for granted. For those who endure inequalities which are seemingly unable to be overcome, a sense of 'dismemberment' (Scanlon & Adlam, 2008, p. 529), or of no longer belonging in the mainstream of society, becomes a very real possibility. There has also been a growing recognition that profound alienation of segments of the community is neither in the best interests of the individuals concerned nor of civil society, and this has been the impetus for the establishment of a range of social interventions.
Irrespective of how social exclusion has been defined, persons and communities who are socially excluded form the majority of users of social work services, and an understanding of social exclusion can assist social workers make sense of the often profound feelings of alienation which they may encounter regularly in their work: The concept of social exclusion attempts to help us make sense out of the lived experience arising from multiple deprivations and inequities experienced by people and localities, across the social fabric, and the mutually reinforcing effects of reduced participation, consumption, mobility, access, integration, influence and recognition. The language of social exclusion recognises marginalising, silencing, rejecting, isolating, segregating and disenfranchising as the machinery of exclusion, its processes of operation. (Taket et al., 2009, p. 3) Social work responses which seek to address social exclusion are often unclear as to whether the aim is to promote a feeling of belonging or enhance the sense of connectedness for participants. This paper contends that although belonging and connectedness often coexist, it is necessary to understand the differences between these states and not to assume they are synonymous. Hence, after defining both belonging and connectedness, this paper will explore a range of relationships between them.
Belonging
Most people have an inherent need to feel they belong, and experience varying degrees of belonging in a range of relationships, settings and places, or through believing there to be others with whom they share a viewpoint. These may include families, friendship groups, religions, sporting clubs, political or social movements, places of work or learning, local communities or whole nations (Crisp, 2010). Belonging may be formalised with members identified as those who have been party to some form of contract, paid fees and/or undergone some ceremony of initiation or welcome. However, belonging doesn't necessarily require formalisation, and even when processes of formalisation have occurred, a sense of belonging may not ensue. Furthermore, people don't always feel they readily belong in the places or groups where one might imagine they fit, or they may feel they belong in situations in which they might readily be considered to be an outsider (Rochford et al., 2008).
Belonging necessarily creates a situation of insiders and outsiders. Belonging involves becoming an insider within a group, organisation or a somewhat less structured network of people with common attributes or beliefs. Insiders may become privy to "secrets" kept from the wider world (Bradley, 2009) and indeed may not even realise such secrets exist until such time as they are no longer deemed outsiders by the in-group (McGirr, 2004).
Belonging is not necessarily something people actively consider except at points of crisis or when they stumble into a new experience of belonging. Nevertheless, some level of belonging is a constant requirement, providing a sense of surety in order to be oneself, and to face the risk of rejection or failure which emerges when we set out on new enterprises, meet new people or go to new places (Callaghan, 1998).
Connectedness
Whereas belonging is associated with subjective notions of identity (Furlong, 2003; Ward, 2009), connectedness relates more to participation in societal organisations or social networks. It has been claimed that "the language of social connectedness recognises acceptance, opportunity, equity, justice, citizenship, expression and validation as the machinery of connectedness" (Taket et al., 2009, pp. 3-4). However, it is possible to be connected but not feel any of the emotional attachment which is associated with belonging.
Many social workers, especially those involved in groupwork or community development initiatives, are often involved in programs which aim to enhance the sense of connectedness for participants, that is, increasing the number of people they know and/or the number of organisations with which they meaningfully relate. Hence, connectedness refers to the number and strength of connections, which potentially could be measured (Standsfeld, 2006).
The relationship between belonging and connectedness
Reflection on the literature about belonging and connectedness suggests at least four different forms of association between belonging and connectedness: connectedness as a precursor to belonging, connectedness reinforcing belonging, connectedness but not belonging, and belonging without connectedness. Each of these will be discussed in turn.
Making connections as a precursor to belonging
Becoming involved in community groups or organisations is one way of establishing one's identity in a complex and fragmented society (Lynch, 2007). While social norms within a family or friendship group can provide strong motivation to make particular connections (Bellamy et al., 2002; Evans & Kelley, 2004), for some it may be more important to seek out others with like interests or viewpoints. However, there may well be a gulf to be crossed after making connections before feeling one truly belongs.
One of the signs by which others let you know that they believe you belong is how they greet you. Titles may give way to first names or, in some instances, polite greetings may be traded for affectionate greetings which might readily be perceived as insults by outsiders: Even when friends are greeted they are often subjected to verbal scorn and abuse, sworn at and called names, and all in the spirit of friendship. If you really like someone, you heap a lot of gentle abuse upon him or her, to show him or her that you care so much. If you don't like someone, you are often very polite and affable, because as yet there is no basis upon which to launch into the friendly banter … (Tacey, 2009, p. 47) Conversely, bullying, teasing, taunting, or being made fun of by one's peers or connections can be ways of telling someone that they don't belong (Hendry & Reid, 2000). The desire to be seen by others as belonging may see individuals yielding to group pressures. The following finding from a study of rural Scottish youth, some of whom experienced bullying and teasing, could relate to many others: Fitting in was part of the larger group of skills needed to "get along" and be able to handle life situations in general. Thinking about what might inoculate a person or make him more vulnerable was common. Complying with a basic set of social standards in appearance, especially dress and behaviour, was a typical tactic to prevent criticism and teasing. Conformity gave one both a defence against certain types of trouble and a gesture towards acceptance. (Hendry & Reid, 2000, p. 711) It is not just at the interpersonal level that people choose to become connected to people or situations where they feel the potential to belong. In the UK (and undoubtedly this applies also in North America and Australasia), the concept of a "normal" university student is someone not long finished secondary school, white, middle or upper class, and who speaks English as their first language. Potential students who do not fit this profile disproportionately seek places in the less elite British universities which have the highest proportions of "nontraditional" students like themselves. This reflects a desire to study in an institution where they believe there will be others like them to connect with and some possibility of feeling they belong (Read et al., 2003).
It is unclear whether it is the number of connections, or the strength of the connections one has, that results in a sense of belonging. For example, it has been suggested that the more connections one has, the greater the likelihood of unemployed persons gaining employment and reducing the potential for exclusion in the wider community (Ngai et al., 2008). However, others would suggest the quality of one's friendships is more important than the number of friends as a preventive measure against social exclusion (Wong, 2006).
Maintaining connections reinforces a sense of belonging
Communities which have high levels of poverty, crime and family breakdown, along with low levels of employment, tend to be characterised only in terms of overall levels of disadvantage, without recognition of the positive features of the community. A recent study of young adults in the Teesside region of England provides a pertinent reminder as to why this approach is problematic. Teesside emerged as a major centre of industry in the 19th century but in recent times has suffered very high levels of unemployment as the result of one quarter of all jobs, and half of all manufacturing and construction jobs in the region, disappearing in the 1970s and 1980s. While there is no denying that for many young adults in Teesside frequent periods of unemployment are a feature of their lives, they were not disconnected but rather maintained strong connections with their communities, and there was no question of them not belonging. In particular the researchers noted: The neighbourhoods and people we studied were relatively rich in terms of supportive social networks of family and friends and bonding capital. Subjectively, informants stressed their strong sense of place attachment, of community and of social inclusion, not exclusion. Informants came from -and were included in -a locally embedded … working community. (McDonald, 2008) Another example of how maintaining connections contributes to an ongoing sense of belonging can be found in the social arrangements often observed in migrant communities. Migration, whether chosen or forced, creates considerable potential for disconnection from one's past, with the importance of these connections only becoming recognisable once one has moved away (hooks, 2009). For those for whom where they come from is fundamental to their sense of identity, maintaining connections with their origins, either directly or indirectly through maintaining cultural links, can reinforce a sense of belonging. Consequently, organisations which contribute to the forging and maintenance of social and cultural ties for ethnic minorities are often claimed to be the most important institutions for their members. This includes ethnospecific religious organisations, for a substantial proportion of whose membership participation may be more a matter of maintaining cultural links than of religious expression (Pui-Lan, 2006). Connecting with others in such organisations may be important not just for migrants but also for their descendants. Subsequent generations of migrants can be left in a state of limbo, especially those whose looks and name betray the family's origins, but for whom those origins are little more than stories they have spent their life listening to (Brady, 2007). Hence, organisations which connect migrants and their families provide a core for the community, enabling both individuals and the community to develop and maintain their cultural identity, providing support in times of difficulties and disappointments, and promoting health and wellbeing (Este & Bernard, 2006).
Connected but not belonging
Being connected may frequently be sufficient, but even those who are seemingly well-connected can fear the loneliness and/or isolation which are associated with feelings of not belonging. Interviews with service users who were dying or bereaved have revealed that one of the most important things for such individuals is knowing that there were family and friends who cared about them (Lloyd, 1997). Although such caring relationships may have emerged through mutual participation in various groups or organisations, merely joining a group is no guarantee that such levels of belonging will emerge.
It can be difficult to feel a sense of belonging even in settings which are ostensibly open to all members of the community. One of the criticisms sometimes applied to religious and charitable organisations is that they have become middle class clubs which are seemingly much more concerned with the maintenance of establishment values than with championing the needs of the most deprived and excluded members of society, who may not be made to feel welcome and may even be encouraged to leave. For example, single mothers have been made to feel they don't belong in some contexts which are supposedly "family focused" but have a narrow view as to what "family" means (Tomlinson, 1995).
Just as explicit disapproval of particular characteristics, including values and lifestyles, can result in people not feeling they belong, so too can the failure to recognise the existence of particular segments of the community and their particular needs or issues. One group for whom this applies is young people who are bisexual: Bisexual-identifying or bisexual-behaving young people are both 'outside' (excluded from) heteronormative and 'homonormative' constructs of sexual binaries, and yet 'belonging' (included) in the sense that they may 'pass' as a 'normal' heterosexual or 'normal' homosexual. Subsequently, they are 'outside' (excluded from) the dominant constructs of gay community and heterosexual society in Australia while simultaneously 'belonging' (included) due to their same-sex and opposite-sex attractions and relationships. (Martin & Pallotta-Chiarolli, 2009, p. 143) As one young person explains: It's like we're the X-files or something. We're not straight A files or gay B files. It's like we mess up their [gay and straight communities'] tidy sex files. But that means they make you feel like you're messed up yourself, as if there's no way their filing system is what's really fucked. (Marita, adolescent research participant in Martin & Pallotta-Chiarolli, 2009, p. 143) Another group not usually considered to be excluded, who in some domains may be well connected but who often feel that in the wider society they don't readily belong, are women who have chosen to be childless and have a career. When there is a dominant ideology that choosing not to be a mother represents deviance, selfishness and even a failure to do one's duty to the nation by contributing to the production of the next generation, women who make such a choice are often made to feel they belong only with the minority of others like themselves (Carey et al., 2009).
Being connected but not belonging is problematic, as in the above examples, when one's sense of identity is violated or invalidated. Nevertheless, there may be many others (individuals, groups or organisations) with which one has connections, connections which may be strong and meaningful but which don't generate a sense of belonging, and to which indeed one may not want to belong.
Belonging but not connected
Some people make a choice to limit their connections with others as a strategy for feeling a sense of belonging in a diverse community. The American cultural critic bell hooks observed this phenomenon among those around her when she found herself living in New York City: New York City was one of the few places in the world where I experienced loneliness … I attributed this to the fact that there one lives in close proximity to so many people engaging in a kind of pseudo intimacy but rarely genuinely making community. To live in close contact with neighbours, to see them every day but to never engage in fellowship was downright depressing. People I knew in the city often ridiculed the idea that one would want to live in community -what they loved about the city was the intense anonymity, not knowing and not being accountable. (hooks, 2009, p. 24) Historically, length of participation has been regarded as predictive of a sense of belonging to a particular community, on the basis that those who are long-established are more likely both to be more involved in community activities and to have larger social networks (Sampson, 1991). However, urban renewal can lead to situations in which older people, despite their long tenure, may become very isolated and alienated from their local communities. In one commentary on gentrification in part of Manchester, it was observed that: … there is no sense of a past, historic, community that has moral rights on the area: rather the older working-class residents, when they are seen at all, are seen mainly as residues. (Savage, Bagnall & Longhurst, 2005, p. 44) Even if they wanted to, long-term elderly residents may find it hard to connect with newcomers in areas which have been gentrified. Typically the newcomers will be much younger and living a very different lifestyle associated with having far greater financial resources. The presence, let alone the needs, of older people in their midst can readily go unrecognised (Phillipson, 2007).
If belonging can occur when any sense of connectedness with others has been dismantled, it can equally emerge for those who have never had a clear sense of connection with other members of a community, or even an intention to become connected with others. Sometimes referred to as "elective belonging": Individuals attach their own biography to their 'chosen' residential location, so that they tell stories that indicate how their arrival and subsequent settlement is appropriate to their sense of themselves. People who come to live in an area with no prior ties to it, but who can link their residence to their biographical life history, are able to see themselves as belonging to the area. (Savage et al., 2005, p. 29) Hence, a sense of belonging can emerge prior to strong, or indeed any, connections being made. For example, in a study of single mothers who were welfare recipients, "Vicky" reported a much greater sense of belonging after moving from a much wealthier area where she was unable to "fit in" to a low-income suburb where she felt "normal": [Low-income Suburb]'s quite a good area to be poor in! Although there is sort of some money around, but generally people aren't as superficial. Yeah, or aren't as judgemental about that sort of thing as they are in other places. A lot of people here are the working class kind, in the suburbs. So I don't feel, you know, that disadvantaged. Although when I lived in [High-income Suburb] before I moved over to this side of town, I did feel it a lot more there. (Vicky, in Cook, 2009, pp. 60-61) Similarly, a mature-age student in her first year of studying at one of Scotland's ancient universities recalls: When I was in the [oldest building] sitting in the lecture theatre you're kind of thinking 'I wonder who sat here before me', you know there's a wonderful sense of history about the place. (in Christie, Tett, Cree, Hounsell & McCune, 2008, p. 573) For this student, a sense of belonging came not from interactions with fellow students, but from the sense of achievement of having been admitted to a prestigious university in which the proportion of students from disadvantaged backgrounds is low. However, rather than reinforcing a sense of belonging, interactions with other students had the potential to diminish it. As another student in the same study commented: I know that the way that I speak is working class and I have got an accent, and being in an environment where there is lots of middle and upper class students and when they are presenting, and they are able to project themselves, it just seems to be a completely different thing for me, because I suppose I am class conscious … and I didn't feel very confident in front of middle and upper class people because I know that I carry an accent. (in Christie et al., 2008, p. 578) A sense of disconnection combined with a desire for belonging is evident in the emergence of virtual communities, particularly those which utilise the internet (Cheng, 2006). While the internet has the potential to enable direct connections to be made with similar others, sensing that there are like others out there somewhere may in itself lead to feeling a sense of belonging to a virtual community. Nevertheless, actively fostering a sense of belonging through making amorphous connections is not without controversy (Simmonds, 2000). Whether participation in a virtual community signifies connectedness is debatable, and perhaps this depends on the forms of participation.
Arguably there might be a stronger rationale for suggesting that participation in interactive forums, whether these are synchronous or asynchronous, fits better with the proposition that connectedness leads to belonging. However, for those who "lurk" or follow communications passively, one might well contend that belonging without connectedness is possible.
Discussion
This paper has sought to distinguish connectedness and belonging and concludes that while these may be co-existent or there may be a causal association between them, both connectedness and belonging can also exist alone without the other. This distinction is important for social workers and other professionals involved in the planning and delivery of programs designed to address social exclusion. For example, it should not be assumed that a program which aims to increase connectedness will necessarily result in participants feeling a greater sense of belonging, or vice versa.
Consideration of both connectedness and belonging provides the potential for a more nuanced understanding of social exclusion which challenges the assumption that inclusion and exclusion are binary opposites, that one is only ever included or excluded (cf. Sheppard, 2006). As many of the examples in this paper demonstrate, the reality for most people is they live concurrently with experiences of inclusion as well as exclusion (Pease, 2009). Within social work, this applies equally to service users as to social workers, recognising that in some contexts service users may become service providers (Meeuwisse, 2008). Recognising belonging and social connectedness as two distinct, although often related, concepts which are in opposition to social exclusion enables social exclusion to be understood as a dynamic rather than necessarily fixed state of existence (Savage & Carvill, 2009). Nevertheless, for this to occur, policy and practice agendas need to ensure that connectedness is not promoted to the extent that recognition of the need for belonging is minimised or lost altogether or vice versa. For example, an access and equity program which builds connectedness by attracting disadvantaged students into higher education but does not address issues of alienation or a sense of not belonging after these students have commenced studying, is likely to have difficulties retaining them in their courses until graduation (Read, Archer & Leathwood, 2003). Conversely, a media campaign which attempts to promote pride in one's local community, to promote a sense of belonging, may have limited effectiveness if there are not adequate opportunities for members of the community to meet and engage with each other in meaningful ways, in other words, to build connectedness (Crisp, 2000).
It must be conceded that conceptualising connectedness and belonging in opposition to social exclusion is not entirely unproblematic. Recognising the many individuals and groups who either feel they don't belong or do not have meaningful connections in their communities may greatly increase the proportion of the population considered to be experiencing social exclusion. In turn, this has the potential to deflect the attention of policy makers and service providers away from the most disadvantaged sectors of the community (Meeuwisse, 2008). Nevertheless, it may well be a reasonable expectation that those who endure significant disadvantage are among those who no longer believe they belong in the wider society (Scanlon & Adlam, 2008).
Tumor cell derived-exosomes as angiogenic agents: possible therapeutic implications
Angiogenesis is a multistep process regulated by a variety of molecules. Extracellular vesicles are cell-derived particles, secreted by several types of cells, that are known to mediate cell-to-cell communication. These vesicles contain different biomolecules including nucleic acids, proteins, and lipids, which are transported between cells and regulate physiological and pathological conditions in the recipient cell. Exosomes, 30–150 nm extracellular vesicles, and their key roles in tumorigenesis via promoting angiogenesis, are of great recent interest. In solid tumors, an adequate blood supply is a hallmark of progression, growth, and metastasis, and it is provided by angiogenesis. Tumor cells abundantly release exosomes containing different kinds of biomolecules, such as angiogenic molecules, that contribute to inducing angiogenesis. These exosomes can be trafficked between tumor cells or between tumor cells and endothelial cells. The protein and nucleic acid cargo of tumor derived-exosomes can be delivered to endothelial cells, mostly by endocytosis, and then induce angiogenesis. Tumor derived-exosomes can also be used as biomarkers for cancer diagnosis, and targeting exosome-induced angiogenesis may serve as a promising tool for cancer therapy. Taken together, tumor derived-exosomes are major contributors to tumor angiogenesis and a putative target for antiangiogenic therapies. However, further scrutiny is essential to investigate the function of exosomes in tumor angiogenesis and the clinical relevance of targeting exosomes for suppressing angiogenesis.
Background
Angiogenesis, the rise of new capillaries and blood vessels from the pre-existing vascular bed, occurs physiologically during embryonic development and wound healing and is essential for tumor growth and metastasis [1]. Tumor growth needs a blood supply to provide oxygen and nutrients for metabolic functions, a need typically met through angiogenesis [2]. Tumor-associated blood vessels are abnormal compared to those of other organs: they display abnormal morphology, excessive branching, abundant bulges and blind ends, irregular endothelial cell (EC) lining, and defective pericyte coverage and basement membrane [3]. These features arise because of excessive and sustained pro-angiogenic signaling. The ECs of the tumor environment show structural and molecular characteristics that distinguish them from those of normal organs [3]. In malignant tumors, tumor cells acquire invasive activity and elicit a stromal response that includes robust angiogenesis [3]. Therefore, tumor development from a benign to a malignant stage is classically associated with an angiogenic switch, initiating the growth of a vascular bed that is aggressively expanding and infiltrative [3,4]. Angiogenic programming of solid tumors is a multidimensional process orchestrated by tumor cells in concert with different stromal cells and their active products, which comprise growth factors and cytokines, the extracellular matrix, and secreted extracellular vesicles (EVs) [5]. EVs are cell-derived vesicles that play pivotal roles in intercellular communication and in biological events such as angiogenesis [6]. These vesicles contain different kinds of active biomolecules from donor cells, which affect the function, fate, and morphology of target cells [7]. Exosomes released by different cells in the tumor microenvironment have emerged as key modulators of cell communication [8]. Most cells produce exosomes; however, tumor cells actively produce exosomes due to the presence of stress conditions such as hypoxia, which induces exosome biogenesis and secretion [9]. Cancer patients, particularly individuals with progressive or metastatic tumors, have significantly higher numbers of exosomes in the plasma than healthy donors [10]. Tumor cell derived-exosomes can reach neighboring tumor cells and surrounding cells such as ECs and promote tumorigenesis [6]. The modulation of the tumor microenvironment and the construction of a pre-metastatic niche are key reprogramming events mediated by exosomes [11]. Exosomes facilitate tumor angiogenesis and seem to regulate mechanisms involved in vessel growth. In this review, we discuss the kinetics of exosomes and summarize current knowledge of the roles tumor cell derived-exosomes play in the process of angiogenesis.
Angiogenesis
The growth and development of new blood vessels from the pre-existing vascular bed is a highly controlled multistep process known as angiogenesis (Fig. 1). This process has been well studied in the literature [12][13][14]; it occurs in natural growth and development as well as in the progression of diseases such as cancer. Two mechanisms are involved in angiogenesis: sprouting angiogenesis and intussusception [12,15]. Previous studies suggest that hypoxia preferentially leads to sprouting angiogenesis, while hemodynamic factors induce intussusceptive angiogenesis. Although the detailed mechanisms of sprouting angiogenesis are better understood, the exact mechanisms that regulate intussusceptive angiogenesis remain unclear. In general, two key cell types, ECs and mural cells, regulate angiogenesis in vessels. In addition, different factors including VEGF, angiopoietins, EGF, FGF, and TGF-α promote angiogenesis, whereas thrombospondin-1/2, angiostatin, interferons, collagen IV fragments, and endostatin act as angiogenesis inhibitors [12][13][14]. Matrix metalloproteinases (MMPs), including MMP1 and MMP2, degrade the capillary wall by dissolving the basement membrane. Once it is degraded, a new branch point is formed within the wall of the existing vessel [16].
Sprouting angiogenesis is a multistep process in which sprout induction and tip cell formation are the initial steps [12] (Fig. 1a). Sprouting is regulated by the balance between pro-angiogenic factors, including VEGF, and factors that induce quiescence, like pericyte contact and VEGF inhibitors. Tip cells have a critical role in the growth of new vessels [17]. In conditions that favor angiogenesis, some ECs react to angiogenic factors like VEGF-A and sprout, while others fail to respond [17]. The responding cells are called "tip cells"; VEGF-A thus licenses tip cells for invasion and migration [17] (Fig. 1a). Selection of tip cells is organized by Notch family receptors and their transmembrane ligand DLL4 (Delta-like ligand 4) [18]. Expression of DLL4 and its Notch receptors is activated by the interaction of VEGF with endothelial cells [19]. Tip cell sprouting is directed by VEGF gradients, mediated by the VEGF-VEGFR-2 interaction. These cells act as motile guidance cells that dynamically spread filopodia to detect attractive or repulsive signals present in the environment [12,18]. Following the tip cells are endothelial stalk cells, which have fewer filopodia but are highly proliferative and maintain adherens and tight junctions to promote the stability and formation of the budding vascular lumen [17,20] (Fig. 1a). In this scenario, VEGF-VEGFR2 signaling orchestrates the migration and proliferation of ECs, the recruitment of pericytes, and tube formation [21]. Other factors may include repulsive or attractive matrix cues and guidepost cells in the tissue environment. Transformation of sprouts into mature vessels involves inhibition of EC proliferation and migration in the new capillaries, stabilization of the newly formed vascular tubes (fusion of the newly formed vessels with others), and recruitment of mural cells, including pericytes and vascular smooth muscle cells [22,23]. Different signaling pathways contribute to the endothelium/pericyte cross talk, of which three of the best known are angiopoietin-1 (ANG-1)/Tie2, transforming growth factor (TGF-β)/TGF-R, and ephrinB2/EphB4, promoting endothelial quiescence and new vessel stabilization [12,22,23].
Intussusception seems to be an energetically and metabolically less demanding process than sprouting angiogenesis, owing to relatively low cell proliferation and the absence of extracellular matrix degradation [24]. Hemodynamic forces are a major contributing factor for inducing intussusceptive angiogenesis; additionally, VEGF-A plays an essential role in shear stress-based splitting of capillaries [24,25]. This process has been described in the chicken chorioallantoic membrane (CAM) model and also in skeletal muscle [24,25]. As shown in Fig. 1b, it is characterized by the establishment of intraluminal tissue pillars, leading to the splitting of vessels [25]. The formation of intraluminal pillars proceeds through a multistep event that initiates with the protrusion of the opposing EC membranes into the vascular lumen (Fig. 1b). Subsequently, fusion of the EC protrusions forms the intraluminal pillars, and EC junctions are reorganized so that a gap forms in the center of the pillar. Next, this pillar is occupied by supporting cells such as fibroblasts and pericytes, which produce the extracellular matrix. Consequently, these events split the vascular segment into two separate new vessels [24,25] (Fig. 1b).
Definition of EVs
Extracellular vesicles (EVs), heterogeneous spherical vesicles bounded by a phospholipid bilayer, are released from a variety of cell types. Exosomes, microvesicles, and large vesicles are the most recognized subclasses of EVs, which have gained attention due to their pivotal roles in diseases [26] (Fig. 2). EVs, as intercellular communication tools, contribute to regulating the function of neighboring cells by delivering many kinds of materials such as nucleic acids, proteins, lipids, and carbohydrates [26]. These vesicles are present in almost all biofluids, including blood plasma, breast milk, urine, bile, cerebrospinal fluid (CSF), bronchoalveolar lavage fluid, saliva, peritoneal fluid, and semen [7,27]. Established in 2011, the International Society for Extracellular Vesicles (ISEV) is a global scientific organization that focuses on the study of EVs, including microvesicles (MVs), exosomes, oncosomes, and other membrane-bound vesicles produced by cells. According to the guidelines of ISEV, the terms apoptotic bodies (ABs), MVs, and exosomes have traditionally been used for the cataloging of the three main EV subpopulations. This traditional classification is based on EV origin, size, and specific markers. ABs are the largest EVs (1000-6000 nm), originating from apoptotic cells (Fig. 2). MVs (shedding vesicles), released directly from the cell membrane under both physiological and pathological conditions, are 100-1000 nm in size with an irregular shape (Fig. 2). Exosomes, the smallest subpopulation of EVs (30-150 nm), originate from endosomal compartments inside cells [28]. In the next section, we describe exosome biogenesis and secretion.
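Since the traditional classification above rests partly on simple size ranges, the following minimal Python sketch encodes those ranges for illustration. The thresholds come from the text; the function name and the handling of overlapping and out-of-range values are our own assumptions, and a definitive call additionally requires origin and marker data.

def classify_ev_by_size(diameter_nm: float) -> str:
    """Rough EV subpopulation call from diameter alone, using the size
    ranges quoted in the text; the exosome and MV ranges overlap at
    100-150 nm, so size is only a first approximation."""
    if diameter_nm < 30:
        return "below the classical EV size ranges"
    if diameter_nm <= 150:
        return "exosome (100-150 nm also falls in the MV range)"
    if diameter_nm <= 1000:
        return "microvesicle"
    if diameter_nm <= 6000:
        return "apoptotic body"
    return "above the classical EV size ranges"

for d in (80, 120, 400, 2500):
    print(f"{d} nm -> {classify_ev_by_size(d)}")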
As shown in Fig. 2, three pathways have been suggested by which exosomes affect target cells: endocytosis, receptor/ligand interaction, and direct fusion with the plasma membrane of target cells [46,47]. Exosomes can reach target cells through different endocytosis pathways, including phagocytosis, pinocytosis, and receptor-mediated endocytosis [46,47]. Exosomes may also dock at the plasma membrane of the target cell and activate or inhibit intracellular signaling through ligand-receptor interaction. Direct fusion is another route, by which the exosomal membrane fuses directly with the target cell membrane and the exosome contents are discharged into the cytoplasm of the target cell. Understanding the detailed mechanisms behind each exosome delivery pathway is valuable for designing exosome-based therapies.
Exosome cargo
Exosomes contain different types of biological molecules transferred from source cells to target cells [48]. Analysis of exosome cargo has received much attention in the past decade because identifying the cargo improves our knowledge of the detailed mechanisms involved in exosome formation, loading, and key functions under different conditions, and further provides a new avenue to use exosomes as biomarkers and therapeutic approaches for the treatment of various diseases [49]. Several databases have been established to collect and present exosome cargo from different sources. For example, ExoCarta (http://www.exocarta.org) has catalogued about 563 proteins, 4764 miRNAs, 1639 mRNAs, and 194 lipids of exosomes from various organisms [50]. In addition, Vesiclepedia (http://www.microvesicles.org) has collected about 1254 EV-related studies and classified nearly 38,146 different RNA molecules, 349,988 proteins, and 639 lipids [51]. In 2019, a database (http://bioinfo.life.hust.edu.cn) was established to analyze small RNA sequencing data of different EVs from various sources [52]. Cell condition can affect exosome cargo, and alterations in cargo may serve as biomarkers of disease [53].
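As a practical illustration of how such repositories might be mined, the short pandas sketch below summarizes a hypothetical tab-delimited cargo export. The file name and the column names (CONTENT_TYPE, SPECIES) are assumptions made for illustration only; the actual export format of ExoCarta or Vesiclepedia should be checked before use.

import pandas as pd

# Hypothetical export file; the databases offer downloadable tables, but
# their real column names may differ from the ones assumed here.
cargo = pd.read_csv("exosome_cargo_export.tsv", sep="\t")

# Entries per cargo class (protein, mRNA, miRNA, lipid, ...)
print(cargo["CONTENT_TYPE"].value_counts())

# Distribution of entries across source organisms
print(cargo["SPECIES"].value_counts().head(10))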
Protein cargo
Proteome profiling of exosomes has characterized different types of proteins within exosomes and on the exosome membrane [54][55][56]. Different exosomes carry distinct cargo, reflecting the individual profile of their source cells; however, as mentioned above, exosomes from different sources contain common markers. Exosomes harbor membrane molecules including integrins and intercellular adhesion molecule 1 (ICAM-1); cytoskeleton components (annexins, actin, and tubulin); components of the endocytic pathway such as TSG101 and Alix; exosome-loading molecules such as CD63, CD81, and CD9; intracellular trafficking molecules such as various Rab and SNARE proteins; and many other signaling proteins [28,57]. As mentioned previously, several mechanisms are involved in the specific sorting of proteins into exosomes, such as ESCRT, tetraspanins, and lipid-based mechanisms. Additionally, exosomes contain common lipids, for example ceramides, phosphatidylethanolamine, phosphatidylserine, diacylglyceride, bisphosphatidic acid, sphingomyelin, and cholesterol [58].
Nucleic acid cargo
In addition to proteins, exosomes are also full of different types of RNAs that can be delivered into recipient cells. Using RNA sequencing analysis, Huang et al. confirmed that miRNAs were the most abundant RNA species in exosomes purified from human plasma, making up over 76.20% of all mappable reads and 42.32% of all raw reads [59]. Other RNA species such as long non-coding RNA (3.36%), ribosomal RNA (9.16%), transfer RNA (1.24%), piwi-interacting RNA (1.31%), small nuclear RNA (0.18%), and small nucleolar RNA (0.01%) have also been characterized in exosomes. Once miRNAs are loaded into exosomes, they can be transferred between cells, resulting in an intercellular trafficking network, which, in turn, induces functional and phenotypic changes in recipient cells [60]. Different kinds of miRNAs are present within various exosomes; for example, miRNAs such as miRNA-1, miRNA-21, miRNA-29a, miRNA-214, miRNA-320, and miRNA-126 mediate the regulation of angiogenesis, exocytosis, metastasis, hematopoiesis, and tumorigenesis [61]. In addition, a growing body of evidence has shown that long RNAs like long non-coding RNAs (lncRNAs) and circular RNAs (circRNAs) are loaded into exosomes and participate in a variety of biological processes and diseases such as cancer [62]. For instance, lncRNA TUC339 was the most highly expressed lncRNA in exosomes from human hepatocellular cancer and is involved in tumor cell growth and adhesion [63]. It seems that the exosomal RNA loading process is a regulated and complex mechanism [64]. Cells use distinct mechanisms for sorting specific nucleic acids into exosomes [65]. DNA strands have recently been revealed to be transferred by exosomes. Exosomes derived from the sera of patients with pheochromocytomas and paragangliomas, heritable endocrine tumors, contain DNA strands that could be used as biomarkers of these tumors [66]. Furthermore, the presence of DNA strands within exosomes has been suggested to be associated with processes such as cell senescence and inflammation [67]. As we know, DNA is mostly restricted to the nucleus and does not usually interact with the cytoplasmic exosomal secretory pathway [67]. Although micronuclei, as cytoplasmic structures, may participate in the loading of DNA strands into exosomes, the underlying mechanisms remain unclear [68].
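To make the composition figures above concrete, the sketch below shows how such percentages are derived from read counts in an exosomal RNA sequencing experiment. The read counts are invented for illustration; only the calculation pattern (species reads divided by total mappable reads) reflects standard practice.

# Hypothetical mapped-read counts per RNA species; only the arithmetic
# mirrors how the percentages quoted in the text would be computed.
mapped_reads = {
    "miRNA": 7_620_000,
    "rRNA": 916_000,
    "lncRNA": 336_000,
    "piRNA": 131_000,
    "tRNA": 124_000,
    "snRNA": 18_000,
    "snoRNA": 1_000,
}
total_mappable = 10_000_000  # hypothetical total of mappable reads

for species, reads in mapped_reads.items():
    pct = 100 * reads / total_mappable
    print(f"{species:>6}: {pct:5.2f}% of mappable reads")

With these invented counts the script reproduces the reported proportions (miRNA 76.20%, rRNA 9.16%, lncRNA 3.36%, and so on), simply because the counts were chosen to match.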
Tumor cell derived-exosomes and angiogenesis
Tumor derived-exosomes are essential factors for the formation of new vessels at the early stage of tumor progression (Fig. 3). Exosomes purified from primary human malignant mesothelioma (MM) can induce migration, vascular remodeling, and angiogenesis in an MM model [2]. Proteomic analysis showed that these exosomes contain oncogenic cargo, including molecules that induce cell migration and tube formation [44]. Murine multiple myeloma derived-exosomes have been shown to induce the formation of the metastatic niche in bone marrow and promote angiogenesis in vivo [69]. Exosomes derived from glioblastoma multiforme (GBM), a vascularized and aggressive type of brain cancer [70], are full of angiogenic proteins that promote angiogenesis in vitro and in vivo [71,72]. These exosomes support EC proliferation and tubulogenesis. Similar results have been reported by Skog et al., who found that exosomes from GBM cells contain mRNAs belonging to ontologies such as angiogenesis and increased the tubulogenesis of human brain ECs in vitro [73]. In other solid tumors, for example nasopharyngeal carcinoma, proteomic analysis showed that pro-angiogenic proteins were increased, while antiangiogenic proteins were decreased [74]. These exosomes induced the migration and tubulogenesis of ECs. Similarly, exosomes derived from different breast cancer cell lines and pancreatic carcinoma cells are angiogenic and induce angiogenesis [75][76][77]. Exosomes released from hypoxic lung cancer cells transfer miRNA-23a to ECs, promoting angiogenesis through targeting the tight junction protein ZO-1 and prolyl hydroxylase and increasing vascular permeability [78]. Besides solid tumors, exosomes from chronic myelogenous leukemia (CML) cells have been shown to promote angiogenesis via direct interaction with ECs [79,80].
Exosome uptake by ECs
Exosomes can interact with target cells such as ECs and immune cells to facilitate angiogenesis. Exosome-mediated reprogramming of recipient cells depends on the exosome uptake route. Exosomes can affect T cells via direct receptor-ligand interaction; in the case of ECs, however, exosomes use internalization pathways to exert their effects [71,80,81]. PKH26-labeled exosomes have been shown to deliver their cargo into the cytoplasm of ECs after 4 h of co-culture [80,82]. In ECs, the endocytosis pathway may be the main route for exosome uptake [83]. After entry, exosomes are directed to the perinuclear zone, traffic to the cell cortex, and enter actin filament-rich areas that form pseudopods during tubulogenesis [80]. Interestingly, exosomes at these points are found in clusters, and after cellular remodeling these exosomes may move to other neighboring cells via nanotubular structures detectable by confocal microscopy. These observations support the idea that ECs during tubulogenesis communicate with other ECs and other surrounding cells within the tubule network [80,82]. Even though internalization of tumor derived-exosomes by ECs is the most studied uptake route, other mechanisms may be involved, particularly receptor-ligand interaction [84].
Mechanisms involved in exosome-induced angiogenesis
Once internalized into recipient cells, exosome cargo can regulate the fate, function, and phenotype of recipient cells [71,85]. Exosomes docking on the cell surface may activate or inhibit signaling pathways in ECs through receptor-ligand interaction [86]. Therefore, exosomes can engage different signaling pathways of recipient cells to affect their function [87]. The exact signaling pathways behind exosome-driven angiogenesis are poorly known. The pivotal roles of the protein cargo of cancer derived-exosomes in cancer progression and angiogenesis have been documented [88,89]. The proteomic content, and even the angiogenic profile, of exosomes differs widely among tumor cell types (Table 1). However, these differences may arise from the bias of researchers in targeting proteins of interest. Analysis of exosomes from GBM cells showed that they are enriched with pro-angiogenic factors including VEGF, angiogenin, TGFβ, IL-8, IL-6, MMP2, MMP9, TIMP-1, TIMP-2, and the CXCR4 chemokine receptor [73,90,91]. Multiple myeloma derived-exosomes abundantly contain bFGF, VEGF, HGF, Serpin E1, and MMP-9 [69]. Exosomes from nasopharyngeal carcinoma cells contain high levels of pro-angiogenic proteins including CD44 isoform 5 (CD44v5), ICAM-1, and MMP13, while containing a low level of the antiangiogenic protein thrombospondin-1 [74,92]. You et al. found that these exosomes contain HAX-1 protein, which induces migration ability and angiogenesis in ECs [92]. Analysis of exosomes from colorectal carcinoma ascites showed that they carry angiogenic proteins such as Plexin B2 and tetraspanin-8 [93]. Melanoma-released exosomes bear VEGF, IL-6, and MMP2 [94]. Exosomes from lung adenocarcinoma are enriched with sortilin, which increases the expression of angiogenic genes including IL-8, VEGF, endothelin-1, thrombospondin-1, and uPA in ECs [95]. Breast cancer derived-exosomes transfer proangiogenic Annexin II to ECs and induce angiogenesis in a tPA-dependent manner in vitro and in vivo [76]. Beckham et al. reported that exosomes from bladder cancer patients contain EDIL-3 proteins that facilitate migration and angiogenesis [96]. Heparanase, an enzyme involved in exosome biogenesis and loading, is present in tumor cell derived-exosomes and contributes to the migration and tube formation of ECs [97,98]. Prostate cancer-released exosomes contain TGF-β1 proteins that mediate the differentiation of fibroblasts into myofibroblasts, promoting angiogenesis in vitro [99]. Exosomes produced by pancreatic adenocarcinoma have a high level of Tspan8, which promotes proliferation, migration, and sprouting in ECs. Furthermore, these exosomes mediate the maturation of endothelial progenitor cells [100]. Tumor derived-exosomes can induce epithelial-mesenchymal transition (EMT) in different cancer cells. EMT cells produce exosomes with angiogenic Rac-1 and PAK-2 proteins that induce angiogenesis in ECs [101]. In GBM cells, Zeng et al. showed that EMT cell derived-exosomes induced cell migration, invasion, and angiogenesis [96]. The nucleic acid content of tumor derived-exosomes also mediates angiogenesis in ECs upon exosome internalization. For example, colorectal cancer cells release exosomes that transfer proliferation-related mRNAs such as RAD21, CDK8, and ERH to ECs, increasing EC proliferation and subsequently supporting angiogenesis [102]. Lang et al. found that GBM-derived exosomes contain lncRNA POU3F3, which promotes angiogenesis in ECs [71].
In addition, another study demonstrated that these exosomes transfer lncRNA-CCAT2 to ECs, which subsequently inhibits apoptosis and enhances angiogenesis [71]. miRNAs enriched in exosomes can be delivered into the target cell cytoplasm, where they control the expression of different mRNAs and the function of the target cell [103]. Exosomal miRNAs have attracted attention for their key roles in enhancing the adverse effects of tumors [104]. For example, in human nasopharyngeal cancer, tumor-derived exosomes actively transfer miRNAs including miRNA-106a-5p, miRNA-891a, miRNA-24-3p, and miRNA-20a-5p, which promote cell proliferation and survival through suppression of the MARK1 signaling pathway [105]. The miRNA cargo of tumor-derived exosomes is also involved in angiogenesis through regulating EC function and morphology [106,107]. Exosomes protect miRNAs from enzymatic degradation, thus increasing the stability of exosomal miRNAs compared to freely circulating ones. A list of angiogenic exosomal miRNAs is presented in Table 2. Lung cancer cells release exosomes enriched with miRNA-21, an oncogenic and angiogenic molecule that enhances the expression and secretion of VEGF, inducing angiogenesis in ECs [108]. Other miRNAs, such as miRNA-23a and miRNA-210, are present in exosomes of lung cancer cells and are implicated in inducing angiogenesis in ECs [78,109]. In contrast, exosomal miRNA-192 has been shown to inhibit angiogenesis [110]. Umezu et al. demonstrated that hypoxia-resistant multiple myeloma (HR-MM) cells release exosomes containing miR-135b, which enhances angiogenesis in ECs through targeting HIF-1 [111]. Lung cancer cells secrete exosomes enriched with miR-23a, which facilitates angiogenesis by targeting the tight junction protein ZO-1 and prolyl hydroxylase [78]. Mao et al. found that hypoxia increased miR-494 loading into exosomes of non-small cell lung cancer (NSCLC) through an HIF-1α-mediated mechanism. In keeping with this, they showed that these exosomes down-regulated PTEN and activated the Akt/eNOS pathway in ECs, consequently promoting angiogenesis [112]. In addition, the miR-210 cargo of exosomes purified from leukemia cells induced tubulogenesis in human endothelial cells [113]. The possible mechanisms by which tumor-derived exosome cargoes promote angiogenesis are presented in Fig. 3b. The biomarker potential of exosomal miRNAs has frequently been reviewed in the literature [114,115]. As miRNA-bearing exosomes are distributed into biofluids, liquid biopsy of urine, plasma, and CSF is a non-invasive method for obtaining accurate information about tumor environment and status [116]. For example, it was demonstrated that miRNAs such as miRNA-205, miRNA-214, miRNA-141, miRNA-203, miRNA-200a, -b, -c, and miRNA-21 are present in exosomes isolated from patients suffering from ovarian tumors and could serve as biomarkers [117].
Exosomes from hypoxic cells
Hypoxia plays a critical role in inducing tumor angiogenesis. In tumors with high growth and metabolic rates, oxygen deficiency, and thus hypoxia, contributes to inducing angiogenesis via hypoxia-inducible transcription factors [118]. Under hypoxic conditions, cancer cells release more exosomes, which display proangiogenic properties [9]. In addition, researchers have reported that exosomes from hypoxic tumor cells are enriched with exosomal markers such as CD81, CD63, and HSP-70 [78,113,119]. Oxygen status is another factor that affects the composition of exosomes: hypoxic cells release exosomes that differ from those derived from normoxic cells. For example, exosomes from hypoxic GBM cells contain higher levels of IGFBP3, IGFBP5, and LOXL2 than those from the same cells under normoxic conditions [120]. In support of this, Kucharzewska and co-workers showed that these exosomes significantly promoted the proliferation, migration, and tubulogenesis of recipient ECs compared to exosomes from normoxic cells [72]. They confirmed that hypoxic exosomes have high mRNA and protein levels of IL-8 and IGFBP3, which induce the proliferation and migration of pericytes, angiogenic cells, in vitro [72]. However, some researchers indicate that exosomes from both hypoxic and normoxic cells have the same physical features and deliver their cargo to ECs in the same way [111,113]. Similarly, under hypoxic culture conditions, exosomes from lung cancer and leukemia cells increased EC permeability and angiogenesis [78,113]. Hypoxia also alters the miRNA cargo of exosomes from different cancer cells, including lung cancer cells and MM cells [78]. Huang et al. found that exosomes released from hypoxic colorectal tumor cells induced angiogenesis via the HIF-1/Wnt4/β-catenin signaling pathway in ECs [111]. These facts show that hypoxia, which is frequently observed in the tumor environment, plays a key role in tumor angiogenesis and hence in tumorigenesis; designing new treatment strategies against hypoxia-induced angiogenesis therefore seems a promising approach.
Targeting exosome-induced angiogenesis
Understanding the molecular mechanisms behind exosome-induced angiogenesis is the key to developing new approaches for cancer therapy. It seems likely that new therapeutic strategies that target exosome biogenesis and/or exosome-induced angiogenesis can reduce tumorigenesis. Most types of tumors are vascularized and produce exosomes; thus, it would be a major advance to discover the underlying mechanisms involved in angiogenesis driven by tumor-derived exosomes, to identify ways of inhibiting this angiogenesis, and thereby to improve the outcome of current therapies. Corrado et al. reported that carboxyamidotriazole orotate (CTO) decreased the angiogenic ability of imatinib-resistant CML cells [121]. CTO targets the expression of IL-8 and the cellular adhesion of ECs, both of which are promoted by tumor exosomes. Thus, CTO inhibits the action of these exosomes on the EC-CML interaction and on EC migration, suppressing exosome-induced angiogenesis [121]. Treatment of CML cells with curcumin altered the exosome cargo: curcumin increased the sorting of miRNA-21 and antiangiogenic proteins into exosomes but decreased the sorting of proangiogenic proteins. Consequently, ECs lost their function upon uptake of these exosomes in vitro [122]. In support of this, docosahexaenoic acid (DHA), which is used as an adjuvant to breast cancer therapy, has been shown to target exosome loading and biogenesis. DHA treatment increased the levels of miRNA-23b, miRNA-320b, and miRNA-27b in exosomes; these exosomes were internalized by ECs and suppressed tubulogenesis without affecting VEGF expression [123]. Pharmacological inhibitors capable of targeting exosome biogenesis, loading, and secretion may be potential agents for inhibiting angiogenesis [124]. For example, GW4869 and manumycin A have been reported to block exosome formation from MVBs, while other compounds such as calpeptin, Y27632, and imipramine can inhibit MV formation [124]. Datta and colleagues showed that manumycin A inhibited exosome biogenesis in prostate cancer cell lines (C4-2B, PC3 and 22Rv1) [125]. Indeed, manumycin A inhibits the activity of Ras, a small GTPase involved in exosome biogenesis [126]. It has also been reported that chemicals, compounds, and peptides can inhibit EV uptake by target cells, which may suppress EV function in those cells [47]. However, despite progress in EV biology, the detailed mechanisms behind EV generation and function are still elusive. Furthermore, the main concern is that these compounds do not specifically target tumor cells and may exert side effects on other cells, suppressing beneficial EVs. Important efforts are still necessary to examine their impact on EV secretion from normal cells, and approaches to distribute them preferentially to tumor cells may be vital. Certainly, drugs previously approved for human use for other indications might have a more straightforward path to utility than compounds that have never been established as therapeutics.
Cancer stem cell derived-exosomes and tumor angiogenesis
Cancer stem cells (CSCs), a small subpopulation of self-renewing cells within tumors, give rise to the heterogeneous tumor cell populations that make up a tumor. CSCs produce angiogenic exosomes containing stem cell markers including CD44, CD133, CD90, and CD105 [127,128]. Grange et al. found that exosomes from human renal cancer stem cells express the CD105 marker, induce angiogenesis, and facilitate the formation of the metastatic niche [127]. Conigliaro et al. reported that CD90-positive exosomes from liver cancer cells increased tube formation and cellular adhesion in ECs; further scrutiny showed that these exosomes significantly up-regulated the expression of VEGF and its receptor [82]. Besides, the miRNA cargo of exosomes from human prostate cancer stem cells differs from that of bulk cells. These findings indicate that exosomes from CSCs carry distinct cargo, and CSCs therefore represent a promising target for therapies.
Mesenchymal stem cells derived-exosomes and tumor angiogenesis
Mesenchymal stem cells (MSCs), self-renewing cells, can differentiate into various lineages such as osteoblasts, fibroblasts, adipoblasts, chondroblasts, pericytes, and even other cell types [129]. They are commonly used as a source for cell therapy owing to their profound regenerative capability and immunosuppressive effects [130]. One of the fundamental mechanisms of MSC usefulness appears to arise from their paracrine activity, and exosomes from MSCs orchestrate the main mechanisms of action of MSCs after transplantation into target sites [131]. The role of MSC-exosomes in tumor proliferation, invasion, and angiogenesis is still controversial: some laboratories reported that these exosomes support tumorigenesis, whereas others found that they suppress it [132]; thus, MSC-exosomes exert a dual effect on tumor angiogenesis. Regarding antiangiogenic effects, it was demonstrated that exosomes from mouse bone marrow (BM) MSCs inhibited angiogenesis in breast cancer cells via suppressing VEGF expression [133]. The authors reported that miRNA-16 within these exosomes down-regulates VEGF. Pakravan et al. demonstrated that exosomes from BM-MSCs contain miRNA-100, which decreased VEGF expression in breast cancer cells and suppressed angiogenesis in vitro by modulating HIF-1α/mTOR signaling [134]. miRNA-100 is an antitumor miRNA that is down-regulated in different cancer cells; thus, exosomal transfer of miRNA-100 compensates for the low level of miRNA-100 in tumor cells and helps inhibit tumorigenesis. Further, the authors found that conditioned media from MSC exosome-treated breast cancer cells inhibited the migration and proliferation of ECs [134]. Recently, Rosenberger et al. found that exosomes from menstrual MSCs had the potential to inhibit angiogenesis in ECs by increasing apoptosis and inhibiting VEGF secretion [135]. Similarly, exosomes from menstrual MSCs blocked angiogenesis in prostate PC3 tumor cells via inhibition of VEGF secretion and NF-κB activity and via the production of reactive oxygen species (ROS) [136].
In contrast, other studies found that exosomes from MSCs increased tumor growth and angiogenesis. For example, human BM-MSC exosomes have been shown to increase angiogenic molecules in gastric tumors in vivo: co-implantation of SGC-7901 cells with MSC exosomes up-regulated the transcript levels of α-SMA, VEGF, MDM2, and CXCR4, and the protein levels of VEGF, Bcl-2, phosphorylated ERK1/2, and CXCR4 [137].
Mesenchymal stem cells derived-exosomes as drug-carriers
Anticancer drugs have several disadvantages, including side effects on normal tissues, poor solubility, short half-life, and limited passage through physiological barriers; hence, nanocarriers have been developed to overcome these limitations [138,139]. Recently, nanocarriers such as liposomes, ligand-conjugated nanoparticles, and magnetic nanoparticles have been examined for delivering therapeutic agents to cancer cells. However, these nanoparticles sometimes have limitations due to their synthetic structures and off-target effects on tissues [140][141][142]. Compared to synthetic carriers, EVs are safe, cell-derived, natural carriers that show a long half-life and non-immunogenic properties for drug delivery systems. They contain various proteins and nucleic acids, which can be modified by available techniques [143]. Besides, based on their tissue/cell of origin, EVs can home to their sites of origin, which makes them ideal and specific carriers for targeting tumor cells [144]. Overall, two strategies are used to design exosome-based nanocarriers from cells and exosomes: (I) direct engineering and (II) indirect engineering.
In direct engineering, exosomes purified from the cells of choice are directly loaded with exogenous therapeutic agents such as synthetic compounds, drugs, and biomolecules. In indirect engineering, source cells (MSCs/tumor cells) are genetically modified to produce the desired exosomes or are incubated with therapeutic drugs to load the drugs into exosomes [145]. These exosomes then serve as carriers for therapeutic agents and are known as exosome-based nanocarriers, which are capable of delivering drugs to target cells. In this regard, providing a high yield of exosomes that are at the same time safe and non-immunogenic is the hallmark of exosome-based nanocarriers for delivering therapeutic agents. Among cell types, MSCs abundantly produce non-immunogenic, beneficial, and safe EVs [146]. In addition, MSC-EVs do not show limitations such as malignant transformation, genetic variability, rejection, and cytotoxicity [147]. MSC-EVs play key roles in improving cardiovascular disease, liver disease, acute kidney injury, lung disease, and cutaneous wound healing [145,148]. These facts support the idea that exosomes from MSCs may serve as beneficial vehicles for cancer therapies.
Clinical trials
Consistent with recent developments in the molecular mechanisms of tumor angiogenesis, clinical trials are becoming more common. By 10 April 2020, the National Institutes of Health registry (Clinicaltrials.gov) recorded 90 clinical trials related to tumor angiogenesis in different cancers. A search through the records showed that the largest share of clinical trials involves different solid tumors (15.55%) (Fig. 4). In addition, 13.33% and 12.22% of the clinical trials concern breast and lung cancer, respectively. All of these studies support the view that targeting angiogenesis is a useful tool for cancer treatment in the clinical setting.
Perspective
Cancer growth and metastasis depend on angiogenesis, which is structured by a complex interaction between cells, molecular pathways, and soluble factors such as exosomes. The essential role of cancer cell-derived exosomes in angiogenesis has recently been described. These exosomes were found to be effective inducers of angiogenesis in vitro and in vivo through functional reprogramming and phenotypic modulation of ECs and other cells resident in the tumor microenvironment [71,72]. Tumor cell-derived exosomes contain pro-angiogenic signaling molecules such as proteins and RNAs. The formation of new vessels, which arises at an early step of tumor growth, has been linked to the levels of exosomes released by tumor cells. Cancer cells produce exosomes abundantly, and the plasma of patients with cancer is enriched in tumor cell-derived exosomes. As the molecular content (such as miRNAs and proteins) of these exosomes recapitulates the content of the parent cell, they appear to be possible non-invasive biomarkers of tumor progression and tumor angiogenesis [149] (Fig. 5a). This is promising for the design of a liquid biopsy model, which would allow the tumor angiogenic profile to be measured in real time and repeatedly. Applying these exosomes as biomarkers in the follow-up of cancer progression, or of responses to antiangiogenic therapies, could significantly improve patient management as well as drug selection. They could be developed as a tool for patient-specific diagnosis and serve as a basis for personalized anti-angiogenic therapy. Thus, tumor cell-derived exosomes may serve as future biomarkers of cancer diagnosis, staging, response to therapy and prognosis. Furthermore, upcoming efforts should focus on silencing or eliminating exosomes that selectively encourage malignant, but not benign, angiogenesis, thus adding novel treatment opportunities to current anti-angiogenic therapies [124] (Fig. 5b). An interesting approach has been suggested by Marleau and colleagues, based on the effective elimination of circulatory exosomes by extracorporeal hemofiltration combined with affinity agents such as exosome-trapping antibodies and lectins; this platform was proposed to capture and trap particles < 200 nm from the whole circulatory system [150]. However, the platform also traps non-tumoral exosomes, which have normal physiological roles. Recent discoveries have revealed that it is possible to inhibit the biogenesis and release of exosomes from different cells [151]. Some researchers have investigated exosome inhibitors as research tools for exploring exosome kinetics, while others have assessed the inhibitory potential of such compounds in various disease models, including cancer [124,151]. Most of these experiments were preclinical; therefore, clinical trials are essential for validation and confirmation. However, a main concern remains the non-targeted effects of exosome inhibitors (drugs/compounds) on exosome biogenesis in healthy cells. For cancer, for example, it seems likely that substantial efforts would still be needed to study the effects of such inhibitors on exosome release from both healthy and tumor cells, as well as to design methods to selectively deliver inhibitors to tumor cells. Exosome therapy may be a promising tool for tumor inhibition, and selecting a proper source cell from which to obtain exosomes for suppressing angiogenesis is a gold standard. Despite the promising function of MSC-derived exosomes in regenerative medicine [152], as mentioned above, MSC-exosomes exhibit both pro- and anti-angiogenic properties; therefore, the exact effect of these exosomes on tumor angiogenesis remains elusive. Another interesting approach by which exosomes can be used as a therapeutic agent is their drug-delivery potential [153][154][155] (Fig. 5c). Exosomes can serve as exosome-based nanocarriers that deliver therapeutic agents to target cells. Exosomes from a safe source such as MSCs may be loaded with anticancer/antiangiogenic compounds or genetically engineered for targeting tumor cells, suggesting exosome-based nanocarriers for the treatment of cancers in drug-delivery systems. The advent of safe nanocarriers with high efficiency is the core goal of nanomedicine; thus, the development of exosome-based nanocarriers has opened a hopeful opportunity for the delivery of therapeutic agents. However, the majority of studies have been performed in vitro or in animal models; therefore, the safety, specificity, and proficiency of this method in clinical trials remain largely unknown. Our knowledge of EV/exosome biogenesis, loading, and function is still limited; therefore, to implement many of the ideas mentioned above, further studies of EVs (especially exosomes) from tumor cells are required.
Fig. 5 (legend fragment): Exosomes can be used as a drug delivery system. In this regard, source cells may be co-cultured with a drug to obtain drug-containing exosomes, or source cells may be genetically engineered to produce artificial exosomes; in addition, drugs may be incubated with isolated exosomes to load the drugs into exosomes (c). Until now, exosome-based drug delivery systems have been examined in vivo and in vitro.
Conclusion
Tumor cell-derived exosomes have been shown to play a pivotal role in tumor angiogenesis, promoting tumor growth and metastasis. These exosomes contain various types of angiogenesis-related nucleic acids and proteins that trigger functional and phenotypic changes in ECs and support vessel formation and growth. Tumor-derived exosomes may serve as potential biomarkers for the diagnosis of cancer. In addition, suppression of exosome biogenesis in tumor cells may prevent tumor angiogenesis. However, future studies on this topic are required to elucidate the biological role of tumor cell-derived exosomes in tumor angiogenesis and to determine the clinical applicability of targeting these exosomes to prevent angiogenesis. Besides, the key role of exosomes from MSCs in tumor angiogenesis is still controversial. Owing to their favorable features, MSC-derived exosomes may be useful as exosome-based nanocarriers for drug delivery. This is a vital issue for future research in exosome-based cancer therapy. | 2020-06-22T13:53:41.514Z | 2020-06-22T00:00:00.000 | {
"year": 2020,
"sha1": "3f0d1ecb076e55c9cd8f45afec5aa84b895fa2c3",
"oa_license": "CCBY",
"oa_url": "https://translational-medicine.biomedcentral.com/track/pdf/10.1186/s12967-020-02426-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f0d1ecb076e55c9cd8f45afec5aa84b895fa2c3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
234525729 | pes2o/s2orc | v3-fos-license | Increasing the efficiency of urban public transport (UPT) services through the use of multimodal transport technologies
The present paper investigates the problem of providing integrated transport services to passengers traveling from the suburbs to the city center. Options for the organization of the route "From the resort of healing waters - Tashkent railway station" are studied. By using multimodal transportation technology, and by taking into account demands such as the timetable, comfort, and the price of passenger transportation, the possibility of independent mobility to different destinations is organized. The article proposes a set of measures for improving public transport services on routes within the suburbs or from the suburbs to the city center.
Introduction
The uninterrupted operation of the transport system is an important factor in the sustainable socio-economic development of the city and the suburbs in general. Passenger transportation is the main task of urban public transport (UPT), which determines the living standards of the urban population and an unhindered environment for the development of society. As stated by the researcher Safronov [1], "... the city's transport network is part of the life support system of this area and has infrastructural significance". One of the ways to improve the quality of transport services to the population is to create modern conveniences by reducing the total time and cost of long-distance trips through the integrated use of all available modes of transport and the creation of an integrated transport system [2,3]. The expansion of urban areas and the growth of their influence on employment, education and cultural recreation are leading to an increase in intercity passenger numbers. Therefore, there is a need to integrate the transport system which serves passengers in the suburbs and areas close to the city into the UPT system; by this, it will be possible to raise the efficiency of urban public transport. It should be noted that passengers are often transported from the suburbs by long-distance bus, minibus and car to the city center, or vice versa to final destinations located around the city. This aggravates problems such as the growth of traffic jams in the city center, air pollution and increased CO2 emissions, and the lack of parking spaces [4,5,6]. In large agglomerations, the organization of regular passenger traffic on suburban roads is an important tool in combating congestion, which helps to reduce inefficient spending on the development of private transport infrastructure and increases the population's productive time [7]. 4.75 million passengers move in and around Tashkent per day. 1.45 million (30%) of them use UPT, while the remaining 3.3 million passengers use the services of illegal taxis (there are about 10-20 thousand illegal taxis) [8]. In Tashkent, cars are a socially important mode of transport, but due to the rise in the number of cars in the city center, traffic jams are also increasing. Traffic congestion can be alleviated through the establishment of modern road infrastructure. However, the amount of toxic gases emitted by cars in the city center equals 411.59 thousand tons per year, of which 383.08 thousand tons are exhaust gases. This figure has increased by 37% in the last 10 years, negatively affecting the urban ecosystem [8]. This situation can be resolved by reducing the number of cars in the city and attracting the population to the UPT system. The efficiency of "Toshshahartransportxizmat", the largest passenger transport company in Tashkent, is declining due to insufficient government subsidies and inefficient use of modern equipment and technologies. By the end of 2019, a total of 155 regular passenger routes were assigned to the company, and 1249 buses of various capacities, distributed across 8 depots, served passengers. The coefficient of technical readiness is 0.76, while the average age of vehicles in the bus depots is 6.1 years (the average norm is 5 years). 19.2 per cent of the company's vehicles returned to the depots from their routes due to technical failure.
Due to these negative factors, the coefficient of use of stations in the company is 0.72, which leads to a decrease in the quality of transportation [9]. It is possible to increase the flow of passengers by introducing new equipment and technologies. To do this, there is a need to ensure the competitiveness of the UPT, taking into account the changing needs of passengers. The decrease in the share of buses in the market of UPT services is primarily due to the insufficient quality of passenger services and the absence of a single transport system coordinated with other private carriers (direct taxis) or other modes of transport (metro, railway).
Methods
Assessing the advantages and disadvantages of different modes of transport, we consider the issue of providing the urban area with UPT. This can be achieved by studying the existing transport infrastructure in the urban area, determining which mode of transport is preferable to use, and developing a strategy that prioritizes it. We consider all possible options for traveling from the city center around the city using different types of transport routes (Figure 1).
Figure 1. Types of transport and connections available in and around Tashkent
In the formation and implementation of any system of multimodal transport technologies in long-distance and suburban passenger communications, it is necessary to take into account the basic principles of its organization. Firstly, the applied innovative technologies should be aimed at increasing the efficiency of transportation, fully meeting the transport needs of passengers, reducing the total travel time and providing quality service. Secondly, they should cover the economic, social, organizational and structural interests of all carriers [10]. This can be achieved by integrating (connecting) the subway or rail transport (electric trains) into a single network, with directional taxis, cars (taxis) and buses covering the remaining areas where other carriers are available in certain limited districts of the city. In this case, the coordination of the work of all carriers on the basis of multimodal transport technology is highly effective. In a scheme integrating all modes of transport, passengers will be able to choose the mode of transport to their destination, which will increase the popularity of UPT and create the basis for efficiency. In passenger transport, multimodal transportation technology means using multiple modes of transport to travel to a destination with one ticket, under the responsibility of one carrier. This technology allows passengers to take advantage of all modes of transport and to organize a trip from one destination to another depending on their needs and capabilities. A passenger can choose one of the following criteria: minimum travel time; the cheapest price; convenient and comfortable transport conditions; or the optimal ratio of these indicators.
As an example, we took the "Healing waters resort", 10 km from Tashkent, as point A, and "Tashkent Railway Station", located in the center of Tashkent, as point B. The scheme of multimodal transportation technology for travel from the suburb to the city center using various modes of transport is as follows (Figure 2). Levels of comfort are expressed in conventional points: 1 - bad, 2 - low, 3 - satisfactory, 4 - medium, 5 - high.
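To make the comparison of criteria concrete, the following Python sketch scores the itinerary options using the figures quoted in this section (times in minutes, costs in sum, comfort on the 1-5 point scale). The option labels and the equal-weight "optimal ratio" score are our own illustrative choices, not part of the paper's method.

```python
# Route options with the totals reported in this section.
options = {
    "price (bus + metro + bus)":      {"time": 90, "cost": 3000,  "comfort": 3},
    "convenience (car park + metro)": {"time": 65, "cost": 13500, "comfort": 4},
    "car only":                       {"time": 60, "cost": 23000, "comfort": 3},
    "time (taxi + metro)":            {"time": 55, "cost": 3500,  "comfort": 4},
    "balanced (metro + bicycle)":     {"time": 50, "cost": 3500,  "comfort": 5},
}

def best(criterion):
    if criterion == "price":
        return min(options, key=lambda o: options[o]["cost"])
    if criterion == "time":
        return min(options, key=lambda o: options[o]["time"])
    if criterion == "comfort":
        return max(options, key=lambda o: options[o]["comfort"])
    # "optimal ratio of indicators": equal-weight score on normalized values.
    t_max = max(o["time"] for o in options.values())
    c_max = max(o["cost"] for o in options.values())
    return max(options, key=lambda o: (1 - options[o]["time"] / t_max)
                                      + (1 - options[o]["cost"] / c_max)
                                      + options[o]["comfort"] / 5)

for crit in ("price", "time", "comfort", "ratio"):
    print(crit, "->", best(crit))
```

With these inputs, the balanced score selects the metro + bicycle itinerary, matching the conclusion drawn below; in practice the weights would be chosen per passenger.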
Results and Discussions
By analyzing the obtained tables, we are able to choose the transportation options that meet the different requirements of passengers. a) As the best option for the "Price" criterion, passengers are advised to choose the following route (Figure 3). For the price criterion: travel time 5+35+5+45+5, 90 minutes in total; cost 1500+1500 sum; average price of convenience 3000 sum; average comfort level 3 points. b) It is recommended to choose the following direction as the best option for the "Convenience" criterion (Figure 4).
Figure 4. Constructing an optimal route according to the "Convenience" criterion
The cost is as follows: 12000+1500, 13500 sum in total. The average price of convenience is 13500 sum in total. Travel time: 35+5+25, 65 minutes in total. Average comfort level 4 points. This option promotes the use of UPT for commuters traveling around the city, making it easier for them to park their cars outside the city. In this case, the car alone may be more convenient, but it will cost the passenger a total of 60 minutes and 23,000 sum, with an average comfort level of 3 points. c) The best option for the "travel time" criterion is a taxi from this direction to the subway and then the subway to the destination (Figure 5). It is also possible to use the subway after a car to reduce travel time, but this method can be inconvenient because walking to the subway after leaving the car in the parking lot takes time. The travel time for this method is 25+5+25, 55 minutes in total. The cost of this trip is 2000+1500, 3500 sum in total. Average comfort level 4 points. d) The cost of travel is very important to passengers, but observations show that choosing a cheaper route leads to passenger fatigue due to the low level of convenience, while the most convenient route is very expensive. Therefore, a balanced trip can be created by following the "door to door" principle of multimodal transport technology (Figure 6).
Figure 6. Scheme of construction of a balanced route
As can be seen from Figure 6, the efficiency of using scheduled UPT modes of transport as the main link for passengers in combined itineraries is high in all respects. The main advantages of the overground and underground metro allow it to be recognized as the principal connecting link of multimodal transport: regularity, a high level of security, low prices and convenience. An important disadvantage of subway transportation to the center is the lack of capacity for door-to-door transportation [11]. At the same time, in order to give priority to the use of UPT, it is necessary to involve bicycle transport as a mode within the multimodal transport technology, with the establishment of bicycle rental points at all UPT transport interchanges. It is also advised that the city administration designate the city center as a special "green zone" and take measures to make car entry there paid or restricted. There is a need to establish bicycle rental offices in front of all institutions, organizations, museums and theaters in the area, as well as in front of dedicated UPT stations. At the same time, cycling infrastructure will be created throughout the area. Specially equipped bicycle lanes forming a "cultural route" aim to attract local and foreign tourists to the city. In this case, the trips we recommend will allow passengers to travel "door to door" by the multimodal method. Here, the total cost for the multimodal route is: 2000 + 1500 = 3500 sum. Passengers within the green zone will be provided with a free bicycle service for up to 15 minutes if they present a UPT ticket; the special bikes must be handed over to a rental office near the destination within 15 minutes. The average cost of convenience is 3,500 sum. Travel time: 50 minutes. Average comfort level 5 points. To speed up the door-to-door transportation of passengers, it is possible to use cars. However, their number is growing disproportionately compared to the infrastructure being built for cars in the city center. The rise in the number of cars in the city leads to traffic jams, a lack of parking spaces, excessive noise, and inconveniences that negatively affect the environment. In this situation, the development of cycling infrastructure can establish the bicycle as an alternative to light road transport in the city center, enabling passengers to travel "door to door". Enriching multimodal transport technologies in the UPT with bicycle transport infrastructure will provide a high level of convenience and speed of travel within the framework of this innovative project.
Conclusions
Thus, by using multimodal systems, passengers will be able to independently select the mode of transport by weighing the price of the desired route, the time spent on it, the convenience provided and other features. The future of cities depends on the living standards and culture of the urban population and on their positive acceptance of, and adaptation to, innovative techniques and technologies. This adaptation will determine the future state of the urban environment and the level of development of passenger transport links in all their types and forms. | 2020-12-24T09:12:30.053Z | 2020-12-18T00:00:00.000 | {
"year": 2020,
"sha1": "89acd76a39c6988a7e7ab338a4173e998cf99237",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/614/1/012091",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fe6316174d78a2194db8d873525fcd05106e77d0",
"s2fieldsofstudy": [
"Engineering",
"Business"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
201177848 | pes2o/s2orc | v3-fos-license | Handedness, language areas and neuropsychiatric diseases: insights from brain imaging and genetics
The brain signature and genetic basis of handedness are unclear. Wiberg et al. show that left-handers have higher functional connectivity between language networks, and identify four genomic regions associated with handedness. Variants within these regions appear, by influencing brain architecture, to predispose both to left-handedness and to certain neuropsychiatric diseases.
Introduction
One of the most remarkable features of human motor control is that ∼90% of the population has had a preference for using the right hand over the left since at least the Paleolithic period (Faurie and Raymond, 2004), and this skew in the distribution of handedness is a uniquely human trait. It is widely believed that the lateralization of language in the left hemisphere accounts for the evolution of right-handedness in the majority of humans (Corballis, 2003). There are well-established associations between left-handedness and several neurodevelopmental disorders (Brandler and Paracchini, 2014); in particular, a meta-analysis of 50 studies concluded that non-right-handedness was significantly more common in participants with schizophrenia [odds ratio (OR) = 1.55, 95% confidence interval (CI) 1.25-1.93] (Hirnstein and Hugdahl, 2014).
Neuroanatomical studies of human handedness have been equivocal, most likely owing to small- to medium-sized study populations (Hatta, 2007; Guadalupe et al., 2014). While studies dedicated to one specific cortical feature, such as the shape and depth of the central sulcus (Amunts et al., 1996; Sun et al., 2012) or the gyrification pattern of Heschl's gyrus (Marie et al., 2015), have shown differences in left-handers, no significant cortical area correlates of handedness were found in the largest study sample so far (106 left-handed subjects, 1960 right-handed subjects) (Guadalupe et al., 2014). Functional imaging of the motor cortex has largely been inconclusive (Hatta, 2007). Conversely, differences in the lateralization pattern of language function have been consistently observed, with left-handers showing more bilateral or right-hemispheric language activation (Tzourio et al., 1998; Pujol et al., 1999; Knecht, 2002; Joliot et al., 2016).
Another unresolved issue is whether such a population bias in handedness is under genetic influence. While left-handedness runs in families (Medland et al., 2009), and the concordance of handedness is greater in monozygotic twins than in dizygotic twins, with an estimated heritability of 25% (Medland et al., 2006), significantly associated loci for human handedness in the general population have thus far remained elusive (Eriksson et al., 2010).
UK Biobank is a prospective cohort study of ∼500,000 participants who have allowed linkage of their physical data, including genetics, with their medical records, lifestyle questionnaires, and cognitive measures. An imaging extension includes six distinct modalities covering structural, diffusion and functional imaging of the brain, with an automatic pipeline generating thousands of image-derived phenotypes (IDPs), which are distinct individual measures that can be used for correlation with other phenotypes, or for genetic analysis (Miller et al., 2016; Alfaro-Almagro et al., 2018; Elliott et al., 2018).
Using imaging, genotype and handedness data from UK Biobank, we aimed to discover correlations between: (i) handedness phenotype and IDPs; (ii) genotype and handedness; and (iii) handedness-related genotypes and IDPs. Supplementary Fig. 1 summarizes the key findings from the three arms of this study.
Imaging-handedness analysis
All UK Biobank imaging data were processed following pipelines designed to create a set of IDPs that summarizes the information across all brain structural and functional modalities (Miller et al., 2016; Alfaro-Almagro et al., 2018). These pipelines were developed mostly using FSL tools (Jenkinson et al., 2012), using well-known, validated and robust approaches for each set of IDPs: FSL-VBM (voxel-based morphometry) (Good et al., 2001; Douaud et al., 2007) and FreeSurfer (Fischl et al., 1999) for regional grey matter volumetric, thickness and area measures, tract-based spatial statistics (TBSS) (Smith et al., 2006) and Autoptx (De Groot et al., 2013) for regional diffusion measures, and FSLnets for functional connectivity (see the list of URLs provided in the Supplementary material). The description of this recently expanded set of 3144 IDPs has been published (Elliott et al., 2018), including full details of their estimated heritability, which is summarized in Supplementary Table 1. Briefly, these comprised mainly regional volumetric, area and thickness measures; subcortical measures from MRI modalities sensitive to, e.g., venous vasculature or microbleeds and white matter lesions; white matter tract measures of physical connection ('structural connectivity') between brain regions using diffusion indices; and measures of spontaneous temporal synchronization ('functional connectivity') between pairs of brain regions. IDPs were quantile normalized to ensure normality, and confounds, including age, sex, the interaction between age and sex, head size, as well as various variables related to the MRI acquisition protocol, were included in the model. We tested the effects of self-reported handedness directly in UK Biobank (Data Field 1707), and results were Bonferroni-corrected for multiple comparisons across all 3144 IDPs. This analysis was performed on the subset of imaged UK Biobank participants that had been preprocessed using the pipelines mentioned above (second release: ∼9,000), by directly contrasting 721 left-handers with 6685 right-handers (all analyses excluded ambidextrous subjects).
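As an illustration of this preprocessing step, the sketch below shows one plausible implementation of rank-based quantile normalization followed by confound regression. The exact normalization and confound model used by the UK Biobank pipeline are specified in Alfaro-Almagro et al. (2018), so treat this only as a schematic with synthetic data.

```python
import numpy as np
from scipy.stats import rankdata, norm

def quantile_normalize(x):
    """Rank-based inverse normal transform of one IDP across subjects."""
    return norm.ppf(rankdata(x) / (len(x) + 1.0))

def deconfound(y, confounds):
    """Regress confounds (age, sex, age*sex, head size, ...) out of an IDP."""
    X = np.column_stack([np.ones(len(y)), confounds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Synthetic example: 7406 subjects (721 + 6685), 4 confound columns.
rng = np.random.default_rng(0)
idp = rng.gamma(2.0, size=7406)          # a skewed raw IDP
confs = rng.normal(size=(7406, 4))
clean_idp = deconfound(quantile_normalize(idp), confs)
```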
Genotype-handedness analysis
After performing quality control of the UK Biobank genotype data (including restricting samples to individuals of white British ancestry), we undertook three genome-wide association analyses across 547,011 genotyped single nucleotide polymorphisms (SNPs) and ∼11 million imputed SNPs, with genetic sex and genotyping platform used as covariates.
Enrichment and correlation analyses with clinical phenotypes of handedness-associated SNPs
To identify the biological pathways and gene ontologies enriched in this genome-wide association study (GWAS), we performed a SNP-based enrichment analysis and a genebased analysis. We then analysed the average expression of the mapped genes across 53 tissue types, to gain insight into the relative tissue expressions of these mapped genes in a broad range of tissues. We also performed linkage disequilibrium (LD) score regression on summary-level statistics for the left-versus right-handers GWAS to estimate the SNP heritability, and to estimate the genetic correlation between handedness and various neurological and psychiatric diseases from publicly available summary-level GWAS data. Finally, we looked for correlations with clinical phenotypes collected directly from the entire UK Biobank population (corrected for multiple comparisons across phenotypes, n = 1345 and loci, n = 4).
Genotype-imaging analysis
For the genotype-imaging study, we used BGENIE v1.2 to carry out GWA analyses of the significant loci for handedness against each of the IDPs [see URLs in the Supplementary material for BGENIE, and the Oxford Brain Imaging Genetics (BIG) web browser, which allows users to browse associations by SNP, gene or phenotype]. Results were considered significant after Bonferroni correction for multiple comparisons across all IDPs (n = 3144) and loci (n = 4).
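For intuition, here is a minimal sketch of a per-SNP association test and of the Bonferroni threshold used in this study. BGENIE itself implements a more sophisticated linear model over imputed dosages; the simple OLS below and the helper names are our own illustration.

```python
import numpy as np

def snp_idp_assoc(dosage, idp):
    """OLS of a preprocessed IDP on allele dosage (0/1/2); returns beta and t."""
    X = np.column_stack([np.ones(len(dosage)), dosage])
    beta, rss, *_ = np.linalg.lstsq(X, idp, rcond=None)
    sigma2 = rss[0] / (len(idp) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

# Bonferroni threshold across all IDPs and handedness loci, as in the text.
n_idps, n_loci, alpha = 3144, 4, 0.05
p_threshold = alpha / (n_idps * n_loci)   # approximately 4.0e-6
```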
As all the participants' diffusion images are non-linearly registered to a common space (Alfaro-Almagro et al., 2018), we were then able to carry out a voxel-by-voxel analysis of the most consistent result identified with our IDPs using regression against the count of the non-reference allele (0, 1 and 2). This was performed to display the full spatial extent of the relevant variants' effects, and to investigate whether any apparent lateralization of the IDP results might be due to slight differences in significance (relative to threshold). Results were considered significant after a conservative Bonferroni correction for multiple comparisons across space (number of voxels in the image mask used to carry out the statistical analyses).
These significant voxelwise results in the white matter were then subsequently used as starting points (seed masks) for the virtual reconstruction and identification of the tracts to which they belong. For this, we ran the probabilistic tractography tool from FSL (probtrackx) with default settings (Behrens et al., 2003) on 100 randomly chosen imaged UK Biobank participants.
Further details of the methodology and results are given in the Supplementary material.
Handedness and imaging: left-handers have stronger functional connectivity between right and left language networks
Directly comparing all 3144 IDPs between the brain-scanned UK Biobank participants (721 left-handers and 6685 right-handers) yielded numerous significant results, all but one involving resting-state functional MRI measures (Supplementary Table 2).
The top 10 associations were all measures of functional connectivity between pairs of resting-state networks ('edges'), the most prevalent being the homologue of the language network in the right hemisphere, encompassing Broca's area (BA44 and 45), regions around the superior temporal sulcus, as well as premotor and primary motor regions centred around the tongue and mouth. Overall, these functional connectivity results showed, in left-handers, (i) a stronger connectivity between right and left (homologous) language networks (Fig. 1A and B); and (ii) a weaker connectivity between the right homologous language network and the default-mode network (DMN) and salience network (Supplementary Table 2A; 'stronger connectivity' corresponds to higher absolute values of partial correlation between the time courses of the two resting-state networks involved).
The locus rs13017199 is ∼40 kb upstream of, and an expression quantitative trait locus (eQTL) of, MAP2 (microtubule-associated protein 2); rs3094128 is ∼1.2 kb downstream of TUBB (tubulin beta class 1), ∼1 kb upstream of FLOT1, and an eQTL of MICB. rs199512, which lies in an intron of WNT3, is an eQTL of MAPT and MAPT-AS1, and is located within a large LD block within a common inversion polymorphism referred to as the MAPT (microtubule associated protein tau) locus (Table 1 and Supplementary Table 3).
Figure 1: Language-related grey matter regions functionally involved with self-reported handedness are connected by white matter tracts associated with rs199512. (A and B) Left-handedness was most strongly associated with an increase in functional connectivity (temporal correlation) between the right homologous language functional network (in green, encompassing Broca's areas, the planum temporale and superior temporal sulcus, Z > 5) and a split of the left language functional network (in red-yellow, Broca's areas and planum temporale shown in A, superior temporal sulcus shown in B, Z > 5). These language-related functional networks are overlaid on the cortical surface. (C) Voxelwise effects in white matter associated with rs199512 (in red, P < 3.6 × 10−7) were used as seeds for probabilistic tractography, which reconstructed the arcuate and superior longitudinal fasciculus (III) (in blue-light blue, thresholded for better visualization at 250 samples). Results are overlaid on the MNI T1-weighted template (axial views: z = 27, 12, −3 mm; sagittal views: x = −39, 39 mm). These white matter tracts clearly link the grey matter areas present in the lateralized right- and left-sided language functional networks (in green and red-yellow, respectively, also shown in A and B).
Handedness loci are enriched for neuronal development and neurodegenerative phenotypes
The gene-based analysis demonstrated that the four gene sets and gene ontology terms with the most overlapping genes pertained to neuronal morphogenesis, differentiation, migration and gliogenesis (Supplementary Table 4). The SNP-based enrichment analysis on all nominally significant SNPs (P < 5 × 10−5) in the same GWAS showed that the top two enrichments were for Parkinson's disease (P = 2.6 × 10−19) and neurodegenerative disease (P = 2.8 × 10−12); of the top 10 enrichments, eight were for neurological disorder phenotypes (Supplementary Table 5). Consistent with this enrichment, positional gene mapping of the left- versus right-handers GWAS summary statistics revealed a set of genes that are highly expressed in several brain tissues (Supplementary Fig. 2).
Genetics of handedness correlate with psychiatric phenotypes and Parkinson's disease
First, we performed LD score regression to examine correlations between our GWAS of left-handedness and neurodegenerative and psychiatric phenotypes obtained from publicly available GWAS datasets via the LDHub interface. Our most statistically significant correlations were with schizophrenia (r_g = 0.1324, P = 0.0021) and Parkinson's disease (r_g = −0.2379, P = 0.0071), and, at trend level, with anorexia nervosa (r_g = 0.1504, P = 0.011) and bipolar disorder (r_g = 0.1548, P = 0.025) (Supplementary Table 6). Then, by investigating correlations with clinical phenotypes collected directly from the UK Biobank participants, we found significant positive associations between our handedness-associated loci and numerous mental health phenotypes (rs199512 and rs3094128), as well as a negative correlation between the allele predisposing to left-handedness at rs199512 and having a family history of Parkinson's disease on the maternal side (and, at trend level, on the paternal side: beta = 0.0014, P_uncorrected = 0.004) (Table 2).
Genetics of handedness and imaging: rs199512 is associated with structural connectivity between language areas
Consistent with rs199512 lying in a gene coding for, and being an eQTL of, proteins involved in brain development and axonal guidance (WNT3, MAPT, MAPT-AS1), this SNP yielded many highly significant associations with measures of white matter structural connectivity (diffusion imaging IDPs) (Supplementary Table 7). In particular, these differences were revealed most strongly in tracts linking Broca's area and temporoparietal junction areas (arcuate/superior longitudinal fasciculus III), i.e. specifically the same brain regions found to be differentially functionally connected in our direct handedness-imaging analysis (Fig. 1C).
Discussion
Through our top SNP associated with handedness, rs199512, we have identified a common genetic influence on handedness, Parkinson's disease, many mental health phenotypes (such as neuroticism or mood swings, Table 2), and the integrity of the arcuate and superior longitudinal (III) fasciculi. These language-related tracts have been consistently associated with schizophrenia and auditory hallucinations (Hubl et al., 2004). The lack of lateralization in our white matter results might be surprising at first, but seems to be consistent with a recent study that failed to find any significant associations of handedness with grey matter asymmetries (Kong et al., 2018). We also found no significant difference in the IDPs of grey matter structure (grey matter volume, as well as cortical thickness and area) between left- and right-handers, including measures of asymmetry, although we did not specifically assess the shape/depth of the central sulcus or the gyrification pattern of Heschl's gyrus. Of note, however, the white matter tracts associated with rs199512 link grey matter regions known to show the strongest asymmetries from an early developmental stage (Dubois et al., 2010).
Remarkably, all of the grey matter regions connected by these language-related white matter tracts specifically make up the functional (homologous) language networks that differ between left- and right-handers (Fig. 1). Our finding of a stronger functional connectivity, in this case a higher positive functional connectivity, between right and left (homologous) language networks in left-handers is consistent with imaging studies that have shown more symmetrical functional activations in language comprehension and language generation in left-handers (Tzourio et al., 1998; Pujol et al., 1999; Knecht, 2002; Joliot et al., 2016). One of the early studies demonstrated a linear relationship between the rate of right-lateralization of language dominance and the degree of left-handedness (Knecht, 2002), while the largest functional study to date (153 left-handers, 137 right-handers) showed a strong atypical pattern of lateralization for language production in 7% of left-handers, but in no right-handers (Joliot et al., 2016). Additional evidence for the stronger involvement of right language-related brain regions in left-handers could be seen in our results as a weaker suppression of the DMN influence (Anticevic et al., 2012) on the right language functional network. Except for one single 'edge' in the lower-dimensional decomposition (Supplementary Table 2B: ICA25 edge 50, r = 0.04, just above the significance level of 10−6), we found no effect of handedness on motor networks. While there might be confounds in such functional connectivity measures (Friston, 2011; Duff et al., 2018), we found no association in particular with physiological measures of heart rate and blood pressure (systolic and diastolic) for the two topmost significant edges (visualized in Fig. 1A and B).
As the effect of polymorphisms related to handedness could be seen specifically in language-related tracts connecting the brain regions of the language networks, our functional connectivity findings may thus be the hallmarks in the adult brain of some very early genetically-guided events happening in the white matter cytoskeleton during development. Such genetic effects in the human white matter probably mirror similar, very early cytoskeletal processes observed in the development of chirality in gastropods and amphibians (Davison et al., 2016). It is thus perhaps unsurprising that, in total, three of the four loci correlating with handedness in our GWAS are associated with genes strongly involved in brain development and patterning (MAP2, TUBB/MICB, WNT3/MAPT). In particular, microtubules (MAP2, TUBB, MAPT), as integral components of the neuronal cytoskeleton, play a key role in neuronal morphogenesis and migration. WNT3 has also been shown to act as an axon guidance molecule and, strikingly, as a gradient for retinotopic mapping along the medial-lateral axis (Schmitt et al., 2006). Of note, rs3094128 is an eQTL of MICB, which is crucial to brain development and plasticity and may mediate both genetic and environmental involvement in schizophrenia (McAllister, 2014). There is a plethora of literature demonstrating a preponderance of left-handedness in an array of psychiatric disorders, including meta-analyses in schizophrenia (Hirnstein and Hugdahl, 2014), supporting the view that there is a genetic link between handedness, brain lateralization and schizophrenia (Berlim et al., 2003; Francks et al., 2007). In line with this, we found a statistically significant positive correlation between left-handedness and schizophrenia using LD score regression (Supplementary Table 6).
Table 2: Two loci associated with left-handedness (rs199512 and rs3094128) were also significantly associated with numerous mental health variables and with familial history of Parkinson's disease in the genotyped UK Biobank participants. Only results surviving correction for multiple comparisons across loci (n = 4) and across clinical phenotypes (n = 1345, Supplementary material) are presented, ranked by effect size. Clinical phenotypes directly related to neurological or mental health symptoms are highlighted in bold. (a) Direction refers to the correlation between the phenotype in question and the allele that predisposes to non-right-handedness for rs199512 and rs3094128; a positive value indicates that the allele predisposing to non-right-handedness is positively correlated with the phenotype.
Perhaps the best-known pathological associations of MAPT are with Parkinson's and Alzheimer's diseases, with evidence for genetic overlap between these two neurodegenerative disorders within this extended MAPT region (Desikan et al., 2015). Several polymorphisms in and around MAPT have been discovered in GWAS of Parkinson's disease (Satake et al., 2009). Those SNPs likely account for the genetic enrichment observed between handedness and Parkinson's disease in both our LD score regression and SNP-based enrichment analyses. We also identified a strong negative relationship between the allele predisposing to left-handedness at rs199512, which is an eQTL of MAPT and MAPT-AS1 (Supplementary Table 3), and a diagnosis of Parkinson's disease in the mother of the UK Biobank participants (Table 2). This association, seen only in the parents of the participants, is probably a reflection of the relatively young recruitment age in UK Biobank (40-69), meaning that only a few genetically susceptible individuals will have developed the disease themselves at that age. The negative association between the left-handedness-predisposing allele and a maternal family history of Parkinson's disease is consistent with the LD score regression analyses, where Parkinson's disease also showed a negative correlation with left-handedness, in contrast to the majority of phenotypes examined. Notably, rs199512 is also in an intron of WNT3, which has itself been implicated in Parkinson's disease (Simón-Sánchez et al., 2009).
Findings from previous GWAS and neuroimaging studies of human handedness have been equivocal, with a few exceptions (Hatta, 2007; Scerri et al., 2011; Brandler et al., 2013), most likely owing to small- to medium-sized study populations (Guadalupe et al., 2014). The considerable size of the UK Biobank cohort and imaging sub-cohort (∼400,000 and ∼9,000, respectively) has allowed us to discover novel loci and correlations between handedness and imaging phenotypes. The relatively modest effect sizes of the associated variants, and the heritability estimate of handedness explained by all SNPs in the left- versus right-handers GWAS (0.012), are consistent with a polygenic model of inheritance incorporating many variants of very low effect size. Similarly, the strongest effect of handedness in the brain explained about 1.4% of the variance seen in functional connectivity between the two language-related networks, in contrast with the larger, albeit still modest, effects seen in studies using dedicated language task-functional MRI or functional transcranial Doppler sonography (Tzourio et al., 1998; Pujol et al., 1999; Groen et al., 2013; Joliot et al., 2016). This is the first study to identify, in the general population, genome-wide significant loci for human handedness in, and eQTLs of, genes associated with brain development, microtubules and patterning. While replication in a large, well-powered independent cohort is needed to confirm these associations, it is striking that the associated loci are also strongly positively correlated with schizophrenia and negatively correlated with Parkinson's disease. In particular, our most significant SNP, rs199512, was not only directly associated with mental health phenotypes and familial history of Parkinson's disease in the UK Biobank participants, but also with structural connectivity measures in white matter tracts connecting language-related brain areas. Thus, this locus has biological plausibility in contributing to differences in the neurodevelopmental connectivity of language areas. The lateralization of brain language function was strongly related to handedness; whether increased bilateral language function gives left-handers a cognitive advantage at verbal tasks remains to be investigated separately in a large dataset offering both well-characterized verbal cognition testing (not available in UK Biobank) and FSLnets-like functional connectivity measures, such as the Human Connectome Project (Anticevic et al., 2012; Van Essen et al., 2013; Somers et al., 2015). This study thus represents an important advance in our understanding of human handedness, offers mechanistic insights into the observed correlations between chirality and microtubules in the brain, and suggests an overlap of genetic architecture between handedness and certain neurodegenerative and psychiatric phenotypes. | 2019-08-23T02:03:33.690Z | 2019-09-05T00:00:00.000 | {
"year": 2019,
"sha1": "0a32c528541b89c8601424b4b03b3060dcf9c408",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/brain/article-pdf/142/10/2938/30746105/awz257.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f916b41828604cec3c10054ff02ed1d349dac6d6",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
52104940 | pes2o/s2orc | v3-fos-license | Beyond Weight Tying: Learning Joint Input-Output Embeddings for Neural Machine Translation
Tying the weights of the target word embeddings with the target word classifiers of neural machine translation models leads to faster training and often to better translation quality. Given the success of this parameter sharing, we investigate other forms of sharing in between no sharing and hard equality of parameters. In particular, we propose a structure-aware output layer which captures the semantic structure of the output space of words within a joint input-output embedding. The model is a generalized form of weight tying which shares parameters but allows learning a more flexible relationship with input word embeddings and allows the effective capacity of the output layer to be controlled. In addition, the model shares weights across output classifiers and translation contexts which allows it to better leverage prior knowledge about them. Our evaluation on English-to-Finnish and English-to-German datasets shows the effectiveness of the method against strong encoder-decoder baselines trained with or without weight tying.
Introduction
Neural machine translation (NMT) predicts the target sentence one word at a time, and thus models the task as a sequence classification problem where the classes correspond to words. Typically, words are treated as categorical variables which lack description and semantics. This makes training speed and parametrization dependent on the size of the target vocabulary (Mikolov et al., 2013). Previous studies overcome this problem by truncating the vocabulary to limit its size and mapping out-of-vocabulary words to a single "unknown" token. Other approaches attempt to use a limited number of frequent words plus sub-word units (Sennrich et al., 2016), the combination of which can cover the full vocabulary, or to perform character-level modeling (Chung et al., 2016; Lee et al., 2017; Costa-jussà and Fonollosa, 2016; Ling et al., 2015), with the former being the more effective of the two. The idea behind these alternatives is to overcome the vocabulary size issue by modeling the morphology of rare words. One limitation, however, is that the semantic information of words or sub-word units learned by the input embedding is not considered when learning to predict output words. Hence, these models rely on a large number of examples per class to learn proper word or sub-word unit output classifiers.
One way to consider information learned by input embeddings, albeit restrictively, is with weight tying i.e. sharing the parameters of the input embeddings with those of the output classifiers (Press and Wolf, 2017;Inan et al., 2016) which is effective for language modeling and machine translation (Sennrich et al., 2017;Klein et al., 2017). Despite its usefulness, we find that weight tying has three limitations: (a) It biases all the words with similar input embeddings to have a similar chance to be generated, which may not always be the case (see Table 1 for examples). Ideally, it would be better to learn distinct relationships useful for encoding and decoding without forcing any general bias. (b) The relationship between outputs is only implicitly captured by weight tying because there is no parameter sharing across output classifiers. (c) It requires that the size of the translation context vector and the input embeddings are the same, which in practice makes it difficult to control the output layer capacity.
In this study, we propose a structure-aware output layer which overcomes the limitations of previous output layers of NMT models. To achieve this, we treat words and subwords as units with textual descriptions and semantics. The model consists of a joint input-output embedding which learns what to share between input embeddings and output classifiers, but also shares parameters across output classifiers and translation contexts to better capture the similarity structure of the output space and leverage prior knowledge about this similarity. This flexible sharing allows it to distinguish between features of words which are useful for encoding, generating, or both. Table 1 shows examples of the proposed model's input and output representations, compared to those of a softmax linear unit with or without weight tying. This proposal is inspired by joint input-output models for zero-shot text classification (Yazdani and Henderson, 2015; Nam et al., 2016a), but innovates in three important directions, namely in learning complex non-linear relationships, controlling the effective capacity of the output layer and handling structured prediction problems.
Table 1: Top-5 most similar input and output representations to two query words based on cosine similarity for an NMT model trained without (NMT) or with weight tying (NMT-tied) and with our structure-aware output layer (NMT-joint) on De-En (|V| ≈ 32K). Our model learns representations useful for encoding and generation which are more consistent with the dominant semantic and syntactic relations of the query, such as verbs in past tense, adjectives and nouns (inconsistent words are marked in red).
Our contributions are summarized as follows: • We identify key theoretical and practical limitations of existing output layer parametrizations such as softmax linear units with or without weight tying and relate the latter to joint input-output models.
• We propose a novel structure-aware output layer which has a flexible parametrization for neural MT and demonstrate that its mathematical form is a generalization of existing output layer parametrizations.
• We provide empirical evidence of the superiority of the proposed structure-aware output layer on morphologically simple and complex languages as targets, including under challenging conditions, namely varying vocabulary sizes, architecture depth, and output frequency. The evaluation is performed on 4 translation pairs, namely English-German and English-Finnish in both directions, using BPE (Sennrich et al., 2016) with varying numbers of merge operations to investigate the effect of the vocabulary size on each model. The main baseline is a strong LSTM encoder-decoder model with 2 layers on each side (4 layers) trained with or without weight tying on the target side, but we also experiment with deeper models with up to 4 layers on each side (8 layers). To improve efficiency on large vocabulary sizes we make use of negative sampling as in (Mikolov et al., 2013), sketched below, and show that the proposed model is the most robust to such approximate training among the alternatives.
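As a rough illustration of the approximate training mentioned in the last contribution, the following PyTorch sketch implements a negative-sampling style loss over a linear output layer. The uniform sampler and the number of negatives are our own assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def negative_sampling_loss(h, W, b, target, num_neg=100):
    """h: (batch, d_h) decoder states; W: (|V|, d_h) output weights;
    b: (|V|,) biases; target: (batch,) gold word ids.
    Scores the gold word against num_neg sampled words instead of
    normalizing over the full vocabulary."""
    batch = h.size(0)
    # Uniform negatives for simplicity; a unigram-based sampler is common.
    neg = torch.randint(0, W.size(0), (batch, num_neg), device=h.device)
    pos = (W[target] * h).sum(-1) + b[target]                        # (batch,)
    negs = torch.bmm(W[neg], h.unsqueeze(-1)).squeeze(-1) + b[neg]   # (batch, num_neg)
    return (-F.logsigmoid(pos) - F.logsigmoid(-negs).sum(-1)).mean()
```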
Background: Neural MT
The translation objective is to maximize the conditional probability of emitting a sentence in a target language Y = {y_1, ..., y_n} given a sentence in a source language X = {x_1, ..., x_m}, noted p_Θ(Y|X), where Θ are the model parameters learned from a parallel corpus of length N:

$$\Theta^{*} = \operatorname*{argmax}_{\Theta} \sum_{i=1}^{N} \log p_{\Theta}\big(Y^{(i)} \mid X^{(i)}\big) \qquad (1)$$

By applying the chain rule, the output sequence can be generated one word at a time by calculating the following conditional distribution:

$$p\big(y_t \mid y_1^{t-1}, X\big) = \operatorname{softmax}\big(f_{\Theta}(y_1^{t-1}, X)\big) \qquad (2)$$

where f_Θ returns a column vector with an element for each y_t. Different models have been proposed to approximate the function f_Θ (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014; Gehring et al., 2017; Vaswani et al., 2017). Without loss of generality, we focus here on an LSTM-based encoder-decoder model with attention (Luong et al., 2015).
Softmax Linear Unit
The most common output layer (Figure 1a) consists of a linear unit with a weight matrix W ∈ ℝ^{d_h×|V|} and a bias vector b ∈ ℝ^{|V|}, followed by a softmax activation function, where V is the vocabulary; we note this model NMT. For brevity, we focus our analysis specifically on the numerator of the normalized exponential which characterizes softmax. Given the decoder's hidden representation h_t with dimension size d_h, the output probability distribution at a given time, y_t, conditioned on the input sentence X and the previously predicted outputs y_1^{t-1}, can be written as follows:

$$p\big(y_t \mid y_1^{t-1}, X\big) \propto \exp\big(W^{\top} h_t + b\big) = \exp\Big(\sum_{i=1}^{|V|} \big(W_i^{\top} h_t + b_i\big)\, \mathbb{I}(y_t = i)\Big) \qquad (3)$$

where 𝕀 is the identity function. From the expanded form on the right-hand side of the above equation, we observe that there is no explicit output space structure learned by the model because there is no parameter sharing across outputs; the parameters for output class i, W_i^⊤, are independent of the parameters for any other output class j, W_j^⊤.
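A minimal PyTorch sketch of this baseline output layer (module and variable names are ours):

```python
import torch.nn as nn

class SoftmaxLinearUnit(nn.Module):
    """Eq. 3: an independent weight vector W_i and bias b_i per output word."""
    def __init__(self, vocab_size, d_h):
        super().__init__()
        self.proj = nn.Linear(d_h, vocab_size)      # W: (|V|, d_h), b: (|V|,)

    def forward(self, h_t):                         # h_t: (batch, d_h)
        return self.proj(h_t).log_softmax(dim=-1)   # log p(y_t | y_1^{t-1}, X)
```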
Softmax Linear Unit with Weight Tying
The parameters of the output embedding W can be tied with the parameters of the input embedding E ∈ ℝ^{|V|×d} by setting W = E^⊤, noted NMT-tied. This can happen only when the input dimension of W is restricted to be the same as that of the input embedding (d = d_h). This creates practical limitations because the optimal dimensions of the input embedding and translation context may actually be such that d_h ≠ d.
With tied embeddings, the parametrization of the conditional output probability distribution from Eq. 3 can be re-written as:

$$p\big(y_t \mid y_1^{t-1}, X\big) \propto \exp\big(E\, h_t + b\big) \qquad (4)$$

As above, this model does not capture any explicit output space structure. However, previous studies have shown that the input embedding learns linear relationships between words similar to distributional methods (Mikolov et al., 2013). The hard equality of parameters imposed by W = E^⊤ forces the model to re-use this implicit structure in the output layer and increases the modeling burden of the decoder itself by requiring it to match this structure through h_t. Assuming that the latent linear structure which E learns is of the form E ≈ E_l W, where E_l ∈ ℝ^{|V|×k}, W ∈ ℝ^{k×d} and d = d_h, then Eq. 4 becomes:

$$p\big(y_t \mid y_1^{t-1}, X\big) \propto \exp\big(E_l\, W\, h_t + b\big) \qquad (5)$$

The above form, excluding the bias b, shows that weight tying learns a similar linear structure, albeit implicitly, to joint input-output embedding models with a bilinear form for zero-shot classification (Yazdani and Henderson, 2015; Nam et al., 2016a). This may explain why weight tying is more sample-efficient than the baseline softmax linear unit, but it also motivates the learning of explicit structure through joint input-output models.
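In code, weight tying amounts to sharing a single parameter tensor between the embedding and the classifier, as in this sketch (note the d_h = d constraint discussed above):

```python
import torch.nn as nn

class TiedSoftmaxLinearUnit(nn.Module):
    """Eq. 4: output weights are the input embedding, so d_h must equal d."""
    def __init__(self, vocab_size, d):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)    # E: (|V|, d)
        self.proj = nn.Linear(d, vocab_size)        # W: (|V|, d), b: (|V|,)
        self.proj.weight = self.embed.weight        # W = E, a single shared tensor

    def forward(self, h_t):                         # requires h_t of size d (= d_h)
        return self.proj(h_t).log_softmax(dim=-1)
```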
Challenges
We identify two key challenges of the existing parametrizations of the output layer: (a) their difficulty in learning complex structure of the output space due to their bilinear form and (b) their rigidness in controlling the output layer capacity due to their strict equality of the dimensionality of the translation context and the input embedding.
Figure 1: Schematic of existing output layers and the proposed output layer for the decoder of the NMT model, with source context vector c_t, previous word y_{t-1} ∈ ℝ^d, and decoder hidden states h_t ∈ ℝ^{d_h}. The structure-aware output layer is a joint embedding between translation contexts and word classifiers.
Learning Complex Structure
The existing joint input-output embedding models (Yazdani and Henderson, 2015; Nam et al., 2016a) have the following bilinear form:

$$p\big(y_t \mid y_1^{t-1}, X\big) \propto \exp\big(E\, W\, h_t\big) \qquad (6)$$

where W ∈ ℝ^{d×d_h}. We can observe that the above formula can only capture linear relationships between the encoded text (h_t) and the input embedding (E) through W. We argue that for structured prediction, the relationships between different outputs are more complex, due to complex interactions of the semantic and syntactic relations across outputs but also between outputs and different contexts. A more appropriate form for this purpose would include a non-linear transformation σ(·), for instance with either

$$\sigma\big(E\, W\big)\, h_t \qquad (7)$$

which introduces non-linear structure on the output side, or

$$E\, \sigma\big(W\, h_t\big) \qquad (8)$$

which introduces non-linear structure on the context side.
Controlling Effective Capacity
Given the above definitions, we now turn our focus to a more practical challenge, which is the capacity of the output layer. Let $\Theta_{base}$, $\Theta_{tied}$, $\Theta_{bilinear}$ be the parameters associated with a softmax linear unit without and with weight tying, and with a joint bilinear input-output embedding, respectively. The capacity of the output layer in terms of the effective number of parameters can be expressed as:

$$C_{base} = |\Theta_{base}| = |V| \times d_h + |V| \qquad (8)$$
$$C_{tied} = |\Theta_{tied}| = |V| \times d + |V| \qquad (9)$$
$$C_{bilinear} = |\Theta_{bilinear}| = d \times d_h + |V| \qquad (10)$$

But since the parameters of $\Theta_{tied}$ are tied to those of the input embedding, the effective number of parameters dedicated to the output layer is only $|\Theta_{tied}| = |V|$.
The capacities above depend on external factors, that is $|V|$, $d$ and $d_h$, which affect not only the output layer parameters but also those of other parts of the network. In practice, for $\Theta_{base}$ the capacity $d_h$ can be controlled with an additional linear projection on top of $h_t$ (e.g., as in the OpenNMT implementation), but even in this case the parametrization would still be heavily dependent on $|V|$. Thus, the following inequality for the effective capacity of these models holds true for fixed $|V|$, $d$, $d_h$:

$$C_{tied} \le C_{bilinear} \le C_{base} \qquad (11)$$

This creates in practice a difficulty in choosing the optimal capacity of the output layer which scales to large vocabularies and avoids underparametrization or overparametrization (left and right side of Eq. 11, respectively). Ideally, we would like to be able to choose the effective capacity of the output layer more flexibly, moving freely between $C_{bilinear}$ and $C_{base}$ in Eq. 11.
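As a sanity check on these counts, the small helper below (our own, not from the paper) evaluates the three effective capacities for typical values; the ordering matches the reconstructed Eq. 11.

```python
def output_layer_capacity(V, d, d_h):
    c_base = V * d_h + V       # dedicated W and b
    c_tied = V                 # only the bias is dedicated; W is tied to E
    c_bilinear = d * d_h + V   # joint bilinear map W plus output bias
    return c_tied, c_bilinear, c_base

# e.g. V=32000, d=d_h=512 -> (32000, 294144, 16416000)
print(output_layer_capacity(32000, 512, 512))
```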
Structure-aware Output Layer for Neural Machine Translation
The proposed structure-aware output layer for neural machine translation, noted as NMT-joint, aims to learn the structure of the output space by learning a joint embedding between translation contexts and output classifiers, as well as by learning what to share with the input embeddings (Figure 1b). In this section, we describe the model in detail, showing how it can be trained efficiently for an arbitrarily high number of effective parameters and how it is related to weight tying.
Joint Input-Output Embedding
Let $g_{inp}(h_t)$ and $g_{out}(e_j)$ be two non-linear projections of $d_j$ dimensions of any translation context $h_t$ and any embedded output $e_j$, where $e_j$ is the $j$th row vector of the input embedding matrix $E$, which have the following form:

$$g_{inp}(h_t) = \sigma(V h_t + b_v)$$
$$g_{out}(e_j) = \sigma(U e_j + b_u) \qquad (14)$$

where the matrix $U \in \mathbb{R}^{d_j \times d}$ and bias $b_u \in \mathbb{R}^{d_j}$ form the linear projection of the embedded outputs, the matrix $V \in \mathbb{R}^{d_j \times d_h}$ and bias $b_v \in \mathbb{R}^{d_j}$ form the linear projection of the translation context, and $\sigma$ is a non-linear activation function (here we use Tanh).
Note that the projections could be high-rank or low-rank for h t and e j depending on their initial dimensions and the target joint space dimension.
With $E' \in \mathbb{R}^{|V| \times d_j}$ being the matrix resulting from projecting all the outputs $e_j$ to the joint space, i.e., $E' = g_{out}(E)$, and a vector $b \in \mathbb{R}^{|V|}$ which captures the bias for each output, the conditional output probability distribution of Eq. 3 can be rewritten as follows:

$$p(y_t \mid y_1^{t-1}, X) \propto \exp\big(g_{out}(E)\, g_{inp}(h_t) + b\big) = \exp\big(E'\, g_{inp}(h_t) + b\big) \qquad (15)$$
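A compact PyTorch sketch of Eqs. 14-15 is given below; the module structure and variable names are ours, and the paper's released code may differ in details.

```python
import torch
import torch.nn as nn

class JointOutputLayer(nn.Module):
    """Sketch of Eqs. 14-15: score outputs in a shared d_j-dim joint space."""
    def __init__(self, vocab_size, d, d_h, d_j):
        super().__init__()
        self.E = nn.Embedding(vocab_size, d)            # input embedding, reused
        self.U = nn.Linear(d, d_j)                      # g_out projection of outputs
        self.V = nn.Linear(d_h, d_j)                    # g_inp projection of context
        self.b = nn.Parameter(torch.zeros(vocab_size))  # per-output bias

    def forward(self, h_t):                             # h_t: (batch, d_h)
        E_joint = torch.tanh(self.U(self.E.weight))     # g_out(E): (|V|, d_j)
        h_joint = torch.tanh(self.V(h_t))               # g_inp(h_t): (batch, d_j)
        return h_joint @ E_joint.t() + self.b           # logits over |V|

layer = JointOutputLayer(vocab_size=32000, d=512, d_h=512, d_j=2048)
probs = torch.softmax(layer(torch.randn(8, 512)), dim=-1)   # (8, 32000)
```

Note that the capacity knob is the single hyperparameter d_j, independent of |V|, d and d_h.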
What Kind of Structure is Captured?
From the above formula we can derive the general form of the joint space, which is similar to Eq. 7 with the difference that it incorporates both components for learning output and context structure:

$$p(y_t \mid y_1^{t-1}, X) \propto \exp\big(\sigma(E W_o)\, \sigma(W_c h_t) + b\big) \qquad (16)$$

where $W_o \in \mathbb{R}^{d \times d_j}$ and $W_c \in \mathbb{R}^{d_j \times d_h}$ are the dedicated projections for learning output and context structure, respectively (which correspond to the $U$ and $V$ projections in Eq. 14). We argue that both non-linear components are essential, and we validate this hypothesis empirically in our evaluation by performing an ablation analysis (Section 4.4).
How to Control the Effective Capacity?
The capacity of the model in terms of the effective number of parameters ($\Theta_{joint}$) is:

$$C_{joint} = |\Theta_{joint}| = d_j \times d + d_j \times d_h + 2 d_j + |V| \qquad (17)$$

By increasing the joint space dimension $d_j$ above, we can now move freely between $C_{bilinear}$ and $C_{base}$ in Eq. 11 without depending anymore on the external factors ($d$, $d_h$, $|V|$), as follows:

$$C_{bilinear} \le C_{joint} \le C_{base} \qquad (18)$$

However, for a very large $d_j$ the computational complexity increases prohibitively, because the projection requires a large matrix multiplication between $U$ and $E$ which depends on $|V|$.
In such cases, we resort to sampling-based training, as explained in the next subsection.
Sampling-based Training
To scale up to large output sets, we adopt the negative sampling approach of Mikolov et al. (2013). The goal is to utilize only a subset $V' \subset V$ of the vocabulary, instead of the whole vocabulary, for computing the softmax. The subset $V'$ includes all positive classes, whereas the negative classes are randomly sampled. During back-propagation, only the weights corresponding to the subset $V'$ are updated. This can be trivially extended to mini-batch stochastic optimization methods by including all positive classes from the examples in the batch and sampling negative examples randomly from the rest of the vocabulary. Given that joint space models generalize well on seen or unseen outputs (Yazdani and Henderson, 2015; Nam et al., 2016b), we hypothesize that the proposed joint space will be more sample efficient than the baseline NMT with or without weight tying, which we empirically validate with a sampling-based experiment in Section 4.5 (Table 2, last three rows with $|V| \approx 128K$).
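A simplified sketch of this sampling scheme is shown below; it scores only the candidate subset (batch positives plus uniformly sampled negatives) and remaps the gold labels into that subset, so a standard cross-entropy loss can be applied. The helper name and uniform sampler are our own simplifications.

```python
import torch

def sampled_softmax_logits(E_joint, h_joint, b, targets, num_negatives):
    """Score only a candidate subset V': batch positives + random negatives."""
    positives = targets.unique()
    negatives = torch.randint(0, E_joint.size(0), (num_negatives,))
    candidates = torch.cat([positives, negatives]).unique()        # subset V'
    logits = h_joint @ E_joint[candidates].t() + b[candidates]
    # remap gold labels to their positions inside the candidate subset
    remap = {c.item(): i for i, c in enumerate(candidates)}
    new_targets = torch.tensor([remap[t.item()] for t in targets])
    return logits, new_targets   # feed to cross_entropy over the subset only
```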
Relation to Weight Tying
The proposed joint input-output space can be seen as a generalization of weight tying ($W = E^{T}$, Eq. 3), because its degenerate form is equivalent to weight tying. In particular, this can be derived simply by setting the non-linear projection functions in Eq. 14 to be the identity function, $g_{inp}(\cdot) = g_{out}(\cdot) = I$, as follows:

$$p(y_t \mid y_1^{t-1}, X) \propto \exp\big(g_{out}(E)\, g_{inp}(h_t) + b\big) = \exp(E h_t + b) \qquad (19)$$

Overall, this new parametrization of the output layer generalizes over previous ones and addresses their aforementioned challenges from Section 2.2.
Evaluation
We compare the NMT-joint model to two strong NMT baselines trained with and without weight tying over four large parallel corpora which include morphologically rich languages as targets (Finnish and German), but also a morphologically less rich language as target (English), from WMT 2017 (Bojar et al., 2017). We examine the behavior of the proposed model under challenging conditions, namely varying vocabulary sizes, architecture depth, and output frequency.
Datasets and Metrics
The English-Finnish corpus contains 2.5M sentence pairs for training, 1.3K for development (Newstest2015), and 3K for testing (Newstest2016); the English-German corpus contains 5.8M for training, 3K for development (Newstest2014), and 3K for testing (Newstest2015). We preprocess the texts using the BPE algorithm (Sennrich et al., 2016) with 32K, 64K and 128K operations. Following the standard evaluation practices in the field (Bojar et al., 2017), the translation quality is measured using the BLEU score (Papineni et al., 2002) (multi-bleu) on tokenized text, and significance is measured with the paired bootstrap re-sampling method proposed by Koehn et al. (2007). The quality on infrequent words is measured with METEOR (Denkowski and Lavie, 2014), which provides a mechanism for measuring performance on function words separately.
To adapt it for our purposes on the English-German pairs ($|V| \approx 32K$), we set as function words different sets of words grouped according to three frequency bins, each of them containing $|V|/3$ words of high, medium and low frequency respectively, and set its parameters to {0.85, 0.2, 0.6, 0.} and {0.95, 1.0, 0.55, 0.} when evaluating on English and German, respectively.
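As a side note, paired bootstrap re-sampling is simple to sketch. The version below operates on per-sentence quality scores, which is an approximation for illustration only, since corpus-level BLEU is not a per-sentence average; all names are ours.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=1000):
    """Fraction of resampled test sets on which system A beats system B."""
    n, wins = len(scores_a), 0
    for _ in range(n_resamples):
        idx = [random.randrange(n) for _ in range(n)]   # resample with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples
```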
Model Configurations
The baseline is an encoder-decoder with 2 stacked LSTM layers on each side from OpenNMT (Klein et al., 2017), but we also experiment with varying depth in the range {1, 2, 4, 8} for German-English. The hyperparameters are set according to validation accuracy as follows: maximum sentence length of 50, 512-dimensional word embeddings and LSTM hidden states, dropout with a probability of 0.3 after each layer, and the Adam (Kingma and Ba, 2014) optimizer with an initial learning rate of 0.001. The size of the joint space is also selected on validation data in the range {512, 2048, 4096}. For efficiency, all models on corpora with $|V| \approx 128K$ and all structure-aware models with $d_j \geq 2048$ on corpora with $|V| \leq 64K$ are trained with 25% negative sampling. In terms of translation quality (Table 2), the NMT-tied model outperforms the NMT baseline in many cases, but the differences are not consistent, and it even scores significantly lower than the NMT baseline in two cases, namely on Fi → En and De → En with $|V| \approx 64K$. This validates our claim that the parametrization of the output space of the original NMT is not fully redundant; otherwise the NMT-tied would be able to match its BLEU in all cases. In contrast, the NMT-joint model consistently outperforms both baselines, with differences of up to +2.2 and +1.6 BLEU points respectively, showing that the NMT-joint model has a more effective parametrization and retains the advantages of both baselines, namely sharing weights with the input embeddings and dedicating enough parameters for generation. Overall, the highest scores correlate with a high number of BPE operations, namely 128K, 64K, 128K and 64K respectively. This suggests that the larger the vocabulary the better the performance, especially for the morphologically rich target languages, namely En → Fi and En → De. Lastly, the NMT baseline seems to be the least robust to sampling, since its BLEU decreases in two cases. The other two models are more robust to sampling; however, the difference of NMT-tied with the NMT is less significant than that of NMT-joint.
Ablation Analysis
To demonstrate whether all the components of the proposed joint input-output model are useful and to what extent they contribute to the performance, we performed an ablation analysis; the results are displayed in Table 3. Overall, all the variants of the NMT-joint outperform the baseline with varying degrees of significance. The NMT-joint with a bilinear form (Eq. 6), as in Yazdani and Henderson (2015) and Nam et al. (2016b), is slightly behind the NMT-tied and outperforms the NMT baseline; this supports our theoretical analysis in Section 2.1.2, which demonstrated that weight tying learns an implicit linear structure similar to bilinear joint input-output models.
The NMT-joint model without learning explicit translation context structure (Eq. 7a) performs similarly to the bilinear model and the NMT-tied model, while the NMT-joint model without learning explicit output structure (Eq. 7b) outperforms all the previous ones. When keeping the same capacity (with $d_j = 512$), our full model, which learns both output and translation context structure, performs similarly to the latter model and outperforms all the other baselines, including joint input-output models with a bilinear form (Yazdani and Henderson, 2015; Nam et al., 2016b). But when the capacity is allowed to increase (with $d_j = 2048$), it outperforms all the other models. Since both non-linearities are necessary to allow us to control the effective capacity of the joint space, these results show that both types of structure induction are important for reaching the top performance with NMT-joint.
Effect of Embedding Size
Performance. Figure 2 displays the BLEU scores of the proposed model when varying the size of the joint embedding, namely $d_j \in \{512, 2048, 4096\}$, against the two baselines. For the English-Finnish pairs, the increase in embedding size leads to a consistent increase in BLEU in favor of the NMT-joint model. For the English-German pairs, the difference with the baselines is much more evident, and the optimal size is observed around 2048 for De → En and around 512 for En → De. The results validate our hypothesis that there is parameter redundancy in the typical output layer. However, the ideal parametrization is data dependent and is achievable systematically only with the joint output layer, which is capacity-wise in between the typical output layer and the tied output layer.
Training speed. Table 4 displays the target tokens processed per second by the models on En → De with $|V| \approx 128K$ using different levels of negative sampling, namely 50%, 25%, and 5%. In terms of training speed, the 512-dimensional NMT-joint model is as fast as the baselines, as we can observe in all cases. For higher dimensions of the joint space, namely 2048 and 4096, there is a notable decrease in speed, which is remedied by reducing the percentage of negative samples.
Effect of Output Frequency and Architecture Depth
Figure 3 displays the performance in terms of METEOR on both directions of the German-English language pair when evaluating on outputs of different frequency levels (high, medium, low) for all the competing models. The results on De → En show that the improvements brought by the NMT-joint model against the baselines are present consistently for all frequency levels, including the low-frequency ones. Nevertheless, the improvement is most prominent for high-frequency outputs, which is reasonable given that no sentence filtering was performed and hence frequent words have a higher impact on the absolute value of METEOR. Similarly, for En → De we can observe that NMT-joint outperforms the others on high-frequency and low-frequency labels, while it reaches parity with them on the medium-frequency ones.
We also evaluated our model in another challenging condition, in which we examine the effect of the NMT architecture depth on the performance of the proposed model. The results are displayed in Table 5. They show that the NMT-joint outperforms the other two models consistently when varying the depth of the encoder-decoder architecture. The NMT-joint is overall much more robust than the NMT-tied and outperforms it consistently in all settings. Compared to the NMT baseline, which is overparametrized, the improvement, though consistent, is smaller for layer depths 3 and 4. This happens because the NMT baseline has a much higher number of parameters than the NMT-joint with $d_j = 512$.
Increasing the number of dimensions $d_j$ of the joint space should lead to further improvements, as shown in Fig. 2. In fact, our NMT-joint with $d_j = 2048$ reaches a score of 18.11 with a 2-layer deep model; hence it outperforms all the other NMT and NMT-tied models even with a deeper architecture (3-layer and 4-layer), despite utilizing fewer parameters than them (48.8M vs. 69.2-73.4M and 50.9-55.1M, respectively).
Related Work
Several studies focus on learning joint input-output representations grounded to word semantics for zero-shot image classification (Weston et al., 2011; Socher et al., 2013; Zhang et al., 2016), but there are fewer such studies for NLP tasks. Yazdani and Henderson (2015) proposed a zero-shot spoken language understanding model based on a bilinear joint space trained with a hinge loss, and Nam et al. (2016b) proposed a similar joint space trained with a WARP loss for zero-shot biomedical semantic indexing. In addition, there exist studies which aim to learn output representations directly from data, such as Srikumar and Manning (2014), Yeh et al. (2018), and Augenstein et al. (2018); their lack of semantic grounding to the input embeddings and their vocabulary-dependent parametrization, however, make them data hungry and less scalable on large label sets. All these models exhibit theoretical limitations similar to those of the softmax linear unit with weight tying, which were described in Section 2.2.
To our knowledge, there is no existing study which has considered the use of such joint input-output embeddings for neural machine translation. Compared to previous joint input-output models, our model is more flexible and not restricted to linear mappings, which have limited expressivity, but uses non-linear mappings modeled similarly to energy-based learning networks (Belanger and McCallum, 2016). Perhaps the most similar embedding model to ours is the one by Pappas and Henderson (2018), except for the linear scaling unit, which is specific to sigmoidal linear units designed for multi-label classification problems and not for structured prediction, as here.
Conclusion and Perspectives
We proposed a re-parametrization of the output layer for the decoder of NMT models which is more general and robust than a softmax linear unit with or without weight tying with the input word embeddings. Our evaluation shows that the structure-aware output layer outperforms weight tying in all cases and maintains a significant difference with the typical output layer without compromising the training speed much. Furthermore, it can successfully benefit from training corpora with large BPE vocabularies using negative sampling. The ablation analysis demonstrated that both types of structure captured by our model are essential and complementary, as well as that their combination outperforms all previous output layers, including those of bilinear input-output embedding models. Our further investigation revealed the robustness of the model to sampling-based training, to translating infrequent outputs, and to varying architecture depth.
As future work, the structure-aware output layer could be further improved along the following directions. The computational complexity of the model becomes prohibitive for a large joint projection because it requires a large matrix multiplication which depends on $|V|$; hence, we have to resort to sampling-based training relatively quickly when gradually increasing $d_j$ (e.g., for $d_j \geq 2048$). A more scalable way of increasing the output layer capacity could address this issue, for instance by considering multiple consecutive additive transformations with a small $d_j$. Another useful direction would be to use more advanced output encoders and additional external knowledge (contextualized or generically defined) for both words and sub-words. Finally, to encourage progress in joint input-output embedding learning for NMT, our code is available on Github: http://github.com/idiap/joint-embedding-nmt. | 2018-08-26T18:59:02.906Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "aa882a052cbc6dcaa04a28854a2b04676ace737a",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/W18-6308.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "8a387faa261747f0fde30b96e592cc8afa926989",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234202600 | pes2o/s2orc | v3-fos-license | Mid-Day Meal Analytics Using Machine Learning
The concept is to prevent the wastage of food from the mid-day meal scheme and to identify fraudulence occurring within the scheme. In this process, a QR-code reader is used so that students can enroll their count for food within a pre-determined time through their ID cards. A touch-screen display placed at the food-issuing place shows the menu for a particular day. The food can be monitored through a Web camera, also placed at the food-issuing place, which takes a snapshot of every individual student and thus provides the count of students reported; from the required student count, the amount of food that needs to be prepared for that day is known. This snapshot is compared with the photo taken by the Web camera at the place where the food is prepared. A Raspberry Pi runs the machine learning model that identifies the meal. A mobile application was developed to send a notification to a higher authority when a fraudulent action is detected via the camera.
Introduction
India is going through a fast change in its population structure. Along with it change the disease pattern and nourishment status of the population. The change in nourishment can be indicated by the nourishment of school-going children, as they form a major part of the community. A Planning Commission report from 2010 stated that the Mid-Day Meal (MDM) Program has achieved success in meeting the nourishment needs of school-going children and also creates social equality among government school students. The MDM scheme provides afternoon food for students in school. It was formulated by the Indian government to nourish school-going children all over the nation. Through this program, approximately 120,000,000 students of 1st to 8th classes from 1,265,000 schools are being provided with their lunch under the MDM and Education Guarantee Scheme. It is the world's largest such program. Almost 50% of the school children in the US have either their morning meals or mid-day meal at their schools (Burghardt et al., 1995); still, they may buy eatables from other places in the school. The rising trend in the variety of foods provided by the schools poses a threat to the National School Lunch Program (NSLP).
In 1995, the USDA started a School Meals Initiative for Healthy Children. The motive was to enhance the quality of nutrition provided to school children. The School Meals Initiative required the lunch to be as per the Dietary Guidelines for Americans, i.e., total fat should be less than 30% of the total calories and saturated fat less than 10%, and one-third of the recommended intake should be covered for protein, calcium, iron, Vitamin A, and Vitamin C (USDA, 1995). On a further note, more accurate and fast data collection is done through blockchain, sensor-enabled devices, and other Artificial Intelligence technologies, and thus the supply needed for the next day's meal is calculated. Mobile phones and a special system designed by Accenture are used to collect information regarding the timings at which food is delivered at schools. Special sensors (IoT) were utilized to ensure the standard of the cooked food, proper resource utilization, and the procedure by which food is cooked. This system also helped keep an eye on kitchen productivity and gave precise, on-time information, helping in making the right choices [1]. During the academic year 2006-07, there was a considerable decrease in the competitive foods provided in schools and an increase in participation in the National School Lunch Program. The participation of schools in Connecticut's program helped reduce the unhealthy school nutrition offerings that were provided competitively.
Nutrition standards for all foods sold in school as required by the Healthy, Hunger-Free Kids Act of 2010 [2]
This rule, issued ahead of its finalization, made changes to the NSLP and the breakfast program. It established nutrition standards for all eatables sold in school other than those provided under the above-mentioned programs. Amendments made by Section 208 of the Healthy, Hunger-Free Kids Act of 2010 (HHFKA) require the Secretary to implement standards for the foods sold, which should be as per the latest Dietary Guidelines for Americans. It also instructs the Secretary to take into account scientific recommendations on nutritional quality; the nutrition standards already in place in schools, including the standards schools set themselves for other drinks and eatables; existing state and local standards; the practical application of the standards; and special consideration given to infrequent fundraisers.
School meal program in Japan [3]
The motto of this program is to improve the physical and mental status of school-going children. Later, "The School Lunch Act" was rewritten in 2008 and the objective was altered to "promoting Shokuiku". As of May 2009, almost 10 million students were actively involved in the program. It is also considered a means of improving the mental status of the students. The children themselves are given the responsibility of serving food and clearing the dishes. This encourages them to learn social behavior by having food along with other students. It also helps them understand dietary requirements, and food culture is promoted by studying the menu of each meal.
School meal program in Vietnam: reality and future plan [4]
Nowadays, parents have become so busy that they are not able to spend time preparing breakfast or food for their kids. This program was first initiated by the Department of Education for kindergartens in 1977 and later expanded to cover elementary schools. As of now, all kindergartens and nine-tenths of elementary schools have this program. The motto is to give children optimum nutrition and also to serve as an aid for education and communication. Almost 90% of the food is prepared in the kitchens of the educational institutions, while the other 10% is supplied by companies where the meals are cooked. According to the weekly diet plan, it covers 30% of the Recommended Dietary Allowance (RDA) for the kids. For diet photo identification, one dietary assessment project described food identification over a small set of items, planned to be used in a smartphone-based food-logging system; it identified 85 food items and achieved 62.5% precision on photos of Japanese cuisine obtained online, using kernel learning for feature fusion. The Pittsburgh Fast-Food Image Dataset is a dataset of photos of American fast food that has been utilized to examine food-identification strategies. The nutritional balance of a diet has been estimated by image processing, and image retrieval has been applied to food recording. Deep learning is nowadays used for recognizing diets. Deep learning is a common term for methods with deep structure that are used to solve complicated problems. A very important feature is that enhanced photo features are derived spontaneously through training. The convolutional neural network (CNN) is a strategy that fulfils the needs of the deep learning approach. CNNs are the current trend for recognizing images in challenging settings such as the Large-Scale Visual Recognition Challenge.
Experimental Methodology
A web camera is placed at the cooking place and at the place where the food is served. The camera is used to monitor the process with the given machine learning model, a CNN. The convolutional neural network [5] is a current method for recognizing photos. It is comprised of a neural network of many layers, where each layer uses small patches of the preceding layer. It is robust against small shifts and rotations. The network is composed of two kinds of layers, namely convolution layers and pooling layers. It differs from a general, fully inter-connected neural network in that the weights can be viewed as n×n filters (n < input size). Each input is convolved with the filters. Every layer consists of multiple filters that produce different outputs. In order to recognize images, various characteristics are extracted by the filters, which are otherwise called (convolution) kernels. The pooling layer creates its outputs by aggregating activations over rectangular regions. Various aggregation methods are available; examples are maximum activation and average activation. Thus, the convolutional neural network is largely invariant with respect to position. A typical CNN consists of several convolution and pooling layers followed by fully inter-connected layers that give the end result. In photo classification, every individual unit of the final layer represents a class probability. A large number of photos of typical meals is needed for recognition. A meal photo normally consists of a number of ingredients. When we analyze photos of ingredients, each ingredient region has to be recognized and separated to provide the needed dataset. Mobile food-logging applications are useful for this: we utilize the information obtained from FoodLog (FL) [6]. FLs can be utilized by the general public to record their own meals as photos and in written form. Users of FLs capture images of their meals and annotate each region of the meal with the name of the food item, often selected from a food database. Hence, clean information about the meals, the foods, and their regions is available. In our tests we tend to magnify the images given by the users because they often annotate small regions of the items. FoodLog is an application available to the general public, and as the number of users increases, the available database of images also grows. Starting from the initiation of FoodLog in 2013, we gathered almost 2 months of users' data. Approximately 170,000 photos were utilized. This information was obtained from the general public using FoodLog.
Fig 1. Training Data
There is a huge variation, and a lot of bias, in the occurrence of some food items. We selected the ten most frequently recorded food items, as shown in Figure 1. Training a convolutional neural network [7] needs a huge amount of data; hence, we utilized the ten most frequent items for training [8-17].
A complete photo also contains background; hence, food detection separates the photo into two classes, i.e., food and non-food. We conducted an examination to analyze the CNN's detection capability. In order to assess the detection performance, we utilized another dataset that also includes non-food pictures. We utilized 1,234 food and 1,980 non-food pictures, respectively. We also included human faces and other scenes for experimental purposes. The overall architecture of the proposed system is shown in Figure 2.
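For illustration, a small PyTorch CNN of the kind described above is sketched below. It is a generic stand-in for the food/non-food detector: the layer sizes, input resolution, and class count are our own assumptions, not the network used in the paper.

```python
import torch
import torch.nn as nn

class FoodDetector(nn.Module):
    def __init__(self, num_classes=2):               # food vs. non-food
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                             # x: (batch, 3, 64, 64)
        x = self.features(x)                          # convolution + pooling
        return self.classifier(x.flatten(1))          # class logits

model = FoodDetector()
logits = model(torch.randn(4, 3, 64, 64))             # shape (4, 2)
```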
Materials and methods
The overall process consists of the following materials and methods.
QR-code scanner
A QR-code scanner is used so that students can enroll their count for food within a predetermined time through their ID cards with QR codes, as shown in Figure 3.
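A minimal sketch of this enrollment step using OpenCV's built-in QR detector is shown below; the file name and the handling of the decoded ID are placeholders.

```python
import cv2

detector = cv2.QRCodeDetector()
frame = cv2.imread("id_card.jpg")                  # or a frame from the webcam
student_id, points, _ = detector.detectAndDecode(frame)
if student_id:                                     # empty string if no QR found
    print("Enrolled:", student_id)                 # add to the day's meal count
```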
User login
This process logs the user in with a specific user ID and password, as shown in Figure 6.
Food and supervisor details
The food details, along with how many meals have been selected for the particular day, are displayed to the supervisor. Based on this count, the food is cooked, as shown in Figures 7 and 8.
Food identification process in cooking place
The food identification process verifies that the food being cooked matches the menu declared for the particular day, as shown in Figure 9.
Results and Discussion
First of all, students need to enroll their count for food within some predetermined time through their ID cards with QR codes. A touch-screen display placed at the food-issuing place shows the menu for the particular day. The display also shows the total number of counts enrolled for food, from which the total student count and the amount of food that needs to be prepared for that day are known. A Web camera placed at the food-issuing place takes a snapshot of every individual student, which provides the count of students reported. The snapshots from the Web camera placed at the food-issuing place show the menu displayed on the touch-screen display. Each snapshot is compared with the photo taken by the Web camera at the food-preparing place. A video feed from the Web camera, showing the food serving area, the school, etc., is provided to the students. All other information is stored in a central database, and alert messages are sent to the central coordinator in case of any discrepancies through a mobile application. | 2021-05-11T00:07:07.018Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "dd7313a3778282f8e6bfb54789e1f741071a8de6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1717/1/012035",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "066c05e5b8a04e7d1b380a08910f35fbd868e934",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Psychology",
"Physics"
]
} |
270243449 | pes2o/s2orc | v3-fos-license | The Assessment of the Efficacy, Safety, and Challenges of Ketogenic Diet Therapy in Children with Epilepsy: The First Experience of a Single Center
Background and Objectives: Ketogenic diet therapy (KDT) has been used as a non-pharmacological treatment for childhood refractory epilepsy. Its efficacy and safety have been described in numerous studies and reviews. However, there have been fewer studies evaluating the challenges experienced by patients and their family members when starting KDT. When implementing a new treatment method, challenges arise for both the healthcare professionals and patients, making it important to summarize the initial results and compare them with the experiences of other centers. The aim was to analyze and evaluate the efficacy and safety of KDT in children with epilepsy, as well as to consider the challenges faced by their parents/caregivers. Materials and Methods: A retrospective analysis of patients' data (N = 30) and an analysis of the completed questionnaires of the parents/caregivers (N = 22) were performed. Results: In the study group, 66.7% of the patients had a >50% decrease in seizure frequency, and 2/3 of them had a >90% decrease in seizure frequency or were seizure-free, which enabled reducing the anti-seizure medications in 36.4% of the patients, as well as reducing the hospital visits. Cognitive improvement and better alertness were subjectively reported by 59.1% of the parents/caregivers. No dangerous long-term adverse effects of KDT have been observed in the study group. The patients with generalized epilepsy experienced significantly more adverse events. Most of the adverse effects of KDT were related to the digestive system, but usually they were temporary and controllable. The challenges of the parents/caregivers were mostly related to social life issues and financial difficulties; the medical-related challenges were minimal. Conclusions: KDT is an effective and safe treatment option for children with drug-resistant epilepsy, and the challenges faced by families are resolvable. In order to ensure effective KDT, a multidisciplinary team is required. This would ensure smooth and comprehensive care and the timely resolution of emerging problems. The cooperation of the families undergoing KDT is also important, enabling them to share their experiences.
Introduction
The ketogenic diet is a high-fat, low-carbohydrate, and adequate-protein diet that has been used as a therapeutic option for patients with drug-resistant epilepsy. Numerous studies have shown that ketogenic diet therapy (KDT) can be effective in reducing the seizure frequency and improving the seizure control in children and adults who do not respond well to anti-seizure medications (ASMs), have difficulty tolerating them, or when epilepsy surgery is impossible [1][2][3][4][5]. For some conditions, such as glucose transporter type-1 (GLUT-1) deficiency syndrome and pyruvate dehydrogenase complex (PDHC) deficiency, KDT is the treatment of choice [1,3,6], but KDT can also be effective in certain types of epilepsy, such as Dravet syndrome [7][8][9], Lennox-Gastaut syndrome [3,[10][11][12], tuberous sclerosis complex [3,13,14], infantile spasms, etc. [1,3,15]. Moreover, a ketogenic diet is recommended as a new adjunctive treatment during critical care for the resolution of acute status epilepticus and related disorders, such as new-onset refractory status epilepticus (NORSE) and febrile infection-related epilepsy syndrome (FIRES), when the traditional ASMs and anesthetic agents fail [16][17][18].
The safety of undertaking a ketogenic diet depends on various factors, including the individual's medical history, overall health, and specific dietary needs. The most commonly described complication of introducing KDT is hypoglycemia. Dehydration is more common when KDT is introduced after a few days of fasting. Additionally, gastrointestinal symptoms often occur at the beginning of KDT: vomiting, nausea, diarrhea, and abdominal pain. These side effects are usually short-lived and can be corrected by reviewing the dietary plan. Some serious complications that sometimes occur include kidney stones and pancreatitis [18][19][20].
The implementation of the ketogenic diet can often be a challenge for the family. When applying the ketogenic diet, the parents/caregivers not only need to learn how to accurately calculate the nutrient content of meals and discover new recipes adapted to the diet but also need to review the dietary habits of the entire family. This can create difficulties in the social aspects of family life [21][22][23].
While there are international recommendations for the use of KDT [3,6], introducing a new treatment method poses challenges for both the healthcare professionals and patients, as well as their families. Close collaboration between the patients and the KDT team can help to mitigate these challenges and achieve better outcomes. However, there are significantly more studies evaluating the efficacy and safety of KDT in treating drug-resistant epilepsy but far fewer that describe the challenges faced by both families [23,24] and healthcare professionals.
Since 2019, KDT has been implemented and applied in the Child Neurology Department at the Hospital of the Lithuanian University of Health Sciences Kauno klinikos (Kauno klinikos) for the treatment of children with drug-resistant epilepsy. Until then, the application of KDT in Lithuania was limited to individual cases, primarily initiated by parents/caregivers, without a regular monitoring plan. Additionally, there have been no publications regarding the use of this treatment method in Lithuania. As the non-pharmacological treatment options for drug-resistant epilepsy have been expanded at Kauno klinikos, it is crucial to review the initial results of KDT and compare them with the data published by other centers.
The aim of the study was to analyze and evaluate the efficacy and safety of KDT in children with epilepsy, as well as the challenges faced by their parents/caregivers.
Materials and Methods
The study included pediatric patients with epilepsy who were treated with the ketogenic diet as inpatients/outpatients at the Child Neurology Department of Kauno klinikos from 1 April 2019 to 1 October 2022, and whose parents/caregivers agreed to participate in the study and sign the informed consent form. Before starting KDT, patients are consulted by a pediatric neurologist (evaluation of epilepsy form, seizure type and frequency, detecting indications and contraindications for KDT, neurological examination, and correction of treatment with ASMs) and a dietitian (assessment of nutritional status and feeding pathway, meal preferences, allergies, and calculation of calories and diet ratio), and comprehensive laboratory (complete blood count, blood biochemistry, urine analysis, ASMs blood levels, serum acylcarnitine profile, serum amino acids profile, and urine organic acids profile) and instrumental tests (electroencephalogram, brain magnetic resonance imaging, electrocardiogram, and abdominal and renal ultrasonography) are performed. The classic ketogenic diet was applied to our patients by gradually increasing the ketogenic ratio to 3:1 or 4:1, using specialized formulas as needed. All patients began treatment as inpatients. Some sources of information were provided to parents/caregivers and children: booklets in the Lithuanian language and a video for children about KDT with Lithuanian subtitles (https://www.youtube.com/watch?v=olZCljOSeZ8, accessed on 12 May 2024). Also, a special email address for consultations regarding KDT has been created (ketogenine.dieta@kaunoklinikos.lt), as well as an online forum (Facebook group "Gydanti keto mityba vaikams"). Parents/caregivers and children are trained in calculating meal plans and measuring ketosis and glycemia. Instructional materials and special monitoring charts have been created and provided for home use. All patients reached and maintained therapeutic ketosis (blood ketone level 2-5 mmol/L). According to international recommendations [1,3,25,26], during the follow-up period, regular outpatient visits were conducted: every 3 months during the first year of KDT, and then every 6 months. During these visits, the efficacy and adverse effects of KDT were assessed, blood and urine tests were performed, dietary plans were adjusted as needed, and necessary supplements were prescribed. Therefore, after the first 3 months of KDT, a decision is made whether to continue or discontinue the diet.
Data were collected by using two data sources: (1) objective demographic and clinical information was obtained from all available medical charts, and (2) subjective parents'/caregivers' opinions on the positive or negative effects of KDT and the challenges they had experienced were collected from questionnaires filled out by the study parents/caregivers. The data from both sources were analyzed in terms of three main aspects: (1) efficacy, (2) safety, and (3) challenges.
KDT is considered effective when the seizure frequency is reduced by more than 50%, with efficacy categorized into 3 groups: seizure reduction > 50%, seizure reduction > 90%, and seizure-free. To evaluate the safety of KDT, adverse effects were analyzed: nausea/vomiting, hunger, constipation, drowsiness, mood changes, hypoglycemia, hyperketosis, weight loss, and weight gain. It is important to note that we not only assessed the presence of adverse effects but also asked parents/caregivers to evaluate whether these adverse effects had temporary or long-term impacts. Challenges encountered by families during KDT implementation were assessed in focus groups: accessibility of information about KDT, dietary changes, financial and social aspects, and accessibility of medical services.
Statistical Analysis
The statistical data analysis was performed using the IBM SPSS Statistics 23.0 software package. The distributions of quantitative variables were assessed visually and using the Shapiro-Wilk test. KDT was considered effective when the frequency of seizures decreased by more than 50%. For examining differences in KDT efficacy between genders, epilepsy types, and special formula consumption, the non-parametric chi-squared (χ²) test was used. Adverse effects were encoded into scores, with a score of 2 indicating a long-term impact on parents' satisfaction, a score of 1 indicating a temporary impact, and a score of 0 indicating no observed adverse effects, while the maximum score of 16 would indicate the presence of long-term adverse effects in all areas examined. The sum of scores expressing the adverse effect was calculated for each participant. To compare the duration of KDT and the sum of scores for adverse KDT effects between two groups of KDT efficacy, as well as when assessing the differences in the sum of scores for adverse KDT effects between genders and different epilepsy forms, the non-parametric Mann-Whitney U test was used. To calculate the differences in the sum of scores for adverse KDT effects among different epilepsy etiological types, the non-parametric Kruskal-Wallis test was used. The Spearman's rank correlation coefficient was used to analyze the relationship between the duration of KDT and the quantity of adverse effects. A significance level of 0.05 was used when testing statistical hypotheses.
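For readers who wish to reproduce this style of analysis, a brief sketch of the corresponding tests in Python with scipy.stats is shown below; the arrays are illustrative placeholders, not the study data, and SPSS was the tool actually used.

```python
from scipy import stats

# adverse-effect score sums per patient (placeholder values)
focal = [3, 4, 2, 5, 3, 4]
generalized = [6, 5, 7, 4, 6, 5]

u, p = stats.mannwhitneyu(generalized, focal, alternative="two-sided")
h, p_kw = stats.kruskal([2, 4, 3], [5, 3, 6], [6, 7, 4])   # >2 etiology groups
rho, p_rho = stats.spearmanr([3, 6, 9, 12, 18], [5, 4, 4, 3, 2])  # duration vs. AEs
print(f"Mann-Whitney U={u}, p={p:.3f}; Spearman rho={rho:.2f}, p={p_rho:.3f}")
```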
Results
During the study, the demographic and clinical data of 30 patients were analyzed (Table 1). Data from eight patient questionnaires were not included in the analysis of the parent/caregiver questionnaires due to data unavailability: three due to patient deaths (questionnaires were not sent for ethical reasons) and five due to incomplete parent/caregiver questionnaires. Therefore, twenty-two completed questionnaires from the parents/caregivers were analyzed (Figure 1). In accordance with this, the size of the group described is provided in brackets. Within the study period, 13 (43.3%) continued the KDT treatment. Among the remaining participants, seven (23.3%) discontinued KDT due to lack of efficacy, five (16.7%) discontinued due to intolerance, three (10%) died, and two (6.7%) discontinued due to a lack of motivation (Figure 1). The average duration from the start of the treatment until discontinuation was 9.5 ± 7.9 months.
When the parents/caregivers were asked about the positive effects of KDT, they (N = 22) reported that twelve (54.5%) experienced a reduction in seizure frequency, and four (18.2%) became seizure-free. Furthermore, eight (36.4%) of the patients experienced milder/shorter seizures. The parents also noticed a subjective positive effect of KDT on psychosocial well-being: thirteen (59.1%) of the patients were more alert, nine (40.9%) experienced improved sleep, and eleven (50%) showed developmental improvements. The parents also noted positive effects related to epilepsy treatment: four (18.2%) of the patients were able to reduce the dosage of ASMs, eight (36.4%) were able to discontinue at least one medication, and ten (45.5%) had fewer visits to healthcare facilities. However, in five cases (22.7%), no positive effects were reported by the parents/caregivers.
Safety of KDT in Children with Epilepsy
Table 3 describes the adverse effects of KDT in the study group (N = 22). The average sum of scores for adverse effects (0 indicated no observed burden of adverse events, while the maximum score of 16 would indicate the presence of long-term adverse effects in all the areas examined) was 4.6 ± 2.4 (range 0-10). Comparing the sum of scores for adverse effects between the two effectiveness groups (with or without effect), the average ranks of the sum of scores did not significantly differ between these groups (Mann-Whitney U test, U = 30, z = −1.4, p = 0.176), although a difference was observed (medians of 4 and 6, respectively). The average ranks of the sum of scores for adverse effects did not significantly differ between the boys and girls (Mann-Whitney U test, U = 33.5, z = −1.7, p = 0.089) (medians of 6 and 4, respectively). It was found that the patients with generalized epilepsy experienced significantly more adverse effects as compared to the patients with focal epilepsy (Mann-Whitney U test, U = 19.5, z = −2.1, p = 0.032) (medians of 5.5 and 3.5, respectively). The average ranks of the sum of scores for adverse effects did not significantly differ among the different types of epilepsy (Kruskal-Wallis test, H(2) = 4.1, p = 0.396). The analysis of the influence of KDT duration on the occurrence of adverse effects revealed a negative but non-significant correlation (Spearman's correlation, r = −0.2, p = 0.317).
Challenges of Introducing KDT
In the survey (N = 22), the parents/caregivers of twelve children (54.5%) indicated that KDT was continued, seven (31.8%) had discontinued the diet, and three (13.6%) reported adopting a modified version of the ketogenic diet. Among the parents/caregivers of children who no longer followed KDT, five (22.7%) noted that this was because they did not observe the expected effects, one (4.5%) reported that the child could not tolerate the diet, one (4.5%) mentioned that the child refused to eat the required food and violated the diet, two (9.1%) reported significant adverse effects, and one (4.5%) was offered another, more effective treatment option. The decision to discontinue KDT was unilaterally made by the parents/caregivers of one child, while, in eight cases (27.3%), the decision was made together with the healthcare professionals in charge of the KDT at the Kauno klinikos.
The majority of the parents/caregivers of the participants, 16 (72.7%), learned about KDT from a pediatric neurologist. Some parents, six (27.3%), learned about the diet from the media and/or online portals, four (18.2%) from online forums and/or social media groups, three (13.6%) from other parents, and two (9.1%) from other specialists. Eighteen participants (81.8%) were orally fed before starting the ketogenic diet, three (13.6%) were fed through a gastrostomy, and one (4.5%) had a mixed feeding method.
Figure 2 describes the challenges faced by the parents/caregivers during the initiation of ketogenic diet therapy. The parents/caregivers mostly lacked information on how to calculate the nutrient requirements and caloric content of food, eight (36.4%), how to prepare meals, five (22.7%), and how to handle special situations, such as illness, trauma, anesthesia, or surgery, five (22.7%). In the survey, two (9.1%) indicated that they faced a specific challenge. In addition, eighteen (81.8%) of the parents/caregivers indicated a lack of information regarding the use of dietary supplements, one (4.5%) on how to manage high or low levels of ketones and glucose in the blood, while eight (36.4%) stated that they did not lack any information at all.
Regarding additional information, 21 (95.5%) of the parents/caregivers were seeking it online, and 19 (86.4%) obtained it from specialized forums/groups and during medical consultations. Social media platforms as information sources were mentioned by 17 (77.3%).
When asked about ways in which parents/caregivers contribute to helping themselves and other parents, it was found that almost all of them, 19 (90.9%), responded to questions when personally asked by others. Additionally, half of them, 11 (50%), engaged in personal communication with one or more families of children undergoing KDT. Eight (36.4%) parents/caregivers shared supplements and resources, while five (22.7%) shared their own discovered recipes. They also initiated conversations and asked questions in the Facebook group. Only two (9.1%) parents indicated that they did not participate in these activities.
Discussion
In our study, we observed that KDT was effective in 66.7% of the patients, with exceptionally good results reported in 44.6% of the cases, which is consistent with the existing literature [20,23,[27][28][29][30]. More than half (59%) of the surveyed patients' parents/caregivers also reported a positive response to KDT in terms of epilepsy seizure control, such as a decrease in seizure frequency, milder and shorter seizures, or seizure-free status. This is particularly important as the studied patients had intractable epilepsy and had tried various combinations of ASMs. Furthermore, over half of the parents/caregivers noticed an improvement in cognitive functions, including increased alertness, improved communication, developmental progress, and better sleep [31]. Almost all the participating patients had documented cognitive deficits, which, when accompanied by recurrent, frequent, and prolonged seizures, worsen developmental progress and hinder communication and socialization. Therefore, the improvement in these functions observed by the parents when implementing KDT is crucial for evaluating the treatment effectiveness and the long-term development, family interaction, and socialization of the patients [24].
Additionally, 36% of the parents/caregivers indicated in the surveys that, in addition to the positive effects of KDT, they were able to reduce the dosage of ASMs or discontinue at least one medication. This is significant because reducing the doses and quantity of ASMs reduces the potential side effects and enhances patients' alertness [1,29]. It also has a positive impact on the cost of epilepsy treatment [32].
It is important to mention that 36.4% of the parents/caregivers reported a reduction in visits to healthcare facilities. It can be assumed that, with a lower seizure frequency and fewer prolonged seizures, there is a reduced need to seek emergency care or attend outpatient visits. A lower frequency of visits to healthcare facilities can have a positive impact on the family's social life and reduce the transportation expenses. Fewer visits to healthcare facilities, along with a lower quantity of prescribed ASMs, also reduce the societal costs of patient treatment [33].
The patients for whom KDT was effective continued the diet for a longer duration. In our study, we observed an unexpected significant difference in the efficacy of KDT between the boys and girls, with the girls experiencing greater efficacy. There is no corresponding comparison found in the literature regarding this gender-based difference in efficacy. The reason for the observed association between gender and efficacy might be due to the small study sample. This observation could be further explored in future prospective studies.
A statistically reliable difference in KDT efficacy between generalized and focal epilepsy has not been found, although there is a tendency for KDT to be slightly more effective for generalized epilepsy. To assess this more accurately, a larger sample size would be necessary. There are individual articles in the literature suggesting that KDT is more effective for patients with generalized epilepsy [34], which could be a subject for future research.
The results of our study indicate that the patients in the research group achieved and maintained therapeutic ketosis, which demonstrates that KDT was implemented and followed correctly. Therefore, our presented results can be considered reliable [35]. In order to assess the associations between KDT adherence and achievable outcomes, specialized questionnaires were developed and implemented into practice [36,37].
Our study shows that KDT was relatively safe, with few significant adverse events observed. The most commonly reported adverse effects by the parents/caregivers were related to the digestive system: nausea, vomiting, and constipation. Long-term constipation was reported by nearly one-third of the patients, as also indicated in the literature [19]. This could be associated not only with KDT but also with the fact that children with drug-resistant epilepsy and comorbid movement disability are prone to constipation. Nausea and vomiting were temporary phenomena, which are also reported in the literature as more common at the beginning of KDT due to the physiological changes in the gastrointestinal tract and as an expression of patient resistance to the new diet [19]. More than half of the patients did not indicate hunger or weight fluctuations as an adverse effect. According to the testimonies of two parents, weight gain was a positive effect due to the slow weight gain of the patient prior to KDT. The parents/caregivers noted encountering clinically significant hyperketosis and hypoglycemia. These conditions can be managed by taking additional measures based on hypoglycemia or hyperketonemia algorithms, which are provided to parents by healthcare professionals responsible for KDT implementation.
The adverse effects of KDT reported by patients' parents/caregivers were quantified on a score scale. The average score was 4.6 (from a maximum of 16). Therefore, in our group of patients, the overall burden of adverse events did not exceed one-third of the maximum possible score. Thus, we can conclude that the burden of adverse events in our patient group was relatively low, especially considering that some of these adverse events were temporary and manageable. It was found that the patients with generalized epilepsy experienced significantly more adverse events than those with focal epilepsy. It is possible that the patients with generalized epilepsy had a more severe overall condition, and a larger sample size and longer follow-up would be needed for a more precise assessment.
KDT is a long-term change that requires the efforts and dedication of the entire family, posing additional challenges for them. The parents/caregivers most commonly identified the difficulties and costs of procuring appropriate ketogenic diet products as the largest and longest-lasting challenge. Calculating the food composition, meal preparation, and the regular testing of ketones and glucose at home typically presented temporary challenges that could be overcome with time. As many as 72.7% of the parents/caregivers noted that eating away from home became more difficult after beginning KDT. This can further worsen the social life of families with children with epilepsy. Nearly one-third of the patients' parents/caregivers reported difficulties in family relationships after implementing KDT. Only a small number of parents/caregivers (1 in 22) indicated challenges related to regular visits to healthcare facilities, obtaining specialist consultations, or receiving information in their native (Lithuanian) language. Therefore, the majority of the challenges arise from social and economic issues rather than problems related to the provision and accessibility of medical services.
About half of the participants continued with KDT. The remaining participants discontinued the diet for various reasons: lack of efficacy, intolerance, the child's refusal to eat, or other proposed treatment options. The decision to discontinue KDT unilaterally was made only by 3.3% of the children's parents/guardians. Most of the parents/caregivers discussed the decision to discontinue the treatment with the child's doctor.
According to the survey of the parents/caregivers, a portion of them expressed a lack of information on how to calculate the daily nutrient requirements and caloric intake, prepare suitable meals for the diet, and manage special situations, such as illness, trauma, anesthesia, or surgery. Most of the parents/caregivers sought the missing information online, in specialized forums/groups. It is encouraging that 72.7% of the parents/caregivers obtained the necessary information during doctor consultations. Most of the educational material was still being prepared when KDT was first implemented at the Kauno klinikos, so the parents/caregivers of the initial patients had less information available. It is important to note that the informational resources were prepared in collaboration with the parents/caregivers, discussing and clarifying what information, in what format, was most needed. Of course, it is worth considering the need for more educational materials about KDT and organizing more events dedicated to it. More attention should also be devoted to educational activities involving not only families implementing KDT but also general practitioners and pediatricians providing children's healthcare services.
Almost all the patients' parents/caregivers were collaborative and were inclined to personally answer other parents' questions about KDT, with half of the respondents stating that they personally interacted with one or more families of children undergoing KDT. This highlights the importance of a self-help community that can assist patients' families in navigating the daily challenges [24].
Changes in daily dietary habits can cause tension not only for the patients themselves but also for their family members [23]. Therefore, close collaboration between families and healthcare specialists, easily accessible information, and support in overcoming the difficulties encountered in implementing KDT are of great importance. Based on our surveys of the parents/caregivers and the clinical data obtained by our team, the positive effects of KDT on seizure control and the cognitive development of the child outweigh the challenges that may arise. So far, individual studies have shown a tendency for the efficacy of KDT to outweigh the challenges faced by families [24], but more research and specialized questionnaires are needed to evaluate the impact of KDT on quality of life [38].
To ensure the smooth implementation and application of KDT and proper patient care, a competent multidisciplinary team is necessary. This team should consist of a pediatric neurologist, dietitian, specially trained nurse, and psychologist, who can dedicate sufficient time to consultations and to addressing any arising questions. This would ensure comprehensive and seamless patient care and the timely resolution of any issues that may arise. To further investigate the factors related to the safety and efficacy of KDT, as well as the changes in laboratory blood indicators during its application, conducting a longer-term prospective study would be beneficial. Additionally, collaboration among the families implementing KDT is crucial, not only in working with healthcare professionals but also in fostering communication, sharing experiences, and organizing joint events.
Conclusions
In conclusion, KDT has been evaluated as an effective and safe therapeutic option for children with epilepsy, particularly those who do not respond well to medications. It can help to reduce seizure frequency, improve seizure control, increase parents' satisfaction, and reduce medical costs. However, it requires careful implementation, monitoring, and medical supervision to ensure its efficacy and safety. Each child's individual needs and medical history should be taken into account when considering KDT as a treatment option for epilepsy.
Figure 1. Flow diagram of the study group.
Table 1. Demographic and clinical data of the study group (N = 30).
Table 2. Efficacy of ketogenic diet therapy in the study group.
Table 3. The adverse effects of ketogenic diet therapy for children with epilepsy, as indicated by parents/caregivers (N = 22). | 2024-06-05T15:18:52.549Z | 2024-05-31T00:00:00.000 | {
"year": 2024,
"sha1": "7ac17cc6bba8aed90630ffcb81c5d6e3b6f8be06",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/60/6/919/pdf?version=1717138448",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e05b17be274eea095d1d64d92b6dec3d85701623",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254267572 | pes2o/s2orc | v3-fos-license | Daily precipitation performances of regression-based statistical downscaling models in a basin with mountain and semi-arid climates
The impacts of climate change on current and future water resources are important to study at the local scale. This study aims to investigate the prediction performances of daily precipitation using five regression-based statistical downscaling models (RBSDMs), for the first time, and the ERA-5 reanalysis dataset in the Susurluk Basin with mountain and semi-arid climates for 1979–2018. In addition, comparisons were also performed with an artificial neural network (ANN). Before pursuing this aim, the effects of atmospheric variables, grid resolution, and long-distance grids on precipitation prediction were holistically investigated for the first time. The Kling-Gupta efficiency was modified and used for holistic evaluation of statistical moment parameters in the precipitation prediction comparison. The standard triangular diagram, quite new in the literature, was also modified and used for graphical evaluation. The results of the study revealed that near grids were more effective for precipitation prediction than single or far grids, and that 1.50° × 1.50° resolution showed similar performance to 0.25° × 0.25° resolution. While the polynomial multivariate adaptive regression splines model, which performed slightly better than the ANN, tended to capture the skewness and standard deviation of precipitation and to hit wet/dry occurrence better than the other models, all models predicted the mean precipitation value quite well. Therefore, RBSDMs can be used in different basins instead of black-box models. RBSDMs can also be established for mean precipitation values without dry/wet classification in the basin. A certain success was observed in the models; however, bias correction was shown to be required to capture extreme values in the basin. Supplementary Information: The online version contains supplementary material available at 10.1007/s00477-022-02345-5.
Introduction
The world is warming to dangerous levels due to the increase in the concentration of carbon dioxide and other greenhouse gases (Salman et al. 2018). Despite the decrease in global carbon dioxide emissions during the worldwide lockdowns after the onset of the COVID-19 pandemic in March 2020, this decrease was compensated for in the last half of 2020 (Tollefson 2021). This situation indicates that warming will continue without significant reduction. Liu and Raftery (2021) have shown that if the current trend continues, the Paris Agreement target of keeping warming below a 2°C increase relative to pre-industrial levels has only a 5% probability of being met. If all countries meet the conditions specified in the agreement, this probability increases to 26%. In other words, reducing the gases released into the atmosphere within the framework of the determined commitments will not significantly reduce the impact of climate change.
Climate change, which is the change in the mean and variability of the climate over more than decades, is one of the most severe threats to the environment and humanity in this century. It is generally accepted as the reason for the increase in the frequency, intensity, and duration of extreme events such as floods, droughts, forest fires, and heatwaves in various parts of the world (Ahmed et al. 2018; Shiru et al. 2019). Therefore, climate change will impact local and global precipitation and hydrological regimes (IPCC 2013; Rashid et al. 2015). The Mediterranean region, including Turkey, is among the areas that will be most affected by climate change (Diffenbaugh and Giorgi 2012; Keupp et al. 2019; IPCC 2021). It is necessary to develop effective policies for analyzing and understanding current and possible future climate changes and for adaptation to climate change (Noor et al. 2020).
The global climate models (GCMs) allow the study of changes using physically based equations that simulate the effect of greenhouse gas concentrations on the atmosphere and various ocean processes in terms of means and variations (IPCC 2013, 2021). So, the GCMs are essential tools for observing large-scale climate features and climate change under possible future scenarios. However, the GCMs are insufficient due to their coarse resolutions. The climate and hydrology components that directly or indirectly support water resources management at the local scale require high resolution (Tavakol-Davani et al. 2013; Rudd and Kay 2016). Downscaling methods are used to overcome this obstacle, and act as a bridge between the GCMs and local climate-hydrology components. Downscaling methods are generally divided into statistical and dynamic downscaling. However, as dynamic downscaling methods require high computing power, careful design, and some specialized knowledge, statistical downscaling (SD) methods are mainly used in hydrological studies thanks to their cheaper and easier application (Fowler et al. 2007; Chen et al. 2010; Chen et al. 2014; Ekstrom et al. 2015). The SD methods are broadly divided into three parts: the transfer function (perfect prognosis), weather generator, and weather pattern approaches (Chen et al. 2010; Maraun et al. 2010; Tavakol-Davani et al. 2013; Chen et al. 2014; Hou et al. 2017). The transfer function method is frequently used because it is easy to apply anywhere and/or anytime. It establishes linear or nonlinear relationships between local climatic components and large-scale GCMs (Wilby 1998; Wilby et al. 2004; Hessami et al. 2008; Maraun et al. 2010).
Reanalysis datasets are used to establish the transfer functions of downscaling methods that bring GCMs to a regional scale. Reanalysis datasets consist of a combination of atmospheric data obtained from different sources. Although many reanalysis datasets, e.g., JRA, MERRA, ERA-40, CERA, ERA-Interim (0.75° × 0.75°), and NCEP/NCAR (2.5° × 2.5°), have generally been used to predict local atmospheric variables in SD studies (e.g., Hessami et al. 2008; Chen et al. 2010; Liu et al. 2019; Jafarzadeh et al. 2021; Quesada-Chacon et al. 2021), the ERA-5, the successor of ERA-Interim, is a high-resolution reanalysis dataset released in 2019 (Hersbach et al. 2020). However, it has not so far been used in SD studies for Turkey's basins.
Daily precipitation data are vital for assessing the impact of climate change on small and medium-sized basins in many hydrological models (Frost et al. 2011; Beecham et al. 2014). Daily precipitation is difficult to predict because it contains high spatial and temporal randomness (Beecham et al. 2014; Rashid et al. 2015; Liu et al. 2019). Different SD methods have been used for daily precipitation prediction. For example, Hessami et al. (2008) used ridge regression and a statistical downscaling model for daily precipitation prediction and could not determine a superior method. Tavakol-Davani et al. (2013) built hybrid models for precipitation occurrence and daily precipitation amount using the multiple linear regression (MLR), multivariate model tree (MT), and multivariate adaptive regression splines (MARS) methods. Apart from the hybrid models, the other methods did not show remarkable differences in daily precipitation prediction success compared to the MLR method. Nasseri et al. (2013) made precipitation predictions by hybridizing the MLR, MT, MARS, k-nearest neighbor, and genetic algorithm-optimized support vector machine models. The study results revealed that precipitation occurrence and amount could be successfully modeled using the hybrid models of the MLR, MT, and MARS methods. The least absolute shrinkage and selection operator (LASSO), which has penalty parameters and, in contrast to ridge regression, discards predictors with low coefficients, is considered an alternative for precipitation prediction. Although the LASSO method was not superior to stepwise regression in the study by Gao et al. (2014), it was found to perform better than principal component regression in the study by He et al. (2019).
Besides, the choice of predictors is also essential for capturing heavy precipitation (Keupp et al. 2019). The Susurluk Basin in Turkey, which has mountain and semi-arid Mediterranean climates, was chosen as the study area because the interplay of dynamic and thermodynamic processes gives rise to the Mediterranean precipitation reported by Keupp et al. (2019). The importance of variable selection for the SD models is even more significant in the Susurluk Basin. While establishing the SD methods, long-distance grids can also affect the local climate in different periods (Wilby and Wigley 2000; Crawford et al. 2007; Borges et al. 2017). Also, resolution differences between the GCMs and the predictor set can be a source of additional uncertainty (Amjad et al. 2020). However, no study has examined the most appropriate predictors, resolution, and long-distance grid situations in a holistic way on a daily precipitation scale. Likewise, no study comparing MLR, elastic net regression (ENET, which combines the ridge regression and LASSO methods), and MARS as daily precipitation downscaling models has been found in the literature. Moreover, the polynomial MARS (PolyMARS) method, modified from the MARS method, has not been used as a daily precipitation prediction model, and the SD models have not been sufficiently investigated with the ERA-5 reanalysis for daily precipitation prediction.
The study aims to draw a directive holistic analysis framework that selects the most appropriate predictors, grid resolution, and long-distance grid states for the first time. Because regression-based models, with their ease of use and high interpretability, provide advantages over black-box models, the study also evaluates daily precipitation SD performances using the MLR, exponential regression (EREG), ENET, MARS, and PolyMARS models with ERA-5 reanalysis data for 1979-2018. Besides, the regression models were compared with a black-box model, namely an artificial neural network (ANN).
The study consists of six sections: the study area and datasets in the second section, the methodology in the third, the results in the fourth, the discussion in the fifth, and the conclusions in the last section.
The study area and datasets
The Susurluk Basin is located between latitudes 39° and 40° and longitudes 27°-30° in north-west Turkey, and covers an area of approximately 24,000 km² (Fig. 1a). The altitude of the basin varies between the Marmara Sea and the Uludag Mountain (~2543 m). In Turkey, there are 14 Ramsar sites of international significance, two of which, the Ulubat and Manyas lakes, are located in the Susurluk Basin (Ramsar 2021). The basin includes important stream watersheds such as Kocaçay, Mustafakemalpaşa, Nilüfer, and Simav. The basin is influenced by the Mediterranean climate, with hot summers and wet winters. However, the Nilüfer Stream Watershed, where the Uludag Mountain is located, has a mountain climate (Peel et al. 2007; Ozturk et al. 2017). The basin has a mean annual temperature of 12°C and receives a mean annual precipitation of 640 mm. There are approximately 65 dam lakes of various sizes in the basin, either completed or under construction, serving drinking water, irrigation, and energy needs as well as flood protection (DSI 2020). Water demand is also supplied from many aquifers in the basin (Akbas et al. 2020). Turkey's most intense mining activities occur in the Balıkesir and Bursa provinces, which cover a large part of the basin (MTA 2021). It is one of the most densely populated basins in Turkey (Akbas et al. 2020). The mentioned activities and reasons put the basin under the pressure of high water consumption, and reveal the basin's importance. In the basin, flood frequency is high; fortunately, mortality rates are low (SYGM 2018; Haltas et al. 2021).
With daily total precipitation data from 1979 to 2018, nine meteorological stations were selected to represent the basin (Fig. 1a). The precipitation data were obtained from the Turkish State Meteorological Service (TSMS 2021), and monthly total precipitation changes were calculated (Fig. 1b). For the SD model setup, the ERA-5 reanalysis hourly dataset was obtained from European Centre for Medium-Range Weather Forecasts Re-analysis 5 (Hersbach et al. 2020).
Methodology
The grid resolution, long-distance grid situations, and predictor selection were determined in the ERA-5 dataset before the SD models were established for daily prediction. After applying and comparing the five regression-based SD models and the ANN, bias correction procedures were performed (Fig. 2).
Predictor, resolution, and grid condition selection methodology
Before the SD model setup, predictor selection is an essential step for model accuracy because using physical and logical components provides meaningful connections between predictors and predictand (Wilby et al. 2002; Liu et al. 2013; Borges et al. 2017). Although different methods are used for predictor selection, correlation methods, e.g., Pearson and Spearman, are generally preferred for their simplicity of use. Their advantages, including those of different regression methods, have recently been investigated comparatively (Yang et al. 2018; Liu et al. 2019; Jafarzadeh et al. 2021). The stepwise regression method, which behaves as if it evaluates all possible models, is generally found successful; however, it does not assess the suitability of all possible parameter combinations. Instead of this method, if the number of predictors is 15 or fewer, the all-possible-regressions (APR) method can give more accurate results (Burnham et al. 2011; NCSS 2021). In the study by Okkan and Kirdemir (2016), the APR method was used for predictor selection, and a single predictor was determined. However, the stationarity assumption may not be relaxed in such an evaluation, since no humidity parameter is included in the predictors (Crane and Hewitson 1998; Wilby et al. 1998; Hessami et al. 2008). Mediterranean precipitation results from the interaction of dynamic and thermodynamic processes (Keupp et al. 2019). Therefore, common reanalysis predictors were determined from the studies by Chen et al. (2010), Nasseri et al. (2013), Beecham et al. (2014), Bettolli and Penalba (2018), Yang et al. (2018), Keupp et al. (2019), and Jafarzadeh et al. (2021) that achieved successful results in predicting daily and heavy precipitation (Table 1).
Due to longer computation times and overfitting, it is not recommended to apply downscaling methods with many predictors (Mujumdar and Nagesh Kumar 2012; Das and Nanduri 2018). Different methods are used to select the predictors from the reanalysis dataset, but Spearman correlation is frequently used (Chen et al. 2010; Lin et al. 2017).
Fig. 1: (a) Digital elevation model of the Susurluk Basin; (b) monthly total precipitation changes at the meteorology stations from S1 to S9.
Spearman correlation is less affected by extreme values than Pearson correlation; it does not rely on distributional assumptions and can capture nonlinear relationships (Krause et al. 2005; Chen et al. 2010; Sen 2020a). Using Spearman correlation at a significance level of 1%, the predictors were chosen from the grid values of the grid in which the related meteorology station is located. In addition to predictor selection, using more than one grid is vital, as remote grids may affect the local climate at different times (Wilby and Wigley 2000; Crawford et al. 2007; Borges et al. 2017). Also, the resolution difference between the predictor set and the GCMs may be an additional cause of uncertainty (Amjad et al. 2020). There are studies evaluating grid resolution, predictor selection, and the long-distance grid effect separately, e.g., Borges et al. (2017), Yang et al. (2018), Amjad et al. (2020), and Jafarzadeh et al. (2021). However, no study performs a holistic evaluation of them together in daily precipitation prediction. The GCM and reference gridded data are usually overlapped at grid centers (Sarhadi et al. 2016; Salman et al. 2018; Khan et al. 2020). While the original resolution of the ERA-5 set used in this study is 0.25° × 0.25°, the GCMs are ~1.00°-3.00°. However, overlapping the GCM and reanalysis grid centers after converting the GCMs to a finer resolution may cause extra uncertainty in the SD models (Amjad et al. 2020). Therefore, reanalysis data with three additional resolutions, i.e., 1.00° × 1.00°, 1.50° × 1.50°, and 2.00° × 2.00°, were obtained by considering the mean and standard deviation of the resolutions of the GCMs in the Coupled Model Intercomparison Project sixth phase (CMIP6), in addition to the ERA-5 original resolution. Since distant grids can affect the local climate at different times (Amjad et al. 2020), the mean of the four grids with centers closest to the station (C2) and the grid mean of the basin (C3) were also taken into account, in addition to the single grid in which the station is located (C1) (Fig. S1). In the studies by Chen et al. (2014), Borges et al. (2017), Lin et al. (2017), and Saengsawang et al. (2017), the one-day lag values (lag-1), in addition to the current-day values (lag-0) of the predictors, slightly improved the performance of the models. Thus, the lag-1 values of the predictors were also examined with Spearman correlation. Then, the most efficient first 12 predictors were selected by taking the absolute values of the Spearman correlation. That is, after the first predictor selection was performed at the ERA-5 original grid in which the stations are located, Spearman correlation analysis was performed with four different resolutions (0.25° × 0.25°, 1.00° × 1.00°, 1.50° × 1.50°, and 2.00° × 2.00°) (Fig. S2), three different grid conditions (C1, C2, and C3), and the lag-0 and lag-1 values of the predictors. Then, 12 predictors were selected for each station; a minimal sketch of this screening step is given below. The APR method was then used to select the grid condition, grid resolution, and predictors for the best case.
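To make the screening step concrete, the following is a minimal sketch of the |Spearman|-based ranking described above. It is an assumed implementation, not the authors' code, and the names `era5_predictors` and `station_precip` are hypothetical placeholders.

```python
# Hypothetical sketch of the Spearman-based predictor screening described above.
import numpy as np
from scipy.stats import spearmanr

def screen_predictors(era5_predictors, station_precip, alpha=0.01, keep=12):
    """Rank candidate predictors (lag-0 and lag-1) by |Spearman rho|.

    era5_predictors : dict mapping predictor name -> 1-D daily series
    station_precip  : 1-D daily precipitation series at the station
    """
    scores = {}
    for name, series in era5_predictors.items():
        for lag in (0, 1):
            shifted = np.roll(series, lag)[lag:]          # lag-1 pairs yesterday's value with today's rain
            rho, p = spearmanr(shifted, station_precip[lag:])
            if p < alpha:                                 # 1% significance level, as in the text
                scores[f"{name}_lag{lag}"] = abs(rho)
    # keep the 12 predictors with the largest absolute correlation
    return sorted(scores, key=scores.get, reverse=True)[:keep]
```

Ranking by the absolute correlation keeps both positively and negatively associated predictors, which matches the selection of the 12 strongest predictors per station.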
The APR method tests combinations of all possible regressions for each parameter; the total number of all possible regression models is 2^k − 1 for k candidate predictors. Selection can be done with Mallows' Cp, which has been used with success in previous studies to select the best model (Fistikoglu and Okkan 2011; Okkan and Kirdemir 2016). Mallows' Cp is used to compare models with different numbers of parameters (Mallows 1973; Fistikoglu and Okkan 2011), and is calculated as

Cp = SR²_i / s²_F − (n − 2(i + 1)),

where n is the number of data points, i is the number of model parameters, SR²_i is the sum of residual squares of the model with i parameters, and s²_F = SR²_F / (n − k − 1) is the residual mean square of the full model, SR²_F being the sum of residual squares of the whole model. The smaller the value of Cp, and the closer it is to i + 1 (if possible), the better the model fit (Pardoe 2013; STAT462 2021).
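The following is a minimal sketch of the APR search with Mallows' Cp under the standard formulation above; it is an illustrative implementation, not the authors' code, and `X` and `y` are assumed to be the screened predictor matrix and the station precipitation series.

```python
# Minimal all-possible-regressions sketch with Mallows' Cp (assumed implementation).
from itertools import combinations
import numpy as np

def mallows_cp_search(X, y):
    n, k = X.shape
    def sse(cols):
        A = np.column_stack([np.ones(n), X[:, cols]])     # intercept + chosen predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(np.sum((y - A @ beta) ** 2))
    s2_full = sse(tuple(range(k))) / (n - k - 1)          # residual mean square of full model
    best_cp = {}
    for i in range(1, k + 1):                             # i = number of predictors in the sub-model
        best_cp[i] = min(sse(c) / s2_full - (n - 2 * (i + 1))
                         for c in combinations(range(k), i))
    return best_cp                                        # prefer small Cp close to i + 1
```

With the 12 screened predictors this enumerates 2^12 − 1 = 4095 sub-models, which is computationally trivial.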
Statistical downscaling methods
Regression-based methods are used in SD studies for their ease of use and interpretability (Wilby 1998; Wilby et al. 2004; Hessami et al. 2008; Chen et al. 2010; Tavakol-Davani et al. 2013; Nacar et al. 2022). In this study, after the grid condition, resolution, and predictor selection were performed, regression-based methods using the relatively new ERA-5 reanalysis dataset were compared with a black-box model by establishing SD models. Linear regression (MLR); EREG, a nonlinear form of MLR; ENET, which removes unnecessary variables from MLR; MARS, a nonlinear form of ENET; and PolyMARS, an improved version of MARS, were chosen as the regression-based methods. ANN was chosen as the black-box model. Details about the methods mentioned above are given in the supplementary file. In the SD studies by Maraun et al. (2010) and Hertig and Jacobeit (2013), it is assumed that the relationship between predictors and predictand will not change under changing climate conditions. It is also thought that 40 years of data may represent the actual climatic conditions for the area in question, including less frequent climatic events (Khan et al. 2006). Again, some studies (Khan et al. 2006; Huang et al. 2011; Nasseri et al. 2013; Tavakol-Davani et al. 2013) have stated that a long training period increases the performance of the models and their potential to catch rare climatic events. Wilby (1998) also stated that the SD model
Bias correction
The GCMs and reanalysis data contain biases due to factors such as the imperfect representation of climatic physical processes, parameter optimization, and ocean-atmosphere feedbacks, and thus bias correction is required (Troin et al. 2015; Sippel et al. 2016; Nahar et al. 2017; Amjad et al. 2020; Nguyen et al. 2020). The evaluation may not be helpful for future scenarios if bias correction is not applied (Johnson and Sharma 2015). Considering the study by Amjad et al. (2020), whose study area covers this study's area, bias correction was performed since the ERA-5 dataset was determined to contain high bias. The simplest approach is to apply standardization to the predictors and predictand before the model is set up; it corrects possible biases in the mean and variance (Wilby et al. 2004). Predictors and predictand were standardized with the long-term mean and standard deviation as

x̂ = (x − μ) / σ,

where x is the series value, μ is the long-term mean of x, σ is the standard deviation of x, and x̂ is the standardized x. In addition, another bias correction method is quantile mapping (QM), which is frequently used for post-processing in the literature (Ashfaq et al. 2010; Rashid et al. 2015). One of the essential features of this method is that it can correct the number of wet days, because the GCMs simulate too many wet days (Gutowski et al. 2003; Maraun 2016). Cumulative distributions of the observed and predicted values are used in QM, given in its simplest form as

P̃ = cdf_o⁻¹(cdf_sim(P)),

where P is a raw model output, P̃ is the corrected model output, cdf_sim is the cumulative density function of the model output, and cdf_o⁻¹ is the inverse cumulative density function of the observed values. This study used empirical QM, which does not require a parametric distribution assumption. In empirical QM, empirical cumulative density functions are estimated using empirical percentile tables. The studies by Boe et al. (2007) and Themessl et al. (2012) can be examined for more details.
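The following sketch shows one way the empirical QM step could be implemented with percentile lookup tables; it is an assumed implementation rather than the authors' exact procedure.

```python
# Minimal empirical quantile-mapping sketch (assumed implementation):
# map each raw model value through the model CDF and the inverse observed CDF,
# both estimated from empirical percentiles of the training period.
import numpy as np

def empirical_qm(model_train, obs_train, model_raw, n_quantiles=100):
    probs = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_train, probs)    # empirical cdf_sim as a lookup table
    obs_q = np.quantile(obs_train, probs)        # empirical cdf_o^-1 as a lookup table
    p = np.interp(model_raw, model_q, probs)     # cdf_sim(P): non-exceedance probability of raw value
    return np.interp(p, probs, obs_q)            # cdf_o^-1(p): observed value at that probability
```

Applying the mapping fitted on the training period to the testing period, as done in the study, keeps the correction parameters fixed between the two periods.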
Model evaluation
To evaluate the predictor, resolution, and grid condition selection on a monthly scale, as well as the model selection, the Nash-Sutcliffe efficiency (NSE), the most frequently used model performance criterion in hydrology (Lamontagne et al. 2020), is calculated as

NSE = 1 − Σ(x_i − y_i)² / Σ(x_i − x̄)²,

where y_i is the model result, x_i is the observation value, and x̄ is the arithmetic mean of the observation values.
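For reference, the NSE formula above translates directly into a few lines of code (a sketch; the array names are illustrative):

```python
# Nash-Sutcliffe efficiency: x is observed, y is modeled.
import numpy as np

def nse(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 1.0 - np.sum((x - y) ** 2) / np.sum((x - x.mean()) ** 2)
```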
Model performances for daily precipitation prediction are generally evaluated separately in terms of basic statistical moments such as the mean, standard deviation (Std), and skewness (Hessami et al. 2008; Chen et al. 2010; Liu et al. 2013; Rashid et al. 2015). Although some studies (e.g., Tavakol-Davani et al. 2013) evaluated these parameters holistically, they mostly did so by increasing the weight given to the mean. However, extreme conditions and precipitation variability are as essential as the mean values for hydrological models (Liu et al. 2013; Pour et al. 2014; Rashid et al. 2015). Thus, the Kling-Gupta efficiency (KGE) proposed by Gupta et al. (2009), frequently used in hydrological modeling (Lamontagne et al. 2020), was modified to evaluate the statistical moments of precipitation holistically. Because skewness conveys information about rare extreme deviations from the mean (Chen et al. 2010), the skewness parameter was substituted for the correlation parameter, and the KGE formulation evolved into

KGES = 1 − sqrt[ (SK_m/SK_o − 1)² + (μ_m/μ_o − 1)² + (σ_m/σ_o − 1)² ],   (5)

where KGES is the KGE including the skewness parameter, and SK_o (SK_m), μ_o (μ_m), and σ_o (σ_m) are the skewness, arithmetic mean, and standard deviation of the observed (modeled) data, respectively. In Eq. (5), the closer the KGES value, with range (−∞, 1], is to 1, the higher the model performance. It should be noted that the KGES formulation was created by utilizing the Euclidean distance feature of the KGE, and it is open to the modeler's judgment for different purposes. The standard triangular diagram (STD), proposed by Sen (2020b), was modified and employed to support the holistic numerical assessment. Standard deviation (σ), mean (μ), and serial correlation (ρ) values were used in the original version of this method. The calculation steps for the STD graph are as follows (a minimal code sketch of KGES and these ratios is given after the step list):

Step I calculates the σ, μ, and ρ statistics for the observation and prediction series.
Step II rates the observation and prediction statistics against each other. The ratio values should be calculated so that they lie between 0 and 1, so the ratio does not have to be observation/prediction or vice versa (0 < μ_r = μ₁/μ₂ < 1; 0 < σ_r = σ₁/σ₂ < 1; 0 < ρ_r = ρ₁/ρ₂ < 1).
Step III sums the ratios: S = μ_r + σ_r + ρ_r.

Step IV divides each ratio by the sum (S) calculated in Step III and multiplies it by 100 (e.g., μ% = 100 μ_r / S).

Step V places the proportions found in Step IV on the triangular diagram given in Fig. 3.
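A minimal sketch of KGES and the STD ratio steps above (an assumed implementation; lag-1 autocorrelation is used to stand in for the serial correlation ρ):

```python
# KGES and STD ratios (sketch); scipy is assumed for the skewness statistic.
import numpy as np
from scipy.stats import skew

def kges(obs, mod):
    terms = [(skew(mod) / skew(obs) - 1) ** 2,
             (np.mean(mod) / np.mean(obs) - 1) ** 2,
             (np.std(mod) / np.std(obs) - 1) ** 2]
    return 1.0 - np.sqrt(sum(terms))               # best value is 1, range (-inf, 1]

def std_ratios(obs, mod):
    def lag1(z):                                   # serial correlation rho
        return np.corrcoef(z[:-1], z[1:])[0, 1]
    pairs = [(np.mean(obs), np.mean(mod)),
             (np.std(obs), np.std(mod)),
             (lag1(obs), lag1(mod))]
    # smaller/larger keeps each ratio in (0, 1], assuming same-sign statistics
    ratios = [min(a, b) / max(a, b) for a, b in pairs]
    s = sum(ratios)                                # Step III
    return [100.0 * r / s for r in ratios]         # Steps IV-V: percentages for the diagram
```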
The diagram is divided into four equal sub-triangles; the fourth region, in the middle, means that no parameter dominates the others, apart from insignificant differences (Fig. 3). If a point falls into the fourth region, the parameter ratios are similar. The closer a point is to the reference point (R, 33.33% for each parameter), the more similar the parameter ratios are. If a point falls in the first region, the predicted values are highly correlated with the observed values, but the other two parameters are less harmonious, with significant differences relative to the correlation. Similar comments are also valid for the second and third regions (Sen 2020b).
In addition, a contingency table (Table 2) was used to evaluate the performance of each model. This table has four internal partitions calculated as follows: misses, the number of observed wet days modeled as dry days; false alarms, the number of observed dry days modeled as wet days; hits, the number of correctly identified wet events; and true negatives, the number of correctly identified dry events. The critical success index (CSI) was applied according to Table 2 as

CSI = hits / (hits + misses + false alarms),

where CSI takes the value one in the best case and zero in the worst case (Jafarzadeh et al. 2021).
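The contingency counts and the CSI can be computed as follows (a sketch, using the 1 mm wet-day threshold defined in the results):

```python
# Contingency-table counts and CSI for wet/dry occurrence.
import numpy as np

def csi(obs, mod, wet_threshold=1.0):
    obs_wet = np.asarray(obs) >= wet_threshold
    mod_wet = np.asarray(mod) >= wet_threshold
    hits = np.sum(obs_wet & mod_wet)           # observed wet, modeled wet
    misses = np.sum(obs_wet & ~mod_wet)        # observed wet, modeled dry
    false_alarms = np.sum(~obs_wet & mod_wet)  # observed dry, modeled wet
    return hits / (hits + misses + false_alarms)
```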
Predictor, resolution, and grid condition selection results
Primarily, the relationship between the total precipitation values and the predictors in the ERA-5 set was investigated at a 1% significance level using the Spearman correlation method for the period 1979-2018, and non-significant parameters were excluded from the scope of the study (Table 3). Considering the Spearman correlation results, the predictors u, v, and slp were excluded from the study, since they did not, in general, show a meaningful relationship for all stations. The predictors with the highest absolute correlation with the daily total precipitation values were tp (~0.56), r_850 (~0.56), and z_700 (~0.50). Then, considering the lag-0 and lag-1 values of the predictors for three different grid conditions (Fig. S1) and four different resolutions, i.e., 0.25° × 0.25°, 1.00° × 1.00°, 1.50° × 1.50°, and 2.00° × 2.00° (Fig. S2), 12 predictors were selected by employing Spearman correlation. The situation mentioned above was examined for the Susurluk Basin with its different climate characteristics, and predictor selection was made together with these conditions using the APR method. As a representative example, the APR results for the first station are given in Table 4. In the APR method, daily values were used for Cp. Since daily precipitation contains high randomness, the NSE criterion, a combined measure of correlation, bias, and variability (Gupta et al. 2009), was also used as support over monthly total precipitation. In Table 4, Cp increases after a rapid decrease up to n = 8. If possible, it is recommended that Cp ≤ n + 1 be preferred (STAT462 2021). In the absence of this situation, the smallest value of Cp is evaluated as the most suitable model (Pardoe 2013; STAT462 2021). Therefore, nine variables were chosen for the first station. The predictor frequencies determined for precipitation prediction in the basin are given in Fig. 4. As seen in Fig. 4, the most frequently used predictors for the downscaling model in the basin are tp, z_500, r_700, and z_700, respectively. After predictor selection was made with the APR method, the grid condition and resolution selections were performed comparatively using monthly NSE values (Fig. 5).
In Fig. 5, C1 generally showed the worst performance for all situations at all stations. C2 and C3 had similar performances for all grid resolutions. Although C3 mostly performed well compared to C1, it did not perform as stably and well as C2 at all grid resolutions. A horizontal red dashed line corresponding to an NSE value of 0.75 (very good performance) was drawn for easier decision-making. Except for the third (NSE = 0.73) and sixth (NSE = 0.61) stations, which have poor performances for all grid resolutions and conditions, it was observed that C2 did not cross below the horizontal red dashed line at 1.50° × 1.50° resolution. For this reason, C2 and the 1.50° × 1.50° grid resolution were chosen, as the mean of the GCM resolutions in CMIP6 is about 1.50° × 1.50°. So, it can be said that long-distance grids are effective in predicting precipitation (Wilby and Wigley 2000; Crawford et al. 2007; Borges et al. 2017).
Precipitation occurrence results
Precipitation occurrence was modeled together with precipitation amount, without any dry/wet classification. When classifying days as wet or dry, precipitation of ≥ 1 mm was considered the wet class, and otherwise the dry class (Frost et al. 2011). The holistic evaluation results of the monthly mean wet days of the stations in the basin are shown graphically in Fig. 6. In addition, a heat map of the monthly mean wet days by altitude is given in Fig. 7 for the testing (2009-2018) period. The reason for examining the statistics of daily values within a month was to show the behavior of daily precipitation during a month; in other words, the models' ability to capture seasonal behavior was examined.
The EREG model provided the worst prediction of the wet-day mean in the basin during the training and testing periods, as shown in Fig. 6. The EREG and ENET models did not improve prediction performance compared to the MLR model, and even predicted precipitation occurrence worse. The PolyMARS, ANN, and MARS models provided the best models for both periods, with negligible differences, respectively. The PolyMARS model predicted with an error of about 17% for March, its worst month, considering the testing period. Compared to the study by Chen et al. (2010), in which wet/dry seasons and wet/dry days were classified, this error value was at a better prediction level. As the altitude increases (Fig. 7), the mean number of wet days in the months also increases, although the S6 station, at an altitude of 833 m, deviates from this pattern. Although MARS, PolyMARS, and ANN tend to catch the observation values despite the scale width caused by the altitude differences, the PolyMARS model becomes more prominent at mid-altitudes for the testing period (Fig. 7). However, although there is a pattern between the observed values and the modeled results, it is seen from Fig. 6 that there are biases, and therefore bias correction is applied in the next step.
According to Table 5, the EREG model showed the worst performance among the models, with the lowest CSI values. While EREG had the highest average WW value, the PolyMARS model had the highest average DD value. That is, PolyMARS correctly captured dry days, while EREG captured wet days.
Fig. 9: Standard triangular diagram of the quantile mapping results per model and station from S1 to S9.
Although the CSI values of PolyMARS and ANN were equal and higher during the training period, PolyMARS showed significantly higher performance during the testing period. So, PolyMARS performed better than the other models in modeling precipitation occurrence.
Precipitation statistics results
Daily precipitation statistics are essential for the hydrology of current and future periods. The statistics of six different SD model results are given in Table 6.
All models gave results close to the observation values, with no significant difference in the mean parameter; the MARS and PolyMARS models gave results closest to the observation values for the mean. All models underestimated the Std for the training and testing periods. However, the MARS, PolyMARS, and ANN models gave prediction values closer to the observed Std for both periods. While all models underestimated skewness, which is associated with the generation of rare extreme values, for both periods, the MARS and PolyMARS models came closest to the observed skewness. Except for the first station, precipitation parameters at all stations were successfully predicted, since the KGES values were greater than zero (Table 6). Although the MARS model had higher performance according to the KGES values, the PolyMARS model had higher performance according to the NSE values, especially during the testing period (Table 6). Besides, in the training and testing sets, the mean relative error of the monthly means of daily precipitation is 8% and 11%, respectively (Table S1). While the ANN gave the smallest mean relative error of the monthly means of daily precipitation in the training period, PolyMARS showed high performance with the smallest in the testing period. So, when the precipitation statistics in the basin are evaluated monthly or over the whole time series, all models most likely catch the observation values in predicting the mean precipitation. In other words, someone who needs to examine only mean values can use these models under the specified conditions. However, it should also be considered that the results contain bias in the Std and skewness values (Figs. 6 and 7 and Table 6).
Although the MARS, PolyMARS, and ANN models were generally slightly better than the other models in both periods for the Std and skewness values, these models could not catch the observation values exactly (Figs. 6 and 7). All models failed to fully detect the observed Std and skewness values (Fig. 6), but PolyMARS tended to capture high skewness values in summer and autumn at low altitudes during the testing period (Fig. 7). According to Figs. 6 and 7, although there is a certain pattern, there is bias in the Std and skewness values. Therefore, bias correction should be performed for the daily downscaled precipitation. For the testing period, the observed Std and skewness were high due to the extreme precipitation (~250 mm/day) around the Marmara Sea on 7-12 September 2009 (Komuscu et al. 2013; Komuscu and Celik 2013). The PolyMARS model tended to capture these extreme monthly precipitation values during both the training and testing periods, slightly differently from the other models (Fig. 6). However, the MARS model tends to catch these values in terms of the whole time series in both periods (Table 4). For 9 September 2009, the day with the heaviest precipitation, the mean absolute errors of the EREG and PolyMARS models were smaller than those of the other models, but PolyMARS tended to capture the 248 mm of precipitation at station S1 better than EREG. The general distribution of the absolute errors (Fig. S3) reflects the high atmospheric instability of the precipitation observed during this period and the short-term precipitation intensity caused by the differential warming between the land and sea surfaces (Komuscu et al. 2013; Komuscu and Celik 2013). In other words, model performance is lower at the first and second stations compared to the rest, as the precipitation between these dates contains very high randomness. It can be said that the MARS, PolyMARS, and ANN models, which consider nonlinear relationships, are somewhat better than the other models because they tend to catch extreme rainfall during the training period. The MARS and PolyMARS models also tend to catch up with these extremes to some extent. The models fail partly because precipitation is a random process (Beecham et al. 2014; Rashid et al. 2015) and because of the dry days in the daily precipitation series, especially during the summer months. In daily precipitation prediction, SD methods may model wet days as dry days and vice versa (Beecham et al. 2014). It is seen from Fig. 6 that the ERA-5 contains a high bias in the Std and skewness, and has a drizzle effect, i.e., the tendency of climate models to simulate very few dry days (Gutowski et al. 2003).
Bias correction results for the downscaled precipitation
As a bias correction method, predictor and predictand values are generally standardized. Although this method was used in this study, it was observed to be insufficient (Fig. 6). The insufficiency of this method was also revealed in the study by Rashid et al. (2015). Therefore, a second bias correction was considered appropriate in this study, and the QM method was applied. After the method was applied to the training set on a station basis, it was also applied to the testing set with the same parameters (Figs. S4 and S5). As an example, quantile-quantile plots of the PolyMARS precipitation model at the first station, which has the highest skewness parameter and a semi-arid Mediterranean climate, and at the last station, which has a mountain climate, are given in Fig. 8. The QM method gives successful results up to 50 mm/day for the training and testing periods at the first and last stations, which have different climatic characteristics (Fig. 8). The QM method also makes more successful corrections in the training period than in the testing period, but it over/under-corrects heavy precipitation in both periods. So, beyond the limit of 50 mm/day, there is a decrease in the success of the QM method for heavy precipitation, especially for rare extreme values. For heavy precipitation (≥ 50 mm), the models gave a maximum (mean) relative error of 58% (42%) during the training period and 55% (39%) during the testing period, except for the third station, before bias correction. However, after bias correction the maximum (mean) errors were 8% (2%) and 38% (15%) in the training and testing periods, respectively (Table 7). So, the QM method gave more successful results in the training period compared to the testing period for heavy precipitation. The STD graph examining the relative dominance of the mean, Std, and skewness parameters over each other across the entire time series of each model, for both the training and testing sets at each station, is given in Fig. 9. It is seen from Fig. 9 that, before bias correction, no model output at any station had a significant advantage over the others during the training period. However, the skewness values are less dominant than the mean and Std parameters; the models' rate of capturing the observed skewness is low. This situation did not change before bias correction during the testing period; the ENET and ANN models even captured this parameter at the first station much less well than the other parameters. After bias correction was applied, all models gathered at the reference point, i.e., the black point, in terms of Std, mean, and skewness. In the testing period, the bias correction method could not match the training period, but the result can be considered reasonable. Since these findings are consistent with the study by Rashid et al. (2015), it can be said that bias correction improves the scatter to a limited extent, but still does not exactly catch the statistical moments.
The comparison of wet/dry days after bias correction is shown in Figs. 10 and 11. Good agreement was achieved for all models in the training period except for March, and agreement was at an acceptable level during the testing period. Considering the change in the monthly statistical values after bias correction (Figs. 10 and 11), an improvement is seen for all models in the wet/dry hits and the Std and skewness parameters compared to Figs. 6 and 7. This is because the QM method can capture the variation in observations better than other bias correction methods.
Compared to the others, the PolyMARS model tends to capture the Std and skewness values, with minor differences, in the training and testing periods in the basin. It has also been observed that precipitation quantiles, dry/wet days, and statistical moments can be captured significantly better by applying bias correction in addition to the MLR-type models established with ERA-5 data. However, it should be noted that even when the linear regression-based methods are bias-corrected, the variance explained by their predictors is less than that of the other methods, as stated by Chen et al. (2014).
Discussion
Streamflows are essential for managing disasters such as floods and droughts, for structure safety and management, and for requirements such as water supply, energy generation, and irrigation. In addition, correct streamflow prediction is essential for hydrological models that enable the analysis of dry-wet periods and of transitions in the hydrological and biochemical processes of a basin (Loucks and van Beek 2017; Arora et al. 2020; Maina et al. 2020; Newcomer et al. 2021). Therefore, correct prediction of precipitation, the most important parameter affecting streamflow, is essential. Precipitation was considered the most critical parameter in the studies by Fistikoglu and Okkan (2011), Okkan and Inan (2015), and Okkan and Kirdemir (2016) on the prediction of monthly precipitation and streamflow in the adjacent Gediz Basin, which has similar climatic conditions. However, the geopotential and humidity parameters, which trigger precipitation and allow the stationarity assumption to be avoided (Crane and Hewitson 1998; Wilby et al. 1998; Hessami et al. 2008), were removed from the predictor sets in those studies. Besides, humidity is a parameter controlling extreme precipitation in the Mediterranean Basin (Keupp et al. 2019). The geopotential parameter also significantly affects precipitation in semi-arid basins and dry seasons (Chen et al. 2010; He et al. 2019; Kumar et al. 2021). Similar studies (Turkes 1998; Hertig et al. 2013, 2017; Keupp et al. 2019) have also shown that geopotential height and humidity parameters control precipitation in the Mediterranean region. High geopotential height stands for stagnant air, while high relative humidity indicates the high moisture required for heavy precipitation (GFA 2022; GMAFM 2022); so, there is an inverse relationship between them. Therefore, these parameters are important for precipitation prediction in SD studies. In this study, precipitation, geopotential, and humidity were the effective parameters, confirming the abovementioned studies. Besides, although long-distance grids have been emphasized more than the closest-grid effect in some previous studies (Wilby and Wigley 2000; Crawford et al. 2007; Borges et al. 2017), the longer-distance effect has not been taken into account. However, in this study it was seen that the effect of long-distance grids on a station might decrease depending on the topography (Figs. 1a and 5). At this point, the number of long-distance grids is still an open debate for researchers, but it should be considered. Beyond this study, the effect of the grids around the station was also found important in similar studies (Borges et al. 2017; Herrera et al. 2019; Nacar et al. 2022). Again, considering the coarse resolution and the long-distance grid, there may be a slight improvement in monthly total precipitation. In the study by Amjad et al. (2020), which includes this study area, averaging the grids around the stations effectively coarsens the resolution; still, the bias is largely preserved while the correlation increases by a small amount. The change in NSE, a combined measure of correlation, bias, and variance (Gupta et al. 2009), shows similar results in this study (Fig. 5). The difference in resolution between the reanalysis and GCM data can be a source of extra uncertainty during the downscaling phase (Amjad et al. 2020). For this reason, different grid resolutions were examined in this study to reduce the possible uncertainties in the relationship between the GCMs and ERA-5 data.
Considering the long-distance effect, the average resolution (1.5° × 1.5°) of the GCMs in CMIP6 was selected as the optimum, rather than the best, for this basin. This does not mean that a fine resolution should not be preferred, but matching the average GCM resolution will reduce the uncertainty in future predictions with the GCMs (Amjad et al. 2020). So, the scale (resolution) effect is still an open issue (Chen et al. 2014).
In terms of precipitation occurrence and amount, the linear regression-based methods, i.e., MLR and ENET, had the weakest performances due to the randomness of precipitation (Rashid et al. 2015). Some studies have attained similar outcomes (Chen et al. 2010; Tavakol-Davani et al. 2013; Chen et al. 2014; Liu et al. 2019). Compared to the EREG model, which is a nonlinear one, the PolyMARS model provided successful results (on the KGES, CSI, and NSE parameters, PolyMARS outperformed the MLR by 66%, 5%, and 9%, respectively), since such models fit regression models within segments by separating the data into different intervals (Stone et al. 1997; Yilmaz et al. 2018). In the basin, the local convective instability mentioned by Ozturk (2010) may be the reason why the prediction performance for precipitation did not achieve a very high degree of success.
Conclusions
In this study, the performances of the regression-based statistical downscaling methods multiple linear regression (MLR), exponential regression (EREG), elastic net regression (ENET), multivariate adaptive regression splines (MARS), and polynomial MARS (PolyMARS) were compared in the Susurluk Basin (Turkey), which has both mountain and semi-arid climates. In addition, an artificial neural network (ANN) was used to examine the performance of the regression models relative to black-box models. Before comparing the models, the effects of atmospheric variables, grid resolution, and remote grid conditions in the reanalysis dataset were also examined. Then, nine stations were selected, and their daily total precipitation data and the atmospheric variables in the ERA-5 reanalysis dataset from 1979 to 2018 were used. For the holistic evaluation, the standard triangular diagram, recently proposed in the literature, was used as a graphical model evaluation metric, and the Kling-Gupta efficiency was modified and used as a numerical one. The conclusions obtained from the study are itemized below:
• The grids around the station affect precipitation prediction more than a single grid or the mean of the basin grids.
• The monthly precipitation prediction performance may decrease from fine to coarse resolution due to the basin characteristics.
• The most influential parameters among the 18 atmospheric variables selected from the ERA-5 dataset for the basin are total precipitation, geopotential height, and relative humidity.
• All models tend to predict too many wet days because climate models have a drizzle effect. Considering wet/dry day hit performance, however, the PolyMARS model shows more success, separating itself from the other models.
• The PolyMARS model is also successful compared to the other models: its predictions are closest to the observed values, with only small differences in the statistical moments, i.e., mean, standard deviation, and skewness. The linear models, i.e., ENET and MLR, have the worst prediction performances.
• The ANN model gives similar results, although not as good as the PolyMARS model, in the monthly and whole-time-series evaluation of daily precipitation values. In other words, regression-based nonlinear models can be used with reanalysis sets such as ERA-5 in statistical downscaling studies instead of black-box models.
• All models perform well for mean values but not for variance and rare extreme values.
• Statistical downscaling models can be established with ERA-5 data without any wet-dry classification in the basin. Although the quantile mapping method corrects the number of wet days and the statistical parameters, it is less successful in correcting biases in heavy precipitation and variance.
This study draws a framework for statistical downscaling. However, some shortcomings and proposed future examinations are given below:
• The predictors were selected from the ERA-5 reanalysis dataset. The same analyses can be performed using different reanalysis datasets; thus, the reanalysis dataset that best represents the basin can be determined.
• Only regression-based models, whose performances had been studied separately in the literature, were examined together in this study. The downscaling performance of the PolyMARS model, which was successful in this study, can be examined further or hybridized with rule-based models such as Cubist, M5-tree, and PART in future studies.
• Downscaling model performances were evaluated on daily precipitation in this study. In future studies, hourly or minute-scale precipitation, or other hydrological parameters, e.g., maximum and minimum temperatures, can be downscaled with the models used in this study and/or different models.
• The quantile mapping bias correction method was applied in this study. The most appropriate bias correction method can be determined by testing different procedures.
This study has drawn important parameters and flowcharts for statistical downscaling studies. The findings within the scope of this study will form a basis for future climate change scenario modeling and for researchers. Besides, by transferring the obtained models to streamflow, the study will help the management of the basin's water resources and of extreme events such as floods and droughts. | 2022-12-06T16:05:52.577Z | 2022-12-04T00:00:00.000 | {
"year": 2022,
"sha1": "f0147d39f28a2499cb8f6b9a292d8c33510483ed",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00477-022-02345-5.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6c83aa2ebebce1dc5b3473919e000a59c60145d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
267352031 | pes2o/s2orc | v3-fos-license | Assessment of Annexin A2 as A Marker for Diagnosis of Hepatocellular Carcinoma in Compensated and Decompensated Hepatitis C Virus Treated Patients
Abstract
Anti-ANXA2 antibodies were utilized as identification antibodies. Results: With a p-value of <0.001, the ANXA2 level sensitivity and specificity test demonstrated a statistically significant difference in the identification of HCC cases in both the compensated and decompensated populations. Compared with the decompensated group, the sensitivity of the ANXA2 level in diagnosing HCC patients was 90% in the compensated group. Regarding AFP and ANXA2 levels, there was a statistically significant difference (p < 0.001) between the two study groups, with a greater mean in the HCC group. Conclusion: The combination of annexin A2 and AFP significantly boosts the diagnostic capability of this promising HCC diagnostic marker. Its serum concentration can be used as an effective, noninvasive tumour marker for HCC identification.
Inclusion criteria:
The investigation involved both males and females who were older than 18.
Participants in the research were HCV-treated people without HCC; individuals with HCC who have HCV, including those with compensated and decompensated liver cirrhosis, were also included.
Exclusion criteria:
Patients under the age of 18, those with other cancers, those who had viral hepatitis rather than HCV, those with a history of autoimmune illnesses, and those with other significant comorbidities were all excluded from the study.
Ethical consideration:
Patients received comprehensive information from the researchers regarding the trial and the marker used.
Statistical analysis:
The Kruskal-Wallis test was used for evaluating multiple independent groups. The Mann-Whitney test was used for comparing two independent groups. The chi-square test was used to compare two or more qualitative classifications. The bivariate Pearson correlation test was used to examine the relationships between parameters.
Applying the "Receiver Operating
Characteristic" (ROC) curve, tests were performed to determine the sensitivity and specificity of novel tests.
Statistical significance was defined as a P-value of 0.05 or less.
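As an illustration of how such cut-off statistics are typically derived, the sketch below computes sensitivity and specificity from a ROC curve with scikit-learn; the marker values and labels are hypothetical, not the study's data.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Hypothetical data: 1 = HCC, 0 = non-HCC; anxa2 = serum ANXA2 level
    y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1, 1, 0])
    anxa2 = np.array([4.1, 5.0, 6.2, 9.8, 12.4, 8.7, 5.5, 11.0, 10.3, 4.9])

    fpr, tpr, thresholds = roc_curve(y_true, anxa2)
    best = np.argmax(tpr - fpr)  # Youden's J picks the optimal cut-off
    print(f"AUC = {roc_auc_score(y_true, anxa2):.2f}")
    print(f"cut-off = {thresholds[best]:.1f}, sensitivity = {tpr[best]:.2f}, "
          f"specificity = {1 - fpr[best]:.2f}")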
Results:
Table 1 showed that compensated HCV cases had a statistically significantly lower mean age than decompensated HCV and compensated HCC cases. However, there was no disparity in the sex distribution across the four groups, with a p-value of >0.05 (figure 1).
Table (3): Comparisons between tumour markers across various study groups.
Figure (3): Mean ANXA2 level in study groups.
Table 4 shows that there was a statistically significant difference in AFP and ANXA2 levels between the two groups under consideration, with a greater mean in the HCC group (p-value <0.001).
Table 5 demonstrated that there was no statistically significant variation in the virology assessment between the four groups under investigation, with a p-value of >0.05.
Figure (5) Splenomegaly in study groups
According to Table 7, there was no statistically significant variation between the HCC groups (compensated and decompensated) in terms of tumour size, tumour number, or portal vein thrombosis.
Conclusions:
The annexin family, which includes
Recommendations:
Future prospective studies on larger populations should be performed to reach a higher diagnostic accuracy. Annexin A2 should also be evaluated in different populations to assess its accuracy.
Further assessment of other biomarkers is recommended to reach higher specificity in the assessment of HCC patients. A combination of two or more non-invasive biomarkers is recommended to reach higher accuracy in the assessment of HCC.
Hepatocellular carcinoma (HCC) constitutes one of the most prevalent cancers. According to an investigation conducted by Egypt's National Population-Based Cancer Registry Program, liver cancer was the most common type of cancer among Egyptian men (33%), second only to breast cancer among women (13.5%), and first overall (23.8%). The spread of liver cancer followed the dissemination of hepatitis C virus (HCV) (1). Egypt has the highest worldwide incidence of HCV infection (2). Chronic hepatitis C (CHC) is a significant risk factor for both hepatocellular carcinoma (HCC) and cirrhosis (3). For early detection to enhance the medical outcomes of HCC patients, more sensitive and specific indicators are required. Annexin A2 (ANXA2), an inducible, calcium-dependent phospholipid-binding protein, is overexpressed in a number of human cancers. Annexin A2 is an appealing putative receptor for enhanced plasmin production on the surface of tumour cells. In healthy liver tissues and tissues affected by chronic hepatitis, ANXA2 is essentially undetectable (4). Consequently, the objective of the current study was to assess the level of annexin A2 in compensated and decompensated HCV-treated patients in order to diagnose HCC, since it is a useful diagnostic and predictive marker for early HCC in patients with chronic hepatitis C. Eighty patients were included in this study and divided into two groups. Group (1): 40 HCV-treated patients free of HCC, 20 of whom had compensated liver cirrhosis and 20 did not. Group (2): 40 HCV patients with HCC, 20 of whom had compensated liver cirrhosis and 20 did not. Patients were chosen between November 2021 and May 2022 from the internal medicine department of the Beni-Suef University hospital according to the preceding inclusion and exclusion criteria.
Figure (2): Mean AFP level in study groups.
Annexin A2 has been implicated in tumour invasion and metastasis. In contrast to normal or cirrhotic tissue, HCC shows higher levels of ANXA2 expression. The diagnostic power of annexin A2, a promising HCC diagnostic marker, is significantly increased when combined with AFP. Its serum concentration can be used as an effective, non-invasive tumour marker for HCC identification.
To assess quantitative differences between more than two independent groups, a one-way ANOVA test was used.
Table 1: Comparisons of demographic characteristics in different study groups.
Table 2: Correlations between laboratory tests conducted among the study groups.
Table 3 revealed that the AFP and ANXA2 values differed statistically significantly between the four study groups (p-value < 0.05), with the decompensated HCC group having an elevated mean (figures 2 and 3).
Table (5): Comparisons of virology findings across various study groups.
Table (6): Comparisons of clinical findings in different study groups.
a: significant difference between compensated and decompensated HCV groups; b: significant difference between compensated and decompensated HCC groups; c: significant difference between compensated HCV and HCC groups; d: significant difference between decompensated HCV and HCC groups.
Figure (4): Ascites in study groups.
Table (7): Comparisons of tumour characteristics in different HCC groups.
Table (8): Correlation between ANXA2 and routine investigations among HCV cases. ANXA2 correlated significantly with TLC (p-value <0.05), indicating that a rise in TLC is associated with higher ANXA2 levels. However, there was no statistically significant link between the ANXA2 level and any of the other HCV case investigations (p-value >0.05).
Table 9 demonstrated that no statistically significant relationship existed between the ANXA2 value and any of the other investigations among HCC cases, with p-values greater than 0.05. Table (9): Correlation between ANXA2 and routine investigations among HCC cases. Sensitivity and specificity tests for the ANXA2 level showed that it could not reliably identify the degree of decompensation in either research group (HCV or HCC) (figures 6 and 7). Figure (6): ROC curve for ANXA2 level in the diagnosis of decompensated condition among HCV cases. | 2024-02-01T16:37:04.202Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "9f41b878ebcbb867dfdc1dc925ac2d0c9d69c241",
"oa_license": "CCBY",
"oa_url": "https://ejmr.journals.ekb.eg/article_337981_62dae39d10814a5d61833490b3178290.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "409dc95675cdd914461a2e1d0b76d15b1a416fa0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
85130947 | pes2o/s2orc | v3-fos-license | Physiological Responses to Nutrient Accumulation in Trees Seedlings Irrigated with Municipal Effluent in Indian Desert
Leaf water potential (Ψl), net photosynthesis rate (PN), transpiration rate (E), stomatal conductance (gS), and water use efficiency (WUE) are greatly influenced by the nutrient composition of the water used for irrigating trees. These physiological variables and foliage mineral concentrations were observed for Eucalyptus camaldulensis, Acacia nilotica, and Dalbergia sissoo plants irrigated with municipal effluent (ME) at 1/2 PET (potential evapotranspiration; T1), 1 PET (T2), and 2 PET (T3) rates, and for control plants irrigated with canal water at 1 PET (T4). Increased mineral concentrations in order T1 <
Introduction
Land degradation and contamination of the environment from a variety of anthropogenic sources, such as smelters, power stations, industry, and the application of metal-containing pesticides, fertilizers, and sewage sludge, are widespread [1]. Metals/minerals released into the environment not only become irreversibly immobilized in soil components but are also toxic to animals, plants, and microorganisms [2]. Zinc, nickel, and copper are important constituents of pigments and enzymes. Cadmium, lead, mercury, and copper are toxic at high concentrations because they disrupt enzyme functions, replace essential metals in pigments, or produce reactive oxygen species [3]. Although some plants have a tremendous potential to hyperaccumulate minerals [4,5], their excess accumulation could have an adverse effect on physiological functions, thereby affecting the growth and biomass production of various tree species exposed to wastewater disposal. The problem is further aggravated by the prevalence of crosstalk between different elements. Interactions between phosphorus and other macro- and microelements have been reported in crop species [6], whereas nutrient interactions in A. thaliana corroborated the prevalence of crosstalk between P and Fe [7]. In addition, Zn deficiency induced accumulation of P in barley, whereas Pi deficiency in A. thaliana resulted in the suppression of the high-affinity Zn transporter ZIP9 [6,8]. However, studies pertaining to nutrient interaction have been confined largely to crop species or model plant systems.
Unrelenting disposal of effluents of varying chemical constituents is responsible for the contamination of land and water bodies, though the increased water and nutrient availability from effluent disposal improves the photosynthetic capacity of plants [9]. Municipal effluent is a precious resource in dry regions and is rich in nutrients required by plants.
Rates of photosynthesis, carbon assimilation, and biomass production can be increased in trees by making this water and these nutrients available to the nutrient-poor soils of the desert region [10]. Though increased photosynthetic efficiency is the most important way of increasing productivity, a simultaneous increase in mineral concentrations may affect the efficiency of the species in utilizing this resource [11]. The protective mechanism of plants, through absorption and uptake of minerals from the soil, reduces soil toxicity and safeguards the environment [12]. But long-term disposal may lead to excess accumulation of minerals in the biological system and affect physiology and productivity [13]. The extent of the influence on both the plant and the soil needs to be assessed to avoid mineral toxicity during long-term effluent application. The influence may be assessed by measuring foliage mineral concentrations in, and the physiological functions of, tree seedlings used in plantations for efficient utilization of the effluent along with environmental and aesthetic benefits.
The present investigation was undertaken to monitor the effect of varying levels of municipal effluent on mineral accumulation in Eucalyptus camaldulensis Dehnh., Acacia nilotica (L.) Willd. ex Delile, and Dalbergia sissoo Roxb. ex DC. seedlings, and the physiological responses in these seedlings in relation to the accumulated minerals. The objective of this study was to monitor changes in the physiological functions of tree seedlings influenced by mineral accumulation due to municipal effluent irrigation/disposal.
Materials and Method
2.1. Site Description. The experiment was conducted in non-weighing, in-filled lysimeters of 8 m3 capacity (i.e., 2 m × 2 m × 2 m) at the experimental field of the Arid Forest Research Institute, Jodhpur (26°45'N latitude and 72°03'E longitude), in Rajasthan, India. The climate of the site is characterized by a hot and dry summer, a hot rainy season, a warm autumn, and a cool winter. The mean annual rainfall of 1998, 1999, and 2000 was 420 mm and the mean annual pan evaporation was 2025 mm. Monthly averages of minimum and maximum air temperatures were 14.5 °C and 25.0 °C in January, which increased gradually to 34.4 °C and 40.7 °C, respectively, in May. The soil was loamy sand (coarse loamy, mixed, hyperthermic family of Typic Camborthides, according to US soil taxonomy) with 82% sand, 12% silt, and 6.0% clay. Soil organic matter was 0.13% and available PO4-P, NO3-N, and NH4-N were 5.00, 6.00, and 4.50 mg kg−1, respectively. Soil pH and electrical conductivity (EC) were 7.61 and 0.71 dS m−1, respectively [14].
Sampling, Preservation and Analysis of the Effluent.
Samples of municipal effluent were collected and analyzed as described earlier [14,15]. Samples were analyzed for pH, electrical conductivity, chemical oxygen demand, biochemical oxygen demand, macro- and micronutrients, total dissolved salts, total solids, and total suspended solids [16]. Nitrogen (N) and phosphorus (P) were analyzed following standard procedure [17]. Calcium (Ca), magnesium (Mg), potassium (K), sodium (Na), copper (Cu), iron (Fe), manganese (Mn), and zinc (Zn) were estimated by the aqua-regia method of Jackson [17], followed by measurement of concentrations using an atomic absorption spectrophotometer (model 3110, Perkin-Elmer, Boesch, Huenenberg, Switzerland). The municipal effluent was alkaline (pH 7.60 to 8.02), whereas electrical conductivity ranged from 0.91 to 2.14 dS m−1, as described earlier [14]. Biochemical and chemical oxygen demand ranged between 36 and 56 mg L−1 and 190 and 270 mg L−1, respectively. Availability of NH4-N, NO3-N, PO4-P, K, Fe, Cu, Mn, and Zn was always higher in the municipal effluent than in the canal water. Calcium and iron showed the highest concentrations among the basic cations and micronutrients, respectively. The ratios of K:N, K:(Ca and Mg), Mg:Na, Mg:Mn, Fe:Mn, and Zn:Mn in the municipal effluent were 0.04, 0.21, 0.31, 0.91, 9.09, and 1.22, respectively (see Supplementary Table 1 in Supplementary Material available online at http://dx.doi.org/10.1155/2014/545967). These effluent parameters increased during summer (due to high temperature and concentration) and decreased during the rainy season (because of the addition of runoff water), but the highest concentration of NO3-N during the monsoon was due to its addition from the suburban area and fertilized fields through runoff water [14].
Plantation and Experimental Design. Nursery-raised one-year-old seedlings of Acacia nilotica, Dalbergia sissoo, and Eucalyptus camaldulensis from a single provenance were planted in July 1998 in the lysimeters of 2 × 2 × 2 m3 capacity, which were filled with soil up to 185 cm, leaving 15 cm of space for irrigation. There was one seedling in each lysimeter. The plantation was laid out in a completely randomized design with three replications. Irrigation with municipal effluent was initiated in the first week of September 1998, after seedling establishment. Irrigation was based on the potential evapotranspiration (PET), calculated by multiplying the pan evaporation rate (Class A evaporation pan fixed at the site) by the pan coefficient (i.e., 0.70), considering the crop coefficient value of 1.2 to 1.5 for Eucalyptus/alfalfa [18][19][20][21]. Water use by tree plantations was considered to be not less than 1.5 times that of an agricultural crop, or about 1.25 times the Class A pan [22]. The four treatments comprised T1: irrigation of seedlings with municipal effluent at 1/2 PET; T2: irrigation of seedlings with municipal effluent at 1 PET; T3: irrigation of seedlings with municipal effluent at 2 PET; and T4: irrigation of seedlings with canal water (potable water with low mineral concentration) at 1 PET as control. At the time of treatment application, average seedling heights and collar diameters (12 plants) were 37.3 ± 0.5 (mean ± SE) cm and 0.5 ± 0.0 cm in E. camaldulensis, 37.8 ± 2.1 cm and 0.5 ± 0.1 cm in A. nilotica, and 49.8 ± 0.3 cm and 0.5 ± 0.0 cm in D. sissoo, respectively.
Observation Recording. Leaf water potential (Ψl) was measured monthly on leaf discs in a leaf chamber (L-52; Wescor, Logan, Utah, USA) connected to a dew point microvoltmeter (Wescor HR-33T) between 05:00 and 07:00 hr from December 1998 to November 1999, before the re-irrigation of the seedlings in each treatment. A leaf disc of 0.5 cm diameter was punched out from attached leaves (without leaf abrasion) and transferred into the leaf chamber, and after 15 minutes of equilibration the water potential was determined [23]. The discs were collected at the time of observation recording for each measurement. Net photosynthetic rate (PN), transpiration rate (E), and stomatal resistance were recorded with an open-system portable CO2 gas analyzer, Model CI-301 (CT-301 PS0), CID Inc., Vancouver, USA. Stomatal conductance (gS) was calculated as 1/stomatal resistance. These physiological variables were recorded between 10:00 and 11:00 hrs at one-month intervals from December 1998 to November 2000 (24 months). All these observations were recorded on leaves of the middle canopy of the seedlings in three replicates. Self-shading within the cuvette was minimised by ensuring that the leaves did not overlap. Instantaneous water use efficiency (WUE) was calculated as PN/E. The atmospheric CO2 concentration during the experiment period was 380 ppm.
Mineral Nutrient Analysis. Irrigation quality criteria of the municipal effluent and canal water were assessed as described earlier [14,15]. Leaf samples from the 24-month-old planted seedlings were collected in June 2000, washed with tap water, and then rinsed with distilled water. The leaf samples were then oven-dried at 80 °C, ground in a pulverizer, and digested with a triacid mixture (HNO3:H2SO4:HClO4 in 10:4:1 ratio). Concentrations of K, Ca, Mg, Na, Cu, Fe, Mn, and Zn were determined using an atomic absorption spectrophotometer [17]. Measurement of N and P content was performed after wet digestion with 12 mL H2SO4 and two Kjeltab (Cu/3.5) catalyst tablets at 350 °C for half an hour, and estimated using a UV-VIS spectrophotometer (model 117) at 490 and 420 nm wavelengths, respectively [17].
2.6. Statistical Analysis. Data were statistically analyzed using the SPSS statistical package. There were three species and four treatments; hence, the foliage nutrient data were analysed using a two-way ANOVA, with tree species and treatment as the factors. Since the physiological data were recorded repeatedly at one-month intervals, these data were analysed using repeated-measures ANOVA. The physiological parameters per month were the response variables; month was the within-subject factor, and tree species and treatment were the between-subject factors. Before analysis, the data were log or reciprocal-of-square-root transformed for normality [24] and homoscedasticity [25] in order to make valid statistical inferences about population relationships. Duncan Multiple Range Tests (DMRT) were also performed on each set of data for homogeneous subsetting of treatments and species. Pearson's correlation was performed to monitor the relations of foliage nutrient concentrations with the physiological variables and total effluent applied. Regressions were performed to observe relations between 24-month average physiological parameters and foliage mineral concentrations.
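A minimal sketch of the two-way ANOVA step (species × treatment) in Python with statsmodels is given below, using hypothetical log-transformed foliage-nitrogen data; the study itself used SPSS, and the repeated-measures analysis of the physiological variables would additionally require a mixed or repeated-measures model.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(1)
    species = np.repeat(["E. camaldulensis", "A. nilotica", "D. sissoo"], 12)
    treatment = np.tile(np.repeat(["T1", "T2", "T3", "T4"], 3), 3)
    df = pd.DataFrame({
        "species": species,
        "treatment": treatment,
        "log_n": np.log(rng.normal(20.0, 2.0, 36)),  # hypothetical foliage N, g/kg
    })

    model = ols("log_n ~ C(species) * C(treatment)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # species, treatment, interaction terms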
Environmental Factors.
Rainfall was 588.5 mm and total pan evaporation was 5420 mm during December 1998 to November 2000, showing a high water deficit. Air temperature, photosynthetically active radiation (PAR), and vapour pressure deficit (VPD) varied between the months (Figure 1). Monthly averages of minimum and maximum air temperatures increased from the lowest at 08:00 hr to the highest
Foliage Nutrient Concentrations.
Seedlings irrigated with municipal effluent at the T2 and T3 levels had higher (p < 0.05) concentrations of N, P, K, Ca, Mg, Cu, Fe, Mn, and Zn than the canal water (T4) irrigated seedlings across the species (Table 1). Uptake and accumulation of the above-mentioned nutrients increased (p < 0.01) with irrigation quantity from T1 to T3. When the T1 and T4 treatments were compared, the concentrations of Na in all the species, Ca and Mg in E. camaldulensis, and K in D. sissoo seedlings were the lowest in T1, whereas the other nutrients were the lowest in the seedlings of the T4 treatment. Concentrations of K, Ca, Mg, Cu, and Mn did not differ (p > 0.05) between the seedlings of the T1 and T4 treatments (DMRT), despite twofold more water being applied in the latter than in the former treatment (Supplementary Table 2). There was a 2% (Mg in T1) to 2.9-fold (Mn in T3) increase in nutrient concentration in the ME-irrigated seedlings over the respective concentrations in the seedlings of the T4 treatment (Supplementary Table 2). Differences in the relative uptake of nutrients affected the treatment order (T1 to T4) with increasing nutrient concentration ratios (Figure 2). The ratios of K:N, Mg:Na, Zn:Mn, and Mg:Mn differed (p < 0.01) due to both tree species and treatments, but significant variation (p < 0.05) in the Fe:Mn ratio was observed only between the species. Species × treatment interactions were also significant (p < 0.01).
Leaf Water Relations.
Leaf water potential (Ψl) was the highest (p < 0.01) in the seedlings of T3 across species (Table 2). The highest Ψl was in E. camaldulensis, but it did not differ from that of D. sissoo (p > 0.05, DMRT) across the treatments. The lowest Ψl was in A. nilotica. Dalbergia sissoo indicated the highest Ψl in T2, whereas E. camaldulensis showed greater Ψl in the T3 and T4 treatments (DMRT). Ψl was the highest (p < 0.01) in January (December in E. camaldulensis), decreased gradually to the lowest value in May, and then rose in July-August (Table 3). The lowest Ψl was recorded for A. nilotica seedlings in most of the months.
Table 2 note: PN: net photosynthesis rate (μmol CO2 m−2 s−1); E: rate of transpiration (mmol m−2 s−1); gS: stomatal conductance (×10−3 mol m−2 s−1); WUE: instantaneous water use efficiency (PN/E). T1, T2, T3, and T4 are irrigation of seedlings with municipal effluent at 1/2 PET, 1 PET, and 2 PET, and canal water at 1 PET, respectively. **Significant at p < 0.01. The same letter in the same column means no significant difference (p > 0.05) between treatments/species.
Ψl was the highest for D. sissoo from April to August and for A. nilotica from September to October.
Stomatal Conductance.
Stomatal conductance (gS) across the species increased (p < 0.01) in the order T1 < T4 < T2 < T3 (Table 2). Considering species, gS was highest (p < 0.01) in E. camaldulensis and lowest in A. nilotica. However, DMRT showed a non-significant difference in gS between E. camaldulensis and D. sissoo. D. sissoo indicated the highest (p < 0.05) gS during April to August 2000 (66.59 ± 2.44 × 10−3 mol m−2 s−1; August 2000 in all treatments), and E. camaldulensis in the rest of the observations. From the lowest value in December/January, gS increased by 1.8-fold in T1, 1.7-fold in T2 and T3, and 1.6-fold in the seedlings of T4, with wide temporal variation (Figure 3, left panels).
Transpiration Rate. The rate of transpiration (E) varied (p < 0.01) between months, species, and treatments. Across the species, E was the lowest in the seedlings of T1. Average E was 4% lower in the seedlings of T1, but it increased by 46% and 85% in the seedlings of the T2 and T3 treatments, respectively, compared to the value in the T4 treatment (Table 2). Averaged across treatments, E was the highest (p < 0.01) in E. camaldulensis and the lowest in A. nilotica seedlings; E peaked in E. camaldulensis in May and June. The rate of transpiration was the lowest in December/January (Figure 3(b)). It increased in March and April and decreased again in May before approaching its highest value in August.
3.6. Net Photosynthesis Rate. Repeated-measures ANOVA indicated variations (p < 0.01) in net photosynthesis rate (PN) due to species, treatments, and months. Average PN increased with the quantity of applied effluent, and the seedlings of the T3 treatment showed the highest PN in all species. The lowest PN was in the seedlings of T1 in most of the months, and in T4 in April, May, August, and September. When compared with the seedlings of T4, PN increased by 34% and 66% in the seedlings of T2 and T3, respectively, whereas it was 5% lower in the T1 treatment. Across the treatments, the highest and lowest values of PN were in E. camaldulensis and A. nilotica, respectively (Table 2). However, temporal variation indicated the highest PN in E. camaldulensis in most of the observations (maximum of 13.56 ± 0.34 μmol CO2 m−2 s−1 in August 2000, mean ± 1 SE), in D. sissoo in April, May, June, and July (6.35 ± 0.36 μmol CO2 m−2 s−1), and in A. nilotica in October 2000 (5.2 ± 0.05 μmol CO2 m−2 s−1 in T3). There was a significant (p < 0.01) seasonal pattern in PN with two maxima, that is, in August and again in March/April (Figure 4(a)).
The PN value in August was 6.2- to 7.1-fold in T1, 4.0- to 5.2-fold in T2, 3.9- to 4.5-fold in T3, and 4.0- to 5.7-fold in the seedlings of T4, as compared to the respective values in December/January. Seedlings of D. sissoo showed the highest seasonal variation in PN among the species.
3.7. Water Use Efficiency. WUE varied between the species, treatments, and months. WUE was the highest (p < 0.01) in T4 and the lowest in T3 seedlings across the species. However, WUE did not differ significantly between the seedlings of the T1 and T2 treatments (DMRT). Among the tree species, the highest (p < 0.05) WUE was in the seedlings of D. sissoo and the lowest in A. nilotica (Figure 4(b)). When compared with the lowest WUE (observed either in winter or in summer), a 2.1- to 2.6-fold greater WUE was observed in the seedlings of T1, whereas the increase was 1.9- to 2.4-fold in T2, 1.7- to 2.6-fold in T3, and 1.9- to 2.2-fold in the T4 treatments. The relative increase in WUE was highest in D. sissoo (Figure 4(a)). Regression equations (irrespective of species and treatments) between physiological functions and foliage nutrient concentrations showed nonlinear relationships (p < 0.05). Nitrogen and P concentrations showed linear relations to PN and WUE, respectively (Table 4). WUE was influenced by foliage biochemistry, resulting in variations in the PN/E ratio, which decreased with increasing nutrient concentration, although P concentration was positively related (Figure 5). WUE increased with increases in the Mg:Na, Fe:Mn, Zn:Mn, and Mg:Mn ratios but decreased when these ratios increased above 1.9, 6.67, 0.2, and 34.1, respectively. Increases in Ca, Na, and Fe concentrations influenced PN positively (p < 0.05), but respective concentrations greater than 22.26 g kg−1, 2.76 g kg−1, and 1146 mg kg−1 reduced PN (Table 4, Figure 5). Likewise, concentrations of P, Ca, and Na greater than 1.65 g kg−1, 21.09 g kg−1, and 2.76 g kg−1, respectively, reduced E.
Higher nutrient concentrations in the seedlings of the T1 and T2 treatments than in the T4 treatment (despite the same quantity of water in T2 and half the quantity of water in T1) were due to the nutrients applied through the municipal effluent. The relatively greater accumulation of Mn, Fe, Cu, and Zn (10% to 2.9-fold higher in effluent-irrigated seedlings than in T4) as compared to N, K, Ca, and Mg (2% to 93%) showed an increase in the absorption and mobility of the former elements under an increased level of effluent irrigation [26]. However, the absence of any toxic effect on the tree seedlings showed that the nutrient concentrations were adequate [27][28][29] or lower than the critical concentrations observed in other studies [30,31]. The differential accumulation of nutrients varied with species characteristics, influencing Ψl and the ratios of the nutrient concentrations in the plant system. The highest concentrations of Ca, Mg, K, Na, Cu, Fe, and Zn in A. nilotica seedlings, particularly the basic cations (Table 1), were related to reduced Ψl and physiological functions through increased solute concentration. The lower concentrations of these nutrients in E. camaldulensis and D. sissoo were due to dilution effects, because these species had much broader leaf blades (and increased growth and biomass) than A. nilotica [32]. The relatively greater accumulation of Fe as compared to Mn (high Fe:Mn ratio) in A. nilotica indicated an impairing effect of Ca and K reducing the Mn concentration, an important constituent (together with Cu, Zn, and Fe) of many enzymes influencing physiological function [33]. Higher plants have also evolved a sophisticated antioxidant defense system and glyoxalase system to scavenge the oxidative effects of metals [34]. However, the lowest Fe:Mn ratio in E. camaldulensis was an adaptation/defense mechanism through antioxidative systems (superoxide dismutase) for the success of this species under high water availability or waterlogged conditions, as observed for Populus angustifolia [35,36]. The increase in Ψl in the seedlings from T1 to T3 was positively influenced by the increased level of effluent application and soil water availability [37]. However, the higher (p < 0.05) Ψl in the seedlings of T4 as compared to the seedlings of the T1 treatment was due to the twofold greater amount of water applied to T4. A difference of 1.14 to 1.35 MPa between the lowest and highest Ψl in the seedlings of E. camaldulensis, as compared to 0.83 to 1.24 MPa in A. nilotica and 0.35 to 0.59 MPa in D. sissoo, was due to higher E in the former than in the latter two species (Table 3). High Ψl during December and January indicated low water loss or reduced E as a function of low PAR, VPD, and air temperature, and the rainfall in January and February 1999 (Figure 1). However, the gradual increase in PAR, VPD, and air temperature with a concomitant decrease in Ψl from January to May in all three species indicated negative relations between Ψl and these environmental factors.
Foliage Nutrients and Gas Exchange. Application of municipal effluent had no toxic effect on PN, E, and gS, and seedlings grown with municipal effluent irrigation were capable of maintaining efficient photosynthetic activity throughout the growing season. The higher values of these physiological variables in the seedlings grown in the T2 and T3 treatments than in the control (T4) further suggested that leaf function was unimpaired by the municipal effluent irrigation. Reduced PN, E, and gS, together with reduced Ψl, in the seedlings of T1 indicated the negative impact of a low water supply [38]. The relatively greater increase in these variables during August (the monsoon period, Figures 3 and 4) further suggests that the seedlings of this treatment suffered from water stress [39]. A 15% decrease in PN has been reported in two-year-old Picea rubens seedlings at water potentials averaging −2.45 MPa [40]. The greater values (p < 0.05) of PN, E, and gS in T4 than in T1 seedlings (except in February 1999 and March, May, August, and October 2000 for PN; February and May 1999 and February to June 2000 for E; and May 2000 for gS) were due to the twofold greater amount of water applied [41]. Despite a similar level of irrigation (1 PET), the higher values of the physiological variables in T2 as compared to the seedlings of T4 were due to the nutritional effects of the municipal effluent (Figures 3 and 4). Carswell et al. [42] observed an enhanced rate of electron transport and velocity of carboxylation in Cedrela odorata seedlings at a 5% rate of macro- and micronutrient supply compared to a 1% rate. The highest level of irrigation, and the corresponding increase in water and nutrient supply, induced absorption and transport of the nutrients to the seedlings and resulted in the highest PN and E in the seedlings of T3. This increase in PN, E, and gS was positively related to Ψl and to foliage N and other nutrients, as observed in Pseudotsuga menziesii (Mirb.) Franco [43]. However, the higher value of PN was not paralleled by increased E or gS, which may reflect a partial limitation in foliage biochemistry or leaf structure and varying effects on these physiological variables [44].
The linear/nonlinear increase in PN with nutrient concentrations suggests a close link between photosynthetic capacity and nutrient supply, while the simultaneous increases in E and gS (Figure 5) are indicative of rapid growth and biomass production [45]. A decline/saturation, after an initial increase, in PN and E with increasing concentrations of Ca, Na, and Fe was a result of the effects of the accumulated minerals and of limitations due to other nutrients and their ratios [45,46]. A reduction in net photosynthetic rate and stomatal conductance due to a toxic effect of Na+ has also been reported in Citrus limonia Osbeck and Olea europaea L. [47]. Though increases in N, K, Fe, and Mn concentrations were beneficial, the relatively greater increase in N and Mn than in K and Fe, respectively, from T1 to T3 seemed to facilitate PN to a greater extent than E, as evidenced by the increased WUE observed in D. sissoo, discussed later (Supplementary Figure 1). A 4.0- to 7.1-fold variation in PN compared to a 2.2- to 4.0-fold variation in E among the months further indicated the greater sensitivity of PN to foliage chemistry as well as environmental factors. Increases in PN, E, and gS during the monsoon and spring, due to reduced VPD, PAR, and air temperature accompanying rainfall, suggest the effects of environmental factors on these physiological variables. Despite lower nutrient concentrations, except Mn (highest), the higher PN, E, and gS in E. camaldulensis were the effects of lower Ψl and tolerance to Mn through a scavenging system composed of antioxidants, as reported for Mn-tolerant maize (Zea mays L.) [48]. The decrease in the values of these physiological variables during winter (due to plant senescence and reduced VPD and transpiration losses) and summer (due to increases in VPD, PAR, air temperature, desiccating wind velocity, and probably mineral concentrations) was similar to that in Pseudotsuga menziesii [49]. Drops in PN and E as a function of high irradiance/temperature through stomatal control have also been reported by Van Assche and Clijsters [50] and Castillo et al. [51].
Foliage Nutrients and Water Use Efficiency. Nutrient concentrations influenced PN and E and thus instantaneous water use efficiency (WUE). A negative relation of nutrient concentrations with WUE suggested greater impairing effects of K, Ca, Na, and Zn on PN than on E. The increase in PN was associated with increases in E and gS from the T1 to T3 treatments, but the greater increase in E as compared to PN, due to the increased water and nutrient supply from T1 to T3, impaired WUE. Ewers et al. [52] observed an increase in the transpiration rate of irrigated trees, relative to unirrigated trees, through the effect of irrigation combined with fertilization. The low WUE in the municipal-effluent-irrigated seedlings as compared to the control (T4 treatment) was due to increased water availability, which enhanced E to a greater extent (by 46% in T2 and 85% in T3) than PN (by 34% in T2 and 66% in T3). The higher (p < 0.01) WUE in D. sissoo than in the other species in most of the months (Figure 4(b)) was due to enhanced foliage N and P concentrations with a greater positive influence on PN than on E. Thus D. sissoo was able to maintain high rates of PN with relatively low gS, is considered tolerant to low moisture availability, and has a high WUE [50]. An inverse relation between the K:N ratio and WUE (Table 3; Figure 5) also suggests that foliar chemistry regulated the variations in PN and E. The lowest WUE, in A. nilotica, was due to relatively greater concentrations (than in the other species) of basic cations together with Fe and Zn, and lower concentrations of P and Mn, influencing the PN/E ratio. This type of species-specific response in WUE has also been observed in Vismia japurensis, Bellucia grossularioides, and Laetia procera when treated with P, Ca, and gypsum [53]. After an initial increase, a decrease in WUE with increasing Mg:Mn, Fe:Mn, and Zn:Mn ratios suggested an adverse effect of Mg, Fe, and Zn on WUE at enhanced concentrations. It seemed that Mn played a part in stabilizing the mineral ratios so as to maintain a favourable PN:E ratio (WUE).
Conclusions and Recommendation
Irrigating tree seedlings with municipal effluent showed a positive influence on nutrient accumulation and physiological functions, that is, Ψl, PN, E, and gS. The enhanced PN, together with E and gS, with increased water and nutrients from T1 to T3 indicated fast growth in the tree seedlings. The increase in physiological functions in T2 as compared to T4 reflected the nutrient effects, whereas their increase in T4 compared to T1 reflected the effect of water. Relatively higher and lower concentrations of basic cations and Fe influenced gas exchange negatively in A. nilotica and positively in E. camaldulensis, respectively, affecting WUE. A positive effect of N and P on net photosynthesis and of K on transpiration rate influenced WUE in these seedlings. D. sissoo was an efficient water user, maintaining a favourable ratio between PN and E by accumulating higher N and P, and lower Mg, Na, and Fe, concentrations than the other mineral nutrients. Adequate concentrations of Mg, Na, Fe, and Zn enhanced physiological functions, but their higher concentrations adversely affected gas exchange and WUE. Conclusively, the higher nutrient accumulation and low WUE in A. nilotica seedlings were adaptations to a higher nutrient load, and this species can safely be categorized as the best soil ameliorator [32]. D. sissoo maintained relatively greater PN and lesser E (a characteristic of an efficient water user). E. camaldulensis maintained higher gas exchange by reducing the concentration of basic cations and stabilizing the Fe:Mn and Mg:Mn ratios, and can be the better species for long-term disposal of municipal effluent. | 2019-03-22T16:18:50.457Z | 2014-07-24T00:00:00.000 | {
"year": 2014,
"sha1": "502296496132c85d0b8f493fbc6617bf22d6377c",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/archive/2014/545967.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4eab906921bece3eb297157e421dda07c549e4fb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
198913736 | pes2o/s2orc | v3-fos-license | Chronically ill patients’ preferences for a financial incentive in a lifestyle intervention. Results of a discrete choice experiment
Background The preferences of diabetes type 2 patients and cardiovascular disease patients for a financial incentive added to a specified combined lifestyle intervention were investigated. Methods A discrete choice experiment questionnaire was filled out by 290 diabetes type 2 patients (response rate 29.9%). Panel-mixed-logit models were used to estimate the preferences for a financial incentive. Potential uptake rates of different financial incentives and relative importance scores of the included attributes were estimated. Included attributes and levels were: form of the incentive (cash money and different types of vouchers), value of the incentive (ranging from 15 to 100 euros), moment the incentive is received (start, halfway, after finishing the intervention) and prerequisite for receiving the incentive (registration, attendance or results at group or individual level). Results Prerequisites for receiving the financial incentive were the most important attribute, according to the respondents. Potential uptake rates for different financial incentives ranged between 37.9% and 58.8%. The latter uptake rate was associated with a financial incentive consisting of cash money with a value of €100 that is handed out after completing the lifestyle program with the prerequisite that the participant attended at least 75% of the scheduled meetings. Conclusions The potential uptake of the different financial incentives varied between 37.9% and 58.8%. The value of the incentive does not significantly influence the potential uptake. However, the potential uptake and associated potential effect of the financial incentive is influenced by the type of financial incentive. The preferred type of incentive is €100 in cash money, awarded after completing the lifestyle program if the participant attended at least 75% of the scheduled meetings.
Introduction Physical inactivity and a poor diet contribute to the development of a range of chronic diseases and explain part of the variation in premature mortality [1,2]. Many people do not meet the standards for physical activity levels developed by the World Health Organization and are physically inactive [2,3]. Patients with diabetes mellitus type 2 or with coronary heart disease are groups with relatively high prevalence of physical inactivity [2].
Health care providers seek effective ways to change this unhealthy behavior. One way to do so is by offering (chronically ill) patients a lifestyle program that includes physical activity and improving eating behavior, called combined lifestyle interventions (CLIs) [4,5]. However, participation rates in lifestyle programs vary considerably. Some programs have good participation rates, others struggle with low participation rates. For example, the participation rates of diabetes mellitus type 2 patients in lifestyle programs, mainly implemented in primary care, range from 10% to 80% and multiple studies mentioned that boosting the motivation of participants requires more attention [6][7][8][9].
Health promoting financial incentives (HPFI) might increase patients' participation rates and adherence to lifestyle programs and are increasingly implemented by public authorities and health insurance companies to promote healthy behaviors [10][11][12][13]. However, the effectiveness of financial incentives added to lifestyle programs in the health care setting for individuals is still inconclusive [14,15]. HPFI are cash or cash-like rewards or fines, provided contingent on (non-) performance of healthy behaviors. The two main categories are positive (e.g. reward or discount) and negative (e.g. a fine or a higher contribution to the lifestyle program or health insurance premium) incentives [16]. Within these two categories, the incentive can vary on different characteristics. For example, they can vary in value, the moment that the participants receive their incentive (before the intervention or afterwards), conditions that have to be fulfilled to receive the incentive, and many more characteristics (e.g. provider of the incentive, lottery system or guaranteed reward). The incentive can be targeted at the participation rate, at compliance with instructions, or at outcome measures such as a higher physical activity level, a healthier diet or weight loss.
A financial incentive is an extrinsic motivation. A well-known argument for not using financial incentives is the crowding-out effect. This refers to the mechanism that extrinsic motivation in the form of financial incentives might undermine and replace the intrinsic motivation. However, in the field of health related behavior, so far no evidence has been found to support this possibility [17,18]. A plausible explanation is that individuals eligible for a CLI do not have any intrinsic motivation to change their health behavior. Therefore, intrinsic motivation cannot be replaced by extrinsic motivation. By adding an extrinsic motivation to start participating in a CLI, participants may develop intrinsic motivation during the course of the program, for example because they develop a better physical condition.
To prevent the implementation of an ineffective or even counterproductive HPFI, insight into the preferences of the target population with regard to the HPFI is of crucial importance. To date, in the design phase of a new intervention that includes a financial incentive, hardly any research (if any) has been performed into the target populations' preferences regarding the characteristics of the financial incentive. Previous studies do however provide some general information about preferences regarding incentives. For example, the study by Gneezy et al. shows that if a financial incentive is not high enough, it might justify or even promote undesirable behavior [19]. The study by Barte et al. shows that there is a need for more insight into the effectiveness of the different types and components of a financial incentive and that for example unconditional financial incentives do not affect physical activity [20].
One way to determine preferences with regard to HPFI is by performing a Discrete Choice Experiment (DCE). This is a quantitative technique and a frequently used tool in (public) health research to estimate possible participation rates in interventions or medical treatments and to provide knowledge on the components of the programs that determine the participation rates [21,22]. The DCE methodology is based on the Random Utility Theory and assumes that any intervention or treatment can be described by its characteristics (i.e. attributes, such as the form of the incentive). In this study, a discrete choice experiment is performed to identify which financial incentive is preferred by diabetes mellitus type 2 patients to be added to a specific lifestyle intervention that aims to improve the participant's physical activity level and eating habits.
Material and methods
This study does not fall under the scope of the Dutch Medical Research Involving Human Subjects Act (in Dutch; WMO) and therefore did not need to undergo a review by a Medical Ethical Committee. Since an Institutional Review Board (IRB) approval is only needed when daily life of participants is influenced or participants should perform specific actions an IRB approval was not warranted and therefore not obtained. The data were anonymized prior to the moment that the authors received the data. The authors did not have access to any identifying information. This DCE was conducted as preparatory part of an intervention study aimed at evaluating the efficacy and feasibility of a financial incentive added to a lifestyle intervention. The results of this experiment were used to design the financial incentive that was added to a lifestyle intervention. The lifestyle intervention aimed to improve the participants' physical activity behavior and eating habits. This lifestyle intervention was designed for patients at least 18 years of age, with diabetes mellitus type 2 and/or cardiovascular disease, who received integrated care in the primary care setting in the region of a care group in the southern part of the Netherlands. In this section, the methods of the DCE are described.
Study population
The study population for the DCE was part of the study population of the main project and was selected based on a geographic area. The area of the care group was divided into four parts. Three subareas were selected for the intervention study in which the CLI and a financial incentive would be implemented. One subarea was excluded from the intervention study and the patients living in this area were invited to fill out the questionnaire. All selected patients were at least 18 years of age, with diabetes mellitus type 2 and/or cardiovascular disease who receive integrated care in the primary care setting for their diseases. They received the DCE questionnaire by conventional mail, with a reminder sent two weeks after the first mailing. As respondents completed their questionnaire anonymously, no information about non-responders is available.
Discrete choice experiment
The attributes and levels included in the current study ( Table 1) were determined in a stepwise manner. First, a list of characteristics of financial incentives was compiled, based on available research literature [11,23]. This list was discussed in three focus group interviews (eleven participants in total) to ensure that the most important attributes for the decision-making process were included. The focus groups consisted of patients with diabetes mellitus type 2 and/or cardiovascular disease. Since no new attributes were mentioned during the focus groups, the existing list of potential attributes was sent to a new subsample of patients in a different geographical location in the northern part of the Netherlands. We believe the patients of this subsample are comparable to patients in our study as patients in all Dutch care groups receive similar diabetes care, based on Dutch general practitioners' guidelines. These patients were asked to rank the attributes from most to least important. In total, 30 individuals filled out the ranking forms, of which eleven had participated in the focus group interviews. This process led to the inclusion of four attributes of which one had three levels (moment), two had four levels (form and value) and one had five levels (prerequisite). The levels were chosen based on the feasibility in practice. See Table 1 for the levels and attributes that were included in this DCE.
Study design. A full factorial design with the identified attributes and levels as described in Table 1 would test all possible combinations of attributes and levels and would therefore consist of 240 (3 × 4 × 4 × 5) different scenarios. Due to obvious methodological (bias) and cognitive (burden on participants) reasons, not all these scenarios were included.
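As a sketch, the full factorial can be enumerated and infeasible combinations filtered out. The intermediate incentive values (30 and 50 euros) are assumptions for illustration, since only the 15-100 euro range and the level counts are stated here; the feasibility rule below is the one given in the text.

    from itertools import product

    forms = ["cash", "store voucher", "restaurant voucher", "ticket voucher"]
    values = [15, 30, 50, 100]  # euros; middle levels assumed
    moments = ["start", "halfway + end", "after"]
    prereqs = ["registration", "attendance (individual)", "attendance (group)",
               "fitness result (individual)", "fitness result (group)"]

    full = list(product(forms, values, moments, prereqs))
    print(len(full))  # 4 * 4 * 3 * 5 = 240 scenarios

    # An incentive paid at the start can only be conditional on registration.
    feasible = [s for s in full if not (s[2] == "start" and s[3] != "registration")]
    print(len(feasible))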
After pilot testing our original orthogonal DCE design, NGene 1.0 (ChoiceMetrics, 2011) software was used to develop a D-efficient design, which entails a design with an optimal variance-covariance matrix [24,25]. The design was restricted because not all combinations of attribute levels are possible in real life. For example, when the reward is given at the start of the intervention the only requirement that can be met is registration for the lifestyle program. Our final design consisted of 18 unique choice tasks. To limit the burden for the respondents, NGene divided these 18 choice tasks into two sets of nine choice tasks and each set was disseminated among half of the study population.
Questionnaire. The questionnaire consisted of two parts (S1 and S2 Files). In the first section the participant had to fill out 29 questions about age, gender, socioeconomic status, nationality, physical activity level, eating habits, quality of life (EQ-5D questionnaire; score between 0 and 1), health literacy [26,27], and attitude towards lifestyle programs. The second part of the questionnaire consisted of the actual DCE. Every respondent was presented a series of choice tasks. These choice tasks consisted of two different financial incentives described by means of varying levels of the included four attributes ( Table 1). In the questionnaire, definitions for all attributes were specified. Every choice task started with the question: 'Imagine that your physician recommends that you participate in the lifestyle program as described above. Which financial incentive would motivate you most to participate in the lifestyle program and to complete it?' An example of a choice task is shown in Fig 1. Following each of the nine choice tasks, the participant was asked whether the financial incentive of their choice would actually motivate them to participate in and finish the lifestyle program or not (opt-out question). This option was included, because in real life people also have the option not to participate in the program. After completing the nine choice tasks, patients had to fill out six questions about their attitude and opinion regarding financial incentives. Response options were the characteristics of financial incentives in the choice tasks.
Questions were asked about their opinion about using financial incentives, whether they believe it could motivate them or other people to work on their health, which attribute is most important in their choice for accepting or declining a financial incentive, and which form and prerequisites they prefer most.
The questionnaire was pilot tested in the development phase to make sure the target group was able to fill it out as intended. Respondents to the pilot questionnaire (n = 30) were able to comment on the choice of words, the length, and the layout of the final questionnaire. The respondents did not report any lack of clarity, so we did not change the text of the questionnaire.
Statistical analysis
Direct attribute ranking. Before respondents answered the choice tasks, they were asked by means of a multiple-choice question which characteristic (i.e. attribute) of a financial incentive they found most important when choosing to accept or decline a financial incentive. The results of this question are reported as percentages of the respondents who rank a certain attribute as most important.
Preferences with regard to the incentive. To estimate the preferences of the target population with regard to a financial incentive, data was analyzed using panel-mixed-logit (panel-MIXL) models. These models adjust the results for the multilevel structure of the data; every respondent completed nine choice tasks, therefore their answers may be correlated, which is accounted for using these analytical models. The following equation was tested using these models: U = V + ε = β0 + β1 × voucher exchangeable in multiple stores + β2 × voucher exchangeable in multiple restaurants + β3 × voucher for theater or concert tickets + β4 × value + β5 × after the lifestyle program + β6 × halfway (50%) and after completing (50%) the lifestyle program + β7 × 75% attendance at individual level + β8 × 75% attendance at group level + β9 × individual result fitness test + β10 × group result fitness test + ε. V describes the measurable utility of a specific financial incentive based on the attributes that were included in the DCE. β0 represents the alternative specific constant and β1-β10 are the attribute level estimates that indicate the relative importance of each attribute. The opt-out option was modelled as having a utility of zero. Finally, ε describes the unmeasured and unmeasurable variation in the respondents' preferences.
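A minimal sketch of how V can be evaluated for one incentive profile is shown below, using hypothetical coefficient values rather than the study's estimates; the attribute names are shorthand, not the questionnaire wording.

    # Hypothetical panel-MIXL estimates; under effects coding, a reference
    # level (e.g. cash) carries minus the sum of the other levels' betas.
    beta = {"asc": 0.10, "cash": 0.45, "after_program": 0.30,
            "attendance_individual": 0.55}
    beta_value = 0.004  # utility per euro of incentive value, hypothetical

    def utility(form, value, moment, prereq):
        """Measurable utility V of one incentive profile."""
        return (beta["asc"] + beta.get(form, 0.0) + beta_value * value
                + beta.get(moment, 0.0) + beta.get(prereq, 0.0))

    v = utility("cash", 100, "after_program", "attendance_individual")
    print(f"V = {v:.2f}")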
All non-linear variables are coded using effects coding. In contrast to dummy coding, the reference category is coded as -1. The coefficient for the reference category is therefore -1 × (sum of the β of the other attribute levels within the same attribute).
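A small sketch of effects coding for the four-level form attribute with pandas, assuming cash as the reference level:

    import pandas as pd

    forms = pd.Series(["cash", "store", "restaurant", "tickets"])
    codes = pd.get_dummies(forms).drop(columns="cash").astype(int)
    codes.loc[forms == "cash", :] = -1  # reference row is -1, not all zeros
    print(codes)
    # The implied coefficient for "cash" is -(b_store + b_restaurant + b_tickets).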
Based on the results of the model fit tests (Log Likelihood ratio test and AIC), all attributes were included as random parameters with a normally distributed standard deviation. By doing this, the model accounts for the heterogeneity in respondents' preferences concerning those attributes.
Relative importance scores of the attributes. The relative importance scores of the attributes represent the relative distance of all attributes to the most important attribute on a scale of 0-1. Since the coding of the data influences the estimates of the model, a new model was used to calculate the relative importance scores, in which all attribute levels have been coded similarly (-1 to 1).
The attribute with the highest relative importance score is most decisive in the choice for a financial incentive. To calculate these relative importance scores, first the difference between the largest and the smallest attribute level estimate had to be calculated for each attribute. An importance score of 1 was given to the attribute with the largest difference value. The other relative importance scores were calculated by dividing the difference values by the largest difference value, resulting in a relative distance of all attributes to the most important attribute.
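The calculation reduces to taking each attribute's coefficient range and dividing by the largest range; a sketch with hypothetical level estimates:

    estimates = {  # hypothetical attribute-level estimates (all coded -1..1)
        "form": [0.45, 0.05, -0.10, -0.40],
        "value": [-0.08, -0.03, 0.03, 0.08],
        "moment": [-0.25, -0.05, 0.30],
        "prerequisite": [0.55, 0.20, 0.05, -0.30, -0.50],
    }
    ranges = {a: max(b) - min(b) for a, b in estimates.items()}
    largest = max(ranges.values())
    importance = {a: round(r / largest, 2) for a, r in ranges.items()}
    print(importance)  # prerequisite -> 1.0; the others relative to it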
Potential uptake of different incentives. The potential uptake of a financial incentive that consists of a specific set of attributes was estimated. Since all attributes were included as random parameters in the analyses and their standard deviations had to be taken into account, simulation was used to calculate the choice probabilities. The mean participation rate over all simulations (n = 10,000) was estimated by taking the average of all simulated participation rate probabilities, which were calculated as 1/(1 + exp(−V)).
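A minimal simulation sketch of this uptake calculation, assuming a hypothetical mean and standard deviation for the normally distributed utility of one incentive profile:

    import numpy as np

    rng = np.random.default_rng(42)
    mean_v, sd_v = 0.36, 0.90  # hypothetical utility mean and SD
    v_draws = rng.normal(mean_v, sd_v, 10_000)  # preference heterogeneity
    uptake = 1.0 / (1.0 + np.exp(-v_draws))  # probability vs. opt-out (U = 0)
    print(f"mean potential uptake = {uptake.mean():.1%}")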
Participant characteristics
The questionnaire was sent to 971 individuals and 290 questionnaires were returned in total (response rate of 29.9%). The mean age of the respondents was 69.4 years (range 38 to 92 years) and 60.4% were male. About half of the participants had a low educational level. Participants scored their health-related quality of life (EQ-5D) on average with a score of 0.84 for men and 0.79 for women (overall score of 0.82), while 12.2% of the respondents had an inadequate health literacy (score �2; self-reported). Almost a quarter of the participants believed that using financial incentives to motivate people to improve their health would be useful and 42.7% considered it not useful. In total, 16.9% of the respondents reported that a financial incentive would personally motivate them to improve their health while 64.2% reported that it would not motivate them ( Table 2).
Direct attribute ranking
Most of the respondents (52.5%) reported that the prerequisites for receiving the incentive were the most important attribute for them, followed by the form of the incentive (22.1%) and the value of the incentive (14.9%). Finally, the smallest number of respondents (10.5%) marked the moment of awarding the incentive as the most important attribute (Fig 2).
Preferences with regard to the incentive
Respondents preferred cash money over all other forms of incentives, while a voucher for theater or concert tickets was the least preferred. The higher the value of the incentive, the more individuals preferred the incentive. Respondents preferred to receive the incentive after completing the lifestyle program over receiving it at any other point in time. Finally, respondents preferred the prerequisite of 75% attendance at individual level over all other prerequisites. The least preferred prerequisite for receiving the incentive was the group result of the fitness test (Table 3).
Relative importance scores of the attributes
Respondents reported that the prerequisite for receiving the incentive was the most important attribute (score 1.00). The moment of receiving the incentive was about half as important (0.52) and the value of the incentive had the lowest relative importance score.
Potential uptake of different incentives
Potential uptake rates varied strongly, ranging from 37.9% to 58.8%, based on the characteristics of the incentive. The financial incentive with the highest potential uptake (58.8%) was cash money with a value of €100 that is handed out afterwards with the requirement that the individual has attended at least 75% of the appointments ( Table 4). The incentive with the lowest potential uptake (37.9%) was a voucher for theater or concert tickets of €15 that is handed out at the start with no requirements besides registration for the lifestyle program ( Table 4).
Discussion
We performed a discrete choice experiment to identify which financial incentive should preferably be added to a combined lifestyle intervention among patients with diabetes type 2. This study is, to our knowledge, the first to investigate preferences for a financial incentive added to a lifestyle program. The most preferred financial incentive resulting in the highest potential uptake based on this DCE was cash money with a value of €100, handed out after completing the lifestyle program with the prerequisite that the participant had attended at least 75% of the appointments. The prerequisite for receiving the financial incentive was the most important attribute when patients had to decide whether or not to participate in a lifestyle program with an incentive, while the monetary value of the incentive had the lowest relative importance score.
The range of the potential uptake of all incentives was between 37.9% and 58.8%. This range is not very wide, taking into account the great variety of financial incentives that were examined in this study. Still, these differences in potential uptake do matter in practice, which makes this study relevant. It is a noticeable finding that the easiest requirement (registering for the lifestyle program and receiving the incentive at the start of the program) showed quite low potential uptake percentages (range between 37.9% and 45.4%). The study by Wanders et al. describes differences in effect size between out-of-pocket costs and financial rewards on the willingness to participate in a lifestyle program. In contrast to the results of our study, the study by Wanders et al. showed that a reward with a higher value is not always preferred [28], and that individuals may be offended by the high values of the incentive that were offered. In our study we used lower values for the incentive than the cut-off point of the study by Wanders et al., since a higher value than €100 was not feasible with a view to implementing the incentive in practice. Overall, the value did not have much impact on the potential uptake of the incentive ( Table 3 & Table 4).
Sixty-two percent of the respondents had a household income between €1000 and €3000 per month. According to the OECD, the average household income in the Netherlands is about €2100 a month [29]. The average age of the respondents was 69.4 years, implying that most were retired and entitled to a state pension and possibly to a supplementary pension scheme. In this group, the value of the financial incentive was found not to influence the potential uptake to a large extent. We hypothesize that retired individuals might no longer face major expenses, such as raising children or paying a mortgage, and may not need the money. The prerequisite for the financial incentive might be a more important determinant of their choice, because receiving the incentive and appreciating the reward is more justifiable if they have accomplished something.
Our target population consisted of patients with diabetes type 2 and/or cardiovascular disease. The average age was almost 70 years and half of the study population had a low level of education. In our study, 12.2% of the respondents had a low health literacy level. According to a report of the HLS-EU Consortium, about 29% of the Dutch population has an inadequate or problematic health literacy [30]. This relatively low percentage of individuals with low health literacy might be the result of selective response, since individuals with low health literacy might also not understand the questionnaire and therefore not respond. Completing a DCE is quite a complex task. One strength of our study is that the questionnaire was first pilot tested on readability and intelligibility, which is recommended in order to obtain valid results [31,32]. By doing this, we reduced the chance that participants did not understand the final questionnaire. Furthermore, to limit the burden for the participants we divided the choice sets into two blocks.
There is little knowledge with regard to the response rates for DCE questionnaires. A study by Watson et al. found that the response rate decreases as the cognitive burden of the questionnaire increases [33]. The response rate in our DCE was 29.9%, which we believe is quite good, taking into account the aforementioned characteristics of our target population and the general complexity of the task. Overall, despite some limitations of the DCE technique, it is currently the most accepted method to identify people's preferences. Overall, 64.2% of the respondents reported that a financial incentive would not motivate them to participate in and complete the lifestyle program. We sent this questionnaire to all patients with diabetes type 2 registered with a regional care group, a population that also includes individuals who are already sufficiently active. On the one hand, there might be selective nonresponse, with these active individuals not completing the questionnaire because they do not see the point of the program. On the other hand, the individuals who are sufficiently active and did fill out the questionnaire might not be motivated by receiving a financial incentive. The fact that some respondents are not motivated by an incentive does not mean that the wrong attributes were chosen in this study: the attributes are characteristics of the incentive that influence the choice of whether or not to accept it. We chose our attributes with input from our target population, so the selection of attributes was evidence based. Moreover, our results show large heterogeneity in preferences. For example, the alternative-specific constant shows that some respondents have a strong preference for receiving an incentive, whereas others have a strong preference for not receiving one. A similar pattern is seen for the value of the incentive: some people attach importance to it, whereas others do not. Due to the sample size, we were not able to stratify the analyses, but it is likely that the heterogeneity can be explained partly by the respondents who stated that an incentive would not motivate them.
Although only a small amount of research has been performed on the preferences of the target population for a financial incentive, it is becoming an increasingly important research area. Financial incentives may improve the effectiveness of, for example, prevention programs. One concern is that the implementation of financial incentives might pave the way for patients to misuse the available resources [34]. This might result in negative opinions and resistance from the public towards programs that contain financial incentives. In spite of the concerns that individuals may misuse HPFI, research shows that under certain conditions a HPFI is accepted more readily by the general public. These conditions include, for example, that the HPFI is effective and cost-effective and that it is closely monitored and evaluated [34][35][36][37].

Table 4. Potential uptake in percentages of all possible financial incentives (lowest and highest potential uptake rates in bold); its rows cover the four incentive forms: cash, voucher exchangeable in multiple stores, voucher exchangeable in multiple restaurants, and voucher for theater or concert tickets.

Despite the arguments above, it is still useful to perform research on the preferences for and effectiveness of financial incentives. Lifestyle interventions can support good short-term adherence (up to twelve weeks) to exercise programs for chronically ill patients, but long-term adherence (up to four years) is poor and not well documented [38]. By completing lifestyle programs that are extended enough to achieve behavioral change, the chance that individuals will keep exercising in the long term might be higher. New and creative ways have to be found to increase the adherence of the chronically ill to lifestyle programs. Financial incentives might form one of these new instruments.
This study contributes to the knowledge of what chronically ill patients rate as more and less important with regard to financial incentives in lifestyle programs. The results of this DCE will be used in a study to evaluate the effectiveness of a financial incentive for improving the health of diabetes patients. By first identifying the preferred financial incentive, the probability that the financial incentive is effective will be maximized. In a broader perspective, this study contributes to the knowledge of preferences of individuals with regard to financial incentives.
Conclusions
Among potential participants for a specified lifestyle program for the chronically ill, the most preferred financial incentive is cash money with a value of €100 that is handed out after the lifestyle program is finished with the prerequisite that the participant has attended at least 75% of the appointments. The potential uptake of the different financial incentives included in this DCE varied from 37.9% up to 58.8%. The value of the incentive did not significantly influence the potential uptake. However, the potential uptake and associated potential effect of the financial incentive is influenced by the type of financial incentive.
"year": 2019,
"sha1": "136da5f55042102a9100dcab9c7c857ef8adc19b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0219112&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "136da5f55042102a9100dcab9c7c857ef8adc19b",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Atherosclerosis and Endometriosis: The Role of Diet and Oxidative Stress in a Gender-Specific Disorder
Background: Accelerated atherosclerosis in patients with endometriosis has been hypothesised, and lifestyle improvement might help control cardiovascular risk. We explored cardiometabolic markers and oxidative stress and evaluated the effects of the Mediterranean Diet (MD) in modulating these markers. Methods: In this prospective study, we included 35 women with endometriosis. At baseline (T0) and after 3 (T1) and 6 (T2) months from the start of the diet, we investigated cardiometabolic parameters, lifestyle and oxidative stress. Results: After a 3-month intervention with the MD, we observed a significant reduction in total cholesterol (p = 0.01) and LDL-c (p = 0.003). At T1 we observed an increase in vitamins B12 and E, folate and zinc. After 6 months, zinc (p = 0.04) and folate (p = 0.08) increased in comparison to T0. A reduction in homocysteine from T0 to T1 (p = 0.01) was found. After 3 months, increases in Rapid Assessment of Physical Activity (RAPA) 1 (p < 0.001) and RAPA 2 (p = 0.009) scores were observed. We observed high levels of oxidative stress markers at baseline. After 6 months of the MD, a significant improvement in lymphocyte Reactive Oxygen Species (ROS) production (p < 0.001) and total antioxidant capacity was observed (p = 0.02). Conclusions: Lifestyle improvement, and in particular the Mediterranean dietary intervention, improved the metabolic and oxidative profile and overall health-related quality of life.
Introduction
Atherosclerosis represents a multifocal and progressive systemic disease of the arterial wall. The pathogenesis of atherosclerosis is linked to the interaction of known and emerging risk factors [1]. The key element in the pathogenesis of this systemic disorder is endothelial dysfunction (ED), which is characterised by altered endothelium-dependent vasodilation and by a specific state of "endothelial activation". ED promotes all stages of atherogenesis through proinflammatory, proliferative and procoagulant mechanisms. Therefore, ED may reflect a vascular phenotype with atherogenic potential. In ED pathogenesis, reactive oxygen species (ROS) hyperproduction and oxidative stress appear to have a relevant role, being closely associated with both traditional and emerging atherosclerotic risk factors [2]. ED is a reversible disorder, and therapeutic strategies aimed at controlling cardiovascular risk factors may translate a "sick" endothelium into a "healthy" endothelium. In the context of cardiovascular diseases, we cannot ignore a gender-specific approach, so it is necessary to analyse not only the prevalence of traditional risk factors and their different impacts in relation to gender, but also those emerging risk factors that are more common or exclusive in the female gender, such as endometriosis.
Endometriosis is a chronic, debilitating disease affecting roughly 10% (190 million) of reproductive-age women globally [3]. It is associated with pelvic pain and infertility and is characterised by endometrial-like tissue outside the uterus [4].
Endometriosis and atherosclerosis are traditionally viewed as distinct entities, with endometriosis affecting young reproductive-age women, while atherosclerosis is an ageing-related process. The link between endometriosis and cardiovascular diseases is increasingly being studied and has been partly explained by systemic inflammation, with potentially underlying genetic similarities that, in turn, increase the risk of atherosclerosis, coronary artery disease and cardiovascular morbidity and mortality [5]. Chronic inflammation, oxidative stress, endothelial dysfunction, and cellular proliferation are common hallmarks of both atherosclerosis and endometriosis [6][7][8]. Altered expression of antioxidants and abnormal lipid profiles have been identified as risk factors for cardiovascular diseases and have similarly been found in women with endometriosis.
Oxidative stress, increased inflammatory cytokines and mast cell activation are considered relevant steps in the pathophysiology and progression of the disease [9]. Retrograde menstruation is likely to carry into the peritoneal cavity several well-known inducers of oxidative stress, such as erythrocytes, apoptotic endometrial tissue, and cell debris, in addition to pelvic macrophages [10]. Chronic inflammation and oxidative stress are not restricted to the peritoneal cavity and induce a state of systemic subclinical inflammation. Findings from the literature evidenced the beneficial effects of food's natural phytocomponents and dietary patterns, rich in flavonoids and polyphenols, in oxidative diseases [11]. In particular, experimental data reported that hydroxytyrosol, a natural compound of olive oil, exerted anti-inflammatory and antioxidant effects and decreased endometriotic cyst diameter, area and volume [9].
Endometriosis has also been associated with an atherogenic lipid profile, notable for increased LDL, non-high-density lipoprotein and triglycerides, and decreased HDL levels [12,13]. The most convincing evidence was provided by Mu et al. in the Nurses' Health Study II, with a 25% increased risk of developing hypercholesterolaemia in endometriosis and a 22% increased risk of developing laparoscopically confirmed endometriosis in women with hypercholesterolaemia [14]. This signals the need for targeted prevention and early-detection guidelines for chronic and life-threatening diseases in women with endometriosis. In this context, the role of nutrition in determining the establishment and progression of endometriosis has recently become a topic of interest, and several observational studies have investigated certain nutrition habits and lifestyles as risk factors for endometriosis [15,16].
The role of diet in endometriosis has gained more attention since it has been observed that diet can affect several processes that are involved in endometriosis, including inflammation, prostaglandin metabolism, and oestrogen activity [17]. To date, it is evident that this topic is characterised by an extreme paucity of scientific data and by an extreme variability in the results obtained. Moreover, it is important to note the difference between the role of diet in the risk of developing endometriosis and a dietary intervention with the aim of suppressing endometriosis-related symptoms.
Lifestyle improvement by adopting Mediterranean dietary patterns, an evidence-based nutritional model for the prevention of cardiovascular disease, might also represent a future non-pharmacological therapeutic intervention for controlling cardiovascular risk in women with endometriosis. No data are available concerning the role of the Mediterranean Diet in influencing subclinical atherosclerosis, and in turn cardiovascular risk, in endometriosis. Based on the above-mentioned observations, the aim of this study is to explore cardiometabolic, endothelial, oxidative stress and inflammatory markers correlated with the atherosclerotic process, and to evaluate the effects of Mediterranean Diet intervention in modulating these markers.
Study Design
In this prospective study, we investigated 90 Caucasian women with endometriosis referred to the Internal Medicine Clinic at the Centre for Assisted Reproductive Technology, Division of Obstetrics and Gynaecology of Careggi University Hospital, Florence, from March 2020 to May 2022. All women were referred to the Endometriosis Centre of Careggi University Hospital, Florence, a third-level centre for endometriosis treatment. In all patients, the diagnosis of endometriosis was confirmed by diagnostic imaging (US or MRI) and/or laparoscopy performed by gynaecologists expert in this field. Pre-existing atherothrombotic disorders, hypertension, diabetes, autoimmune diseases, renal failure, obesity, pregnancy and chronic illnesses that are known to affect gastrointestinal absorption of nutrients (celiac disease, Crohn's disease, ulcerative colitis, or cystic fibrosis) represented exclusion criteria. Women participating or who had participated in a weight loss treatment and nutrition programme in the last 3 months were excluded. Therefore, we included 35 women with endometriosis of reproductive age in the study. At the first evaluation, all women were in therapy with oestrogen-progestins or progestins. The baseline assessment included demographic information and cardiovascular risk factors. Moreover, at the first visit, we focused on the current nutritional habits of endometriosis patients before the start of the diet. We administered a validated questionnaire to evaluate accurate data about dietary intake and adherence to the Mediterranean Diet [18,19].
During the subsequent 10 days after the first visit, the participants completed a food diary giving a detailed description of each food consumed, the time of consumption, and amount, using household measures, in order to capture their habitual diet and stool habit assessment. After the evaluation of the food diary, we developed a personalised tailored nutrition Mediterranean Diet plan agreed upon with the patient to achieve the best compliance. The energy requirements were calculated for each woman. The diet consisted of 50-55% carbohydrate, 25-30% total fat (≤10% saturated fat) and 15-20% protein. Each woman was provided with a 1-week menu plan as well as information on the food groups that could be included and those that should be avoided.
During the study, 3 clinical evaluations were performed: at baseline, before the start of dietary treatment (T0); 3 months after the start of the dietary intervention (T1); and after a further 3 months (6 months from the beginning) (T2). At baseline (T0) and after 3 (T1) and 6 (T2) months from the start of the diet, we evaluated:
- coagulative profile, including factor VIII (FVIII) and von Willebrand factor (vWF); vitamin profile (folate, B12, B6, E) and zinc; endothelial markers (homocysteine, Lp(a)); lipid and glucose profile; liver panel enzymes; and hs-CRP;
- blood redox status (oxidative stress markers such as lipid peroxidation markers, plasma total antioxidant capacity and blood leukocyte subpopulation ROS production).
At each clinical evaluation, anthropometric parameters (weight, height, BMI, waist and hip circumference, waist to hip ratio) were measured. Compliance with the Mediterranean Diet was evaluated during follow-up visits using a validated adherence score [19].
The investigation conformed to the principles outlined in the Declaration of Helsinki. The Local Ethics Committee (Azienda Ospedaliero-Universitaria Careggi) approved the original study (Reference: 21140).
Assessment of Cardiovascular Risk Factors
We assessed women's vascular profiles by collecting information about the family history of CVDs, defined as a history of CVDs in first-degree relatives <55 years of age in men and <65 years in women [20]. During the clinical evaluation, traditional (dyslipidaemia, overweight and abdominal fat, smoking habit, sedentary behaviour), prevalent and sex-related (migraine with aura, history of negative obstetric events, such as recurrent (≥2) pregnancy loss, recurrent implantation failure and placenta-mediated pregnancy complications) CV risk factors were also investigated. Weight was measured in underwear and without shoes with a mechanical column scale with a stadiometer (SECA). According to the WHO criteria, underweight was defined as a BMI <18.5 kg/m², normal weight as a BMI between 18.5 and 24.99 kg/m², and overweight as a BMI ≥25 kg/m². In order to evaluate abdominal fat, anthropometric parameters were also measured. Hip circumference was measured at the widest point over the buttocks, and waist circumference was measured midway between the inferior margin of the lowest rib and the iliac crest in the horizontal plane at the end of normal expiration. A waist circumference ≥80 cm was considered a marker of increased CV risk according to Alberti et al. [21]; the waist to hip ratio (WHR) was obtained by dividing the waist circumference by the hip circumference, and values ≥0.85 were considered a marker of increased cardiovascular risk [22]. To limit measurement bias, all anthropometric measurements were performed by the same operator. Dyslipidaemia was defined according to the European Society of Cardiology (ESC) guidelines [23]. Hyperhomocysteinaemia was defined as a pathological condition of excessive plasma homocysteine (>13 µmol/L). Smokers were defined as current or recent (ex-smokers who stopped less than 5 years earlier) smokers. The diagnosis of migraine with aura was made by a physician according to The International Classification of Headache Disorders, 3rd edition [24]. The Rapid Assessment of Physical Activity (RAPA) questionnaire was administered to assess the level of physical activity. RAPA evaluates a wide range of physical activity levels, from sedentary to vigorous activity, as well as strength and flexibility training [25].
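As an illustration, a hypothetical helper implementing the cut-offs listed above (WHO BMI classes; waist circumference ≥80 cm and WHR ≥0.85 as markers of increased cardiovascular risk in women) might look as follows; the function name and example values are invented.

```python
# Classify a woman's anthropometric profile using the study's cut-offs.
def classify(weight_kg: float, height_m: float, waist_cm: float, hip_cm: float):
    bmi = weight_kg / height_m**2
    if bmi < 18.5:
        bmi_class = "underweight"
    elif bmi < 25.0:
        bmi_class = "normal weight"
    else:
        bmi_class = "overweight"
    whr = waist_cm / hip_cm  # waist-to-hip ratio
    return {
        "bmi": round(bmi, 1),
        "bmi_class": bmi_class,
        "high_waist": waist_cm >= 80.0,  # increased CV risk marker
        "high_whr": whr >= 0.85,         # increased CV risk marker
    }

print(classify(weight_kg=62.0, height_m=1.65, waist_cm=82.0, hip_cm=98.0))
```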
Plasma Lipid Peroxidation Estimation (ALDetect Lipid Peroxidation Assay)
Plasma lipid peroxide levels were estimated using an ALDetect Lipid Peroxidation assay (BML-AK170-Enzo Life) as previously reported [28]. The results are expressed as equivalent of MDA (nmol/mL).
Plasma Total Antioxidant Capacity Estimation
The ORAC (oxygen radical absorbance capacity) assay was performed as previously reported [29]. Plasma total antioxidant capacity was calculated using the standard curve based on Trolox concentration. The results are expressed as Trolox equivalents (µM) and then normalised for the protein concentration.
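A minimal sketch of the Trolox standard-curve step described above, with invented readings: fit a line to the assay signal from Trolox standards, convert a sample reading to Trolox equivalents (µM), then normalise by protein concentration.

```python
import numpy as np

trolox_uM = np.array([0, 5, 10, 20, 40])          # standard concentrations
readings = np.array([1.0, 3.1, 5.2, 9.0, 17.1])   # e.g. net assay signal

slope, intercept = np.polyfit(trolox_uM, readings, 1)  # linear standard curve

sample_reading = 7.4
trolox_equiv_uM = (sample_reading - intercept) / slope
protein_mg_per_mL = 55.0
print(trolox_equiv_uM / protein_mg_per_mL)  # Trolox equivalents per mg protein
```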
Statistical Analysis
The sample size was calculated starting from historical data [30] and estimating an improvement (reduction) in high-sensitivity C-reactive protein (hs-CRP) of 0.70 with an SD of 1.4. Assuming a power of 0.80 and an α of 0.05, the required sample size was estimated at 33 subjects.
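As a rough check, a normal-approximation formula for a paired design reproduces a sample size close to the reported 33; a minimal sketch:

```python
# n = ((z_{1-alpha/2} + z_{power}) * SD / delta)^2 for a paired mean change.
from scipy.stats import norm

delta, sd = 0.70, 1.4
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)  # ~1.96
z_b = norm.ppf(power)          # ~0.84
n = ((z_a + z_b) * sd / delta) ** 2
print(round(n))  # ~31; small-sample (t) corrections bring it near the 33 reported
```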
The results were expressed as median (range) or mean, as appropriate. Categorical variables were presented as frequencies and percentages. McNemar's test was used to analyse paired dichotomous variables and the Wilcoxon signed-rank test for paired continuous variables. Unpaired Mann-Whitney U tests were performed to compare independent groups. Repeated-measures ANOVAs were used to examine mean differences in the subgroup of women who completed all three evaluations. The Spearman (rho) test was used to estimate correlations between variables. Differences were considered statistically significant if p < 0.05. Statistical analysis was performed using IBM SPSS Statistics 28 for Windows (SPSS, Chicago, IL, USA).
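A sketch of the main tests named above using scipy equivalents of the SPSS procedures; the data arrays are illustrative placeholders, not study data.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, spearmanr

ldl_t0 = np.array([130, 118, 145, 122, 110, 138])
ldl_t1 = np.array([118, 112, 130, 119, 108, 125])

# Wilcoxon signed-rank test for paired T0 vs T1 measurements
print(wilcoxon(ldl_t0, ldl_t1))

# Mann-Whitney U test for two independent groups
group_a, group_b = ldl_t0[:3], ldl_t0[3:]
print(mannwhitneyu(group_a, group_b))

# Spearman rank correlation between two variables
adherence = np.array([8, 10, 7, 11, 12, 9])
print(spearmanr(adherence, ldl_t1))
```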
Results
A total of 35 participants completed the first phase of the intervention. Sixteen women completed the entire study, and 19 women did not complete phase 2 (T2): two of them became pregnant in the meantime, while, for the remaining 17, fewer than three months had passed since T1 at the time of analysis.
About 37% of the study population had a high level of education (>13 years of education). From gynaecologists' medical reports, we obtained information about the phenotype of endometriosis; in particular, 17.1% of women had Deep Infiltrating Endometriosis (DIE) and 17.1% had a mixed phenotype (DIE + Ovarian Endometrioma).
The baseline demographic characteristics of the population studied are shown in Table 1. The median age of the patients was 31 (range 21-46) years. The evaluation of traditional CV risk factors at baseline showed that about 11% of women had a BMI between 25 and 29.99 kg/m², while about 23% were underweight. About 26% of women had a waist circumference above 80 cm. Dyslipidaemia was present in more than 50% of women; in particular, 40% had high LDL-c levels.
In framing women's risk profiles, we also considered family history of CVD and observed that more than 25% of women had a positive family history. Concerning migraine, a stronger and more prevalent risk factor in women, we observed a high prevalence of migraine with aura (11.4%).
Finally, by analysing the composition of the diet through the food diary at T0, we observed a daily macronutrient intake in which carbohydrates accounted for 48% of dietary energy. Carbohydrate intake was therefore lower than desirable for the Mediterranean dietary pattern (50-55%), while sugar consumption was high, considering that the baseline evaluation showed that only 14.3% of the patients consumed 2-3 portions of fruit a day. We therefore inferred that total sugar intake was mainly due to the consumption of added sugars. Fibre consumption was lower than the 15 g/1000 kcal/day recommended by the LARN [31], which can also be attributed to the low percentage of women who regularly consumed fruit and vegetables. Moreover, the consumption of whole grains was also infrequent.
Fat consumption exceeded 30% of energy intake and, specifically, the median intake of saturated fatty acids was 10.5%, with values ranging from 6.6% up to 16.6%.
Biochemical Parameters
Biochemical parameters at T0, including coagulative profile, vitamins, homocysteine, liver and renal function, and fasting glucose, are shown in Table 2. The overall frequency of folate and vitamin B12 deficiencies was estimated to be 26.5% and 20%, respectively ( Figure 1). The frequency of elevated homocysteine levels was 8.6% (>13 µmol/L).
We observed no vitamin E deficiency and only 2.9% of women had reduced levels of vitamin B6 and zinc.
Changes in Anthropometric and Biochemical Parameters
We evaluated anthropometric parameters in women from T0 to T2 and found no significant differences in BMI, waist circumference or WHR between T0 and T1 or between T0 and T2.
We analysed changes in anthropometric parameters according to BMI at baseline, considering separately women who at T0 were underweight, of normal BMI, or overweight. Tables 3-5 show the changes in the anthropometric parameters at T1 and T2. In particular, in underweight women (Table 4), we observed a significant increase in BMI at T1 (p = 0.04), while in overweight women (Table 5), we observed at T1 a reduction in waist circumference and WHR, although not significant (p = 0.06). Of relevance, 3 women who were underweight at T0 had a normal BMI at T1 and T2. At T0, all the anthropometric parameters were inversely related to the Mediterranean Diet adherence score, albeit without statistical significance, whereas this correlation was significant at T1 (BMI rho = −0.40, p = 0.02; waist rho = −0.52, p = 0.001; WHR rho = −0.44, p = 0.009), indicating an inverse association that became more evident as adherence to the Mediterranean Diet increased. We further evaluated changes in anthropometric parameters by performing repeated-measures analysis in the subgroup of women (n = 16) who completed all three evaluations (Table 6).

Regarding the lipid profile, we observed a significant reduction in total cholesterol (p = 0.01), and in particular in LDL-c levels (p = 0.003). This trend of reduction in LDL-c also seemed to be present at T2, although not significantly, probably due to the small sample size (Table 7). At T1, only 14.3% of women (vs. 31.4% at T0) had total cholesterol levels above 200 mg/dL, and 20% of women (vs. 40% at T0) had LDL-c levels above 116 mg/dL. There were no significant changes in the prevalence of hypertriglyceridaemia or low HDL-c levels.

Moreover, we analysed the vitamin profile and observed an increase in the levels of vitamins B12 and E, folate and zinc (Figure 2). In particular, there was a statistically significant increase in folate levels from T0 to T1 (p = 0.01). After six months, in the 16 patients who completed the clinical evaluation at T2, zinc (T0 vs. T2, p = 0.04) and folate (T0 vs. T2, p = 0.08) levels increased in comparison to T0. Finally, at T1 we observed a reduction in the percentage of patients with vitamin B12 deficit (20% at T0 vs. 5.9% at T1), whereas all women with folate deficit had already corrected this condition. A reduction in homocysteine levels from T0 to T1 (9 µmol/L vs. 7.9 µmol/L, p = 0.01), probably due to an improvement in the vitamins regulating its metabolism (folate, vitamin B12 and vitamin B6), was found (Figure 3). Indeed, two of the three patients with homocysteine levels above 13 µmol/L had normalised their levels at T1; in just one patient, the homocysteine level remained slightly beyond the range of normality. Regarding inflammatory status, hs-CRP levels were beyond the normal range in 14.3% of the patients. At T1, we found a slight, although not statistically significant, reduction in this parameter (1 mg/L vs. 0.8 mg/L). At T2, in the sample that completed the study, we observed a median hs-CRP value of 0.6 (0.01-3.4).
Lifestyle Changes
Through the analysis of the Mediterranean Diet adherence score at the end of the first phase, we found a statistically significant increase in the median adherence score (T0 vs. T1, p < 0.001) (Figure 4), which was still significantly higher at T2, with a median value of 13 (8-15) (T0 vs. T2, p < 0.001). The nutrition evaluation also took daily water intake into account, which was very low in some patients, with a median value of 1.5 L (0.…). At T1, we found a marked, although not statistically significant, increase to 1.75 L (0.25-4) (p = 0.07). This improvement may partly reflect the season in which each woman had her T1 visit (spring or summer rather than fall or winter).
The physical activity levels were evaluated with the Rapid Assessment of Physical Activity (RAPA) tool. At baseline, the median levels were very low for both RAPA 1 (aerobic), 2.4 (1-6), and RAPA 2 (strength and flexibility), 0.3 (0-2). After the first 3 months of the intervention, the increase was statistically significant for both RAPA 1 (p < 0.001) and RAPA 2 (p = 0.009). Among the patients who concluded the study, RAPA 1 and RAPA 2 significantly increased from T0 to T2 (p = 0.01 and p = 0.04, respectively) (Table 6). In general, a large proportion of patients increased their aerobic activities (fast walking, aerobics classes and swimming) as well as stretching or yoga for flexibility (Figure 5).
Oxidative Stress
We evaluated blood global redox status by assessing leukocyte intracellular ROS production, plasma lipid peroxidation, and plasma total antioxidant capacity. We observed significant alterations compared to the normal range previously published for a population of healthy age-matched women [32], in particular for plasma lipid peroxidation (increased at T0 in 85.7% of women), total antioxidant capacity (reduced at T0 in 51.4% of women), neutrophil ROS production (increased at T0 in 68.6%), monocyte ROS production (increased at T0 in 77.1%) and lymphocyte ROS production (increased at T0 in 62.9%).
We did not observe significant changes between T0 and T1, but there was an improvement between T0 and T2 (Table 8). In particular, by analysing ROS production and plasma antioxidant capacity in the patients who concluded the study compared to the initial sample, we observed a statistically significant reduction in lymphocyte ROS production (p < 0.001) and an increase in total antioxidant capacity (p = 0.02). We also observed this result by analysing ROS production and plasma antioxidant capacity from T0 to T2 in the 16 women who concluded the study (Figure 6). Moreover, considering the relation between adherence to the Mediterranean dietary pattern and the oxidative stress profile, we observed at T2 (after 6 months) a significant inverse correlation between leukocyte ROS production and adherence to the Mediterranean Diet.
Discussion
This study aimed to evaluate the role of the Mediterranean Diet and lifestyle in improving the cardiovascular risk profile of women with endometriosis and investigated common mechanisms shared by atherosclerosis and endometriosis. The aetiopathogenesis of endometriosis is a multifactorial process that determines the development of an extremely heterogeneous disease. Several hypotheses have been suggested; nevertheless, none is able to completely explain its pathogenesis and all its different clinical features. Evidence suggests that immune cells, adhesion molecules, extracellular matrix metalloproteinase and pro-inflammatory cytokines activate the peritoneal microenvironment, leading to differentiation, adhesion, proliferation and survival of ectopic endometrial cells [33]. Systemic chronic inflammation, oxidative stress and proatherogenic lipid profile are the main mechanisms involved in the development and progression of atherosclerosis and represent the possible link with endometriosis. It could be useful to bridge the gap between comorbidities and atherosclerotic burden in young women with endometriosis by integrating knowledge of endometriosis with clinical assessment of internal medicine [13].
Data from the Nurses' Health Study II [34] suggested that endometriosis is inversely associated with early adult BMI. A recent experimental study addressed the causal relationship involving BMI, metabolic status and endometriosis in a mouse model [35]. Mice with experimentally induced endometriosis were found to display lower body weight than the controls, leading to the conclusion that endometriosis is causal to the loss of body weight and body fat. Metabolic dysfunction and obesity usually co-exist, but disrupted metabolic status can also arise independently of body weight. In our study, we excluded obese women in order to limit confounding factors on inflammation markers. We observed that about 11% of women were overweight and about 26% had waist circumference higher than 80 cm, a well-established independent predictor of morbidity and mortality [36]. On the other hand, we found that about 23% of women were underweight, thus possibly underlining the already known relationship between lower BMI and endometriosis. Of interest, we observed after the 3-month intervention a significant increase in BMI in underweight women and a significant reduction in waist circumference and WHR in overweight women.
Concerning the link between endometriosis and dyslipidaemia, few studies have investigated the lipid profile in women with endometriosis [14]. In our study, we observed that more than 50% of women had dyslipidaemia. Our findings are in keeping with Melo et al. [12]; moreover, we provided information about another marker of atherosclerosis and endothelial dysfunction strictly related to lipid metabolism, lipoprotein(a), which was altered in about 26% of our study population. Based on these observations, we evaluated eating habits at baseline through food diaries, and we observed a fat intake above 30% of energy, with more than 10% from saturated fatty acids, as previously reported in other studies [37]. Regarding the other macronutrients at baseline, women with endometriosis showed lower relative carbohydrate and protein intake. Data from the literature show that specific fatty acids seem to influence the risk for endometriosis: a high intake of long-chain omega-3 fatty acids decreases the risk, whereas a high intake of trans-unsaturated fat seems to favour the development of endometriosis and inflammatory processes, being associated with increased menstrual pain and autoimmune and hormonal disorders [38]. Furthermore, murine studies have shown a significant influence of elevated fat intake (45% of daily calories from fat) on endometriosis lesions, with an increase in pro-inflammatory cytokines and oxidative stress [39]. Because inflammation plays a pivotal role in the pathogenesis and progression of endometriosis, regulation or monitoring of quantitative and qualitative fat intake might be recommended for disease treatment. At the end of the 3-month intervention with the Mediterranean Diet, we observed that this dietary pattern was very effective in reducing total and LDL cholesterol levels.
These data reinforce the previous evidence regarding the healthy effects of the Mediterranean Diet since the development of a less atherogenic LDL phenotype could be a possible explanation for some of the cardioprotective benefits of this dietary pattern [40]. Data from the literature evidenced that the peritoneal fluid of women with endometriosis is characterised by high lipoprotein levels, particularly LDL-c, which generates oxidised lipid components in a macrophage-rich inflammatory milieu [41]. To date, LDL oxidation represents a major cause of endothelial injury, responsible for leukocyte and macrophage migration [42] and induction of inflammatory cytokines, thus favouring internalisation of oxidised LDL particles and contributing to the formation of atheromatous plaques. LDL-c oxidation is a crucial event in the development of atherosclerosis, with low LDL levels reducing the risk of major events in patients with CVD [43]. It is well known that LDL level reduction prevents oxidation, as recognised by European and American societies of cardiology guidelines [44]. Furthermore, LDL oxidation is not the sole initiator of inflammation, as the imbalance between oxidants and antioxidants also plays an important role in the atherogenesis process.
In the evaluation of biohumoral parameters, besides the lipid profile, we evaluated blood levels of vitamin E and zinc, in addition to vitamins B6 and B12 and folate, which are known to be linked to homocysteine levels. Our endometriosis patients showed significantly lower levels of vitamin B12 and folate at baseline, and elevated homocysteine levels were found in about 9% of women. Hyperhomocysteinaemia is most commonly caused by B-vitamin deficiency, especially of folate, B6 and B12. Elevated homocysteine promotes atherosclerosis through increased oxidative stress, impaired endothelial function, and thrombosis induction [45]. Since vitamin B12 is primarily found in animal products, the lower intake of protein (unbalanced diet) in comparison to fat in endometriosis patients could be responsible for this observation. Similarly, folate levels were presumably linked to an irregular vegetable intake, as observed in the eating habits at baseline. At the end of the 3-month intervention, we observed the efficacy of the Mediterranean Diet in reducing homocysteine concentration and in increasing folate and vitamin B12 levels. Since B-vitamins are assumed to have positive effects on endometriosis and inflammation processes in general, the relevance and influence of lower vitamin B12 or folate intake needs further investigation [46].
With respect to antioxidant vitamins, the mean vitamin E level in our study population fell within the normal range. The main function of vitamin E is to act as a structural antioxidant, in particular at the membrane level. In general, a deficiency of this vitamin exacerbates the inflammatory response and impairs both cellular and humoral immunity. Vitamin E, a lipid-soluble antioxidant, can also delay or prevent oxidative stress-induced diseases. Indeed, it may be considered a neutralising agent against endometriotic cell-derived ROS [47,48]. Our results showed, after 3 months of intervention, higher levels of vitamin E, as well as of zinc, an intracellular signalling molecule with anti-inflammatory properties that has an essential role in oxidative stress and immune functions, inhibiting free radical production [49]. Findings from the literature have reported that lower zinc levels were seen in women with endometriosis in comparison to controls [50]. We observed zinc deficiency in just one patient, but at the same time, a progressive and significant increase after the start of the intervention was present in the whole sample. Regarding inflammatory status, it is well known that hs-CRP represents a predictor of all-cause mortality associated with endothelial dysfunction and possibly reflects the development of atherosclerosis [51]. In the present study, about 14% of women at baseline showed increased hs-CRP compared to the normal range, and after 3 months, a slight reduction was observed. In our previous study performed in women with endometriosis, we found higher mean levels of hs-CRP, possibly due to the larger sample size and older age of the subjects investigated [13]. Inflammation, together with macrophages, lipid peroxides, and pain-inducing prostaglandins, plays a vital role in the pathophysiology of endometriotic pain, as suggested by previous studies [52,53].
Oxidative Stress
An increasing body of evidence suggests that oxidative stress is closely associated with atherosclerotic progression and plaque instability [54]. Oxidative stress is considered a major mechanism involved in endothelial dysfunction, and ROS production is associated with cardiovascular risk factors such as hypertension, diabetes, smoking and dyslipidaemia [55]. In the artery wall, ROS generation promotes and activates several pathological pathways involved in atherosclerosis, including lipid oxidation, expression of adhesion molecules, stimulation of vascular smooth muscle cell proliferation and migration, cell apoptosis and activation of metalloproteinases [56,57]. Cellular ROS accumulation induces irreversible damage to cellular components, such as proteins, lipids and DNA, generally leading to necrosis. The connection between endometriosis and ROS production has been widely accepted and deeply studied [10]. In this study, we investigated for the first time a potential link between the Mediterranean Diet and systemic redox status in women with endometriosis. Our findings reveal that women with endometriosis show signs of oxidative stress in the blood at baseline. We observed that the Mediterranean Diet intervention contributed to a significant improvement only after 6 months in lymphocyte ROS production (significantly reduced compared to baseline) and total antioxidant capacity (increased compared to baseline), while after 3 months, possibly due to the short intervention time, no significant modifications were evident. Of interest, we observed an inverse correlation between ROS production and adherence to the Mediterranean Diet. This finding is in keeping with Dai et al. [58], who, in a study performed on monozygotic and dizygotic twin pairs, showed a robust association between adherence to the Mediterranean Diet and lower oxidative stress, supporting the cardioprotective effects of this dietary pattern.
Other dietary interventions (such as gluten-free diet, low-Ni diet and low FODMAP diet) evaluated the correlation between adherence to diet and improvement of symptoms in endometriosis, but not the effect on oxidative stress [59]. Moreover, Ott et al. demonstrated that the Mediterranean Diet might lead to symptom relief in patients suffering from endometriosis, possibly improving endometriosis-associated pain symptoms via various mechanisms, such as an anti-inflammatory effect [60].
There is growing evidence of the involvement of oxidative stress in female reproduction, particularly in endometriosis. The studies on this topic are highly heterogeneous and have aimed to evaluate the effects of dietary supplementation on oxidative stress in endometriosis [59]. Data from studies on mouse models of endometriosis suggest that taking antioxidant supplements, such as hydroxytyrosol, could be beneficial, as women with endometriosis display immunological processes and elevated pro-inflammatory cytokines similar to those of other chronic inflammatory diseases [9]. Therefore, further studies are needed to elucidate the potential role of antioxidant supplementation on ROS in women with endometriosis.
In this study, there are some limits to consider together with the very promising data generated so far. First, the duration of the study and the number of participants who completed the whole intervention were limited, as enrolment was performed during the COVID-19 pandemic, when the Italian government enforced restrictions on outdoor activities and collectively quarantined the population. Second, we are aware that a 6-month intervention represents a limited period that only permits a tentative interpretation of the results. Studies with a larger population and a longer duration are necessary to confirm these intriguing results. Third, in our study all participants were Caucasian; thus, the findings may not be generalisable to other racial and ethnic groups. Finally, no information concerning intima-media thickness (IMT), a structural parameter of subclinical atherosclerosis, was provided. Nevertheless, IMT reflects structural vascular damage that takes a longer time to develop [61]. Notwithstanding these limitations, this study has several strengths. The current research represents a comprehensive analysis of multiple domains of vascular health, with various parameters analysed in the same group of participants at different time points. Moreover, another strength is the prospective design of the study, which permits the investigation of events that take place after the study has been initiated. A second benefit of a prospective study is the ability to account for exposures that vary over time in a given individual.
Conclusions
Our findings provide evidence of an unfavourable cardiovascular profile, as well as of unhealthy lifestyle habits, in women with endometriosis; these results appear clinically relevant given the young mean age of the study population. Lifestyle improvement, and in particular the Mediterranean dietary intervention, ameliorated the metabolic and oxidative profile and provided a substantial improvement in overall health-related quality of life.
Comprehensive and interdisciplinary approaches to managing endometriosis and interventions aiming to increase the education and disease awareness of patients are mandatory in order to provide prompt and accurate diagnosis and treatment and to allow progress in the discovery of possible effective pharmacological and non-pharmacological interventions. In the field of gender medicine, the evaluation of the cardiovascular risk profile cannot neglect a gender-specific approach, since women's health is burdened by exclusive risk factors, such as endometriosis, currently the object of attention by experts in several disciplines. To date, the innovative contribution of this study is represented by the use of a non-pharmacological approach, such as the Mediterranean Diet. This dietary pattern is known to be a cornerstone in the prevention of cardiovascular risk but has limited clinical evidence in the gynaecological field, and specifically in the modulation of common pathogenetic mechanisms correlating endometriosis with subclinical atherosclerosis. This study, by promoting research, prevention, and treatment of this high social impact disease, could help improve the quality of life and long-term cardiovascular health of young women suffering from endometriosis.
The results of this study will help to identify useful indicators in order to define the emerging role of endometriosis in the development of the early atherosclerotic process that exposes women of reproductive age to higher cardiovascular risk. Our results can contribute not only to scientific research, providing further evidence to the ongoing discussion about the correlation between atherosclerosis and endometriosis and the role of the latter as a gender-specific cardiovascular risk factor, but also to suggest intervention programmes aimed at promoting a healthy lifestyle in women with endometriosis.
Further studies are needed to confirm our results and to deeply explore specific aspects related to different hormonal therapies, adherence to the Mediterranean Diet and endometriosis phenotype.
Informed Consent Statement: Written informed consent has been obtained from the patients to publish this paper.
Data Availability Statement: Not applicable.
"year": 2023,
"sha1": "f5aef76bd2d1d60511ef7e8520cdceabd91d3bb4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/11/2/450/pdf?version=1675437563",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0034ed2bd4bf8e05787edd9e1aed4594869476fc",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Properties of Staphylococcus lugdunensis in Children
Background. Staphylococcus lugdunensis is one of the clinically important coagulase-negative staphylococci. The purpose of this study was to elucidate the microbiological features of S. lugdunensis in hospitalized children. Methods. From January 2012 to December 2019, all isolates were retrospectively screened for S. lugdunensis. Results. Twenty-five children were eligible for study. Nineteen and six children were classified into a critical care unit group (Group A) and a general medical ward group (Group B), respectively. The prevalence of methicillin-resistant S. lugdunensis was significantly higher in Group A than in Group B (68.4% vs 0%; P < .01). Eleven children (44%) had S. lugdunensis infections, while the remaining children were colonized. Six of the 11 infected children (55%) had healthcare-associated infections. Moreover, 3 isolates exhibited the methicillin resistance. Conclusions. The bacteriological characteristics of S. lugdunensis differ depending on patient background. Selection of antibiotic treatment should in part rely on patient background data.
Introduction
Staphylococcus lugdunensis is a coagulase-negative staphylococcus (CoNS) first described by Freney et al. 1 It is typically considered a member of the normal human skin flora. Even though it is a CoNS, it is known to cause quite severe infections, resembling those of Staphylococcus aureus. 2 Recently, studies of adult populations have resulted in the recognition of its pathogenic role in diseases, such as infective endocarditis, osteomyelitis, septic arthritis, brain abscess, urinary tract infections, and soft tissue infections. [2][3][4][5][6] Therefore, when isolated in culture, S. lugdunensis should be considered a true pathogen, rather than a contaminant, especially if isolated in culture from otherwise sterile patient sample material.
Appropriate administration of antimicrobials is important in the treatment of severe bacterial infection. In the treatment of S. lugdunensis infections, the choice of antibiotics depends on whether the detected S. lugdunensis is methicillin-resistant or not. Several studies involving S. lugdunensis have focused on community-acquired infections. Data on the clinical and microbiological characteristics of S. lugdunensis infection in children remain limited, especially infections in hospitalized children. Moreover, limited data is available pertaining to the potential influence of ward characteristics (eg, pediatric intensive care unit [PICU]; neonatal intensive care unit [NICU]; growing care unit [GCU]; pediatric high-care unit [PHCU]; and general medical ward) on the clinical and microbiological characteristics of S. lugdunensis, especially not in children. 3 The aim of this study was to elucidate the clinical and microbiological features of S. lugdunensis isolated from children admitted to the hospital.
Materials and Methods
The study design was based on a retrospective case series. Between January 2012 and December 2019, 3733 children were treated at Ehime University Hospital. We retrospectively reviewed all patients with cultures positive for S. lugdunensis between January 2012 and December 2019. Children <15 years of age with a positive culture (blood, bile, tissue, sputum/tracheal aspirate, wound/abscess, cerebrospinal fluid [CSF], urine, stool) for S. lugdunensis were eligible for study, including medical chart review. The process whereby children were selected for the study is shown in Figure 1.
At first, we focused on the ward in which S. lugdunensis was isolated and classified study subjects into either the critical care unit group (Group A) or the general medical ward group (Group B). Critical care units were defined as wards that were a PICU, NICU, GCU, or PHCU, whereas general medical wards were those not fulfilling the definition of a critical care unit.
Moreover, study subjects were classified into having either S. lugdunensis infection or S. lugdunensis colonization (or contamination); in the latter case, a positive culture for S. lugdunensis was considered not to be clinically significant. For abscess, wound, sputum, tracheal aspirate, stool, and urine cultures positive for S. lugdunensis, infection was considered if local or/and systemic symptoms compatible with infection (eg, fever, irritability, poor feeding, tachycardia, tachypnea) were observed in the presence of a positive culture. For blood, CSF, bile, and tissue cultures positive for S. lugdunensis, infection was deemed plausible when systemic symptoms were present in addition to a positive culture. Children diagnosed with S. lugdunensis infection were retrospectively reviewed.
Healthcare-associated infections were defined as follows: (i) S. lugdunensis infection identified more than 48 hours after admission to the hospital; (ii) S. lugdunensis infection in a patient fitted with medical devices or indwelling catheters permanently placed via the skin at the time of culture; or (iii) S. lugdunensis infection in a patient with a history of S. lugdunensis infection, hospitalization, surgery, or residence in a long-term care facility. Children with S. lugdunensis infection with none of the above-mentioned features were classified as having community-acquired infections. Next, we analyzed selected clinical parameters for each S. lugdunensis-infected patient, including age, gender, underlying disease, diagnosed infections, mode of infection acquisition, specimen provided for culture, use of implanted medical devices, antibiotic treatment, surgical procedures, and clinical outcome. The period of follow-up was 30 days.
Bacterial isolates were identified using Gram staining, standard bacteriological methods (catalase-positive, coagulase-negative, pyrrolidonyl arylamidase-positive, ornithine decarboxylase-positive, and negative acid production from mannitol), and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. 7-10 Antimicrobial susceptibility testing of bacterial isolates was routinely performed using the broth microdilution method. 11 Susceptibility was assessed according to the guidelines of the Clinical and Laboratory Standards Institute (M100-S27). 12 Cefazolin has been the first-choice drug to treat methicillin-susceptible Staphylococcus infections because oxacillin and nafcillin are not approved for use in Japan. Accordingly, methicillin resistance was confirmed on the basis of the antibiotic susceptibility results for cefazolin and the presence of the mecA gene. We used polymerase chain reaction (PCR) to detect the mecA gene. 13 Informed consent was waived due to the retrospective nature of this study and the fact that we used a deidentified chart review. The methods applied in this study were approved by the University Institutional Review Board.
Statistical Analysis
All statistical analyses were performed using SPSS Statistics for Windows Version 22 (IBM, USA). For univariate analysis of non-normally distributed variables, median values (50th percentile), and interquartile ranges (IQR, 25th-75th percentiles) were used. Fisher's exact probability test or the χ 2 test were performed to analyze categorical data. The Mann-Whitney U test was applied to compare the 2 independent samples with regard to age. Differences with a probability (P) value <.05 (for 2-sided tests) were considered statistically significant.
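As an illustrative sketch only (the study used SPSS, not Python), the three tests named above can be reproduced with scipy; the 2×2 counts below mirror the methicillin-resistance proportions reported in the Results (13/19 in Group A vs 0/6 in Group B), while the ages are invented.

```python
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency, mannwhitneyu

# 2x2 table: rows = Group A / Group B; columns = methicillin-resistant /
# methicillin-susceptible isolates (counts mirror the 68.4% vs 0% result).
table = np.array([[13, 6],
                  [0, 6]])

# Fisher's exact test is appropriate here because expected cell counts
# are small in a cohort of only 25 children.
_, p_fisher = fisher_exact(table, alternative="two-sided")

# The chi-square test is the large-sample alternative.
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Mann-Whitney U test comparing a non-normally distributed variable
# (e.g., age in months) between the 2 groups; values are hypothetical.
age_a = [0.5, 1.0, 1.0, 2.0, 3.5, 6.0]
age_b = [14.0, 24.0, 24.0, 36.0, 51.0]
_, p_age = mannwhitneyu(age_a, age_b, alternative="two-sided")

print(f"Fisher P = {p_fisher:.3f}; chi-square P = {p_chi2:.3f}; "
      f"Mann-Whitney P = {p_age:.3f}")
```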
Results
A total of 25 children were eligible for study, of whom 19 were classified into Group A and 6 were classified into Group B. Table 1 summarizes the clinical features and laboratory findings of the 25 children with S. lugdunensis-positive cultures during the study period.
The children comprised 16 males (64.0%) and 9 females (36.0%) and had a median age of 2 months (IQR, 21 days-2 years). The median age of Group A was 1 month (IQR, 16 days-3.5 months, n = 19), whereas that of Group B was 2 years (IQR, 1.1 years-4.3 years, n = 6); no significant difference in age was observed between Groups A and B (P = .997). Underlying diseases included congenital heart disease (n = 7) and malignancy (n = 2). There were no significant differences between Groups A and B with regard to the presence or type of underlying disease (P = .579).
Moreover, no significant differences were observed for factors such as gender, implanted medical devices, and culture source. Meanwhile, there were significantly more cases of colonization in Group A than in Group B (P < .01).
As shown in Table 1, the number of methicillin-resistant S. lugdunensis isolates was significantly higher in Group A than in Group B (68.4% vs 0%, respectively; P < .01). However, no significant differences in antibiotic susceptibility to ampicillin, clarithromycin, clindamycin, minocycline, and vancomycin were observed. Moreover, the vancomycin minimum inhibitory concentration of 24 of the 25 S. lugdunensis isolates was <0.5 µg/mL, while that of the remaining isolate was 2 µg/mL. Table 2 summarizes the clinical and laboratory findings of the 11 children with S. lugdunensis infection who were retrospectively reviewed. These children comprised 7 males (63.6%) and 4 females (36.4%). The median age was 2 years (IQR, 6 months-3.5 years, n = 11). Underlying diseases included hypoxic ischemic encephalopathy (n = 1), Dandy-Walker syndrome (n = 2), brain tumor (n = 1), hypoplastic left heart syndrome (n = 1), and preauricular pits (n = 2). Two patients did not have any underlying disease.
Three patients required a central venous catheter (CVC). One patient received ventriculoperitoneal (VP) shunt placement, and 1 patient underwent cystoperitoneal (CP) shunt placement.
As shown in Table 2, the indications for antibiotic treatment were bacteremia (n = 2), pneumonia (n = 1), meningitis (n = 1), cholangitis (n = 1), skin and soft tissue infections (SSTI, n = 5), and lymphadenitis (n = 1). Patients 3 and 6 were diagnosed with clinically significant bacteremia, because 2 separate blood cultures were positive for S. lugdunensis. Additionally, Patient 3 had his peripheral venous catheter removed, and S. lugdunensis also grew in the sample collected from the peripheral venous catheter tip. The portal of entry was unclear in Patient 6.
Six patients had healthcare-associated infection, whereas 5 patients had community-acquired infections.
Both cases of S. lugdunensis bacteremia reflected healthcare-associated infections. Three isolates (patients 1 through 3) were positive for the mecA gene and exhibited methicillin resistance (27%, 3/11); all 3 were seen in healthcare-associated infections.
Seven patients had undergone surgical procedures, such as drainage, as part of S. lugdunensis infection treatment. At the time of the final follow-up, all patients were observed to be healthy, without any symptoms attributable to S. lugdunensis infection.
Discussion
This study is the first to investigate the clinical and microbiological features of S. lugdunensis in children with a focus on the ward in which the bacteria were isolated.
Our data indicate that 76% (19/25) of the S. lugdunensis isolates were identified in cultures submitted from critical care units (PICU, NICU, GCU, or PHCU), and there were significantly more cases of colonization in critical care units than in the general ward units (P < .01). We speculated that the relatively high detection rate reflected the fact that surveillance culturing was a routine procedure.
German et al 14 reported that only 2.1% (7/347) of CoNS isolates were found to be S. lugdunensis, and only 1 isolate (14.3%, 1/7) was considered possibly clinically significant at a pediatric center. They reasoned that S. lugdunensis does not appear to be a common pathogen in children. Meanwhile, in our study, 11 of the 25 strains (44%) reflected infection rather than colonization. Our results suggest that S. lugdunensis should be recognized as an important bacterium that can cause invasive infections. The frequency of S. lugdunensis infection might be underestimated if general bacterial laboratories do not accurately identify all CoNS species. Accordingly, it is necessary to carefully determine in each individual case whether S. lugdunensis represents a commensal (colonization or contamination) or an infection. No studies published so far have investigated in detail the background of hospitalized pediatric patients with S. lugdunensis infection. In our study, 9 of the 11 patients had underlying disease. Hence, most patients with S. lugdunensis infection had underlying disease, which, for instance, caused a breakdown of the skin barrier (patients 10 and 11) or required the use of implanted medical devices that disrupted the skin barrier, such as a CVC, VP-shunt, or CP-shunt (patients 2, 3, 6-8). It is assumed that S. lugdunensis, as part of the normal human skin flora, invades and causes infectious disease when the skin barrier breaks down; caution is therefore required for patients with a fragile skin barrier.
Antimicrobial drug resistance is a serious threat to successful antibiotic treatment of hospitalized children with staphylococcal infections. Although the susceptibility of S. lugdunensis to penicillin was good in the 1990s, penicillin resistance due to penicillinase production has increased in recent years. 15 In a report from the USA in 2010, penicillin resistance was as high as 45% among 42 strains of S. lugdunensis. 16 In the present study, penicillin resistance was even more pronounced, namely 84% (21/25 cases).
Reports on methicillin-resistant S. lugdunensis infections are rare and include mainly a few case reports. Pereira and Cunha Mde 17 detected the mecA gene in 69 of 100 CoNS strains, including 50% (1 of 2 strains) of the S. lugdunensis strains identified. In our study, the methicillin-resistance rate of S. lugdunensis was 52% (13/25 cases). Interestingly, resistance to cefazolin was significantly higher in isolates from intensive care units than in isolates obtained from children in the general wards. Recently, Ternes et al 18 reported that among 392 neonates admitted to the NICU, multidrug resistance was detected in 2.2% and 29.9% of the CoNS isolates at admittance and discharge, respectively (P = .053). The authors pointed out that NICU represents an environment with an increased risk for colonization by multidrug-resistant CoNS, which might be due to the persistence and/or horizontal spread of methicillin-resistant strains in critical care units.
Meanwhile, Yeh et al 19 reported that a higher proportion of the healthcare-associated isolates than of the community-acquired isolates were resistant to oxacillin (32.1% vs 2.1%, respectively; P < .001). In our study, methicillin-resistant S. lugdunensis was detected more often in isolates reflecting healthcare-associated infections than in isolates representing community-acquired infections (50% vs 0%, respectively; P = .064). Hence, we observed a tendency toward methicillin resistance abounding in healthcare-associated infections. S. lugdunensis healthcare-associated infections may involve an increased risk of severe invasive infections such as bacteremia, sepsis, or bacterial meningitis. Therefore, it is especially important to select appropriate antibiotics for treatment. The use of anti-methicillin-resistant S. aureus agents should be considered in cases of S. lugdunensis infection in critical care units or healthcare-associated infections until antimicrobial susceptibility results are known.
On the other hand, in the general ward, no strain resistant to cefazolin was observed (0%), and cases of SSTI represented most of the infectious diseases in our study. Moreover, methicillin resistance was not observed in the children with community-acquired infection (0/5 cases). Therefore, S. lugdunensis infection in the general ward or community-acquired S. lugdunensis infection may currently be treated with cefazolin or oxacillin.
Our study on pediatric patients with S. lugdunensis revealed that the bacteriological characteristics of S. lugdunensis differ depending on the background of the patients, such as the type of ward to which they have been admitted. These results suggest that consideration of the patient's background may be critical to selecting the appropriate antibiotic therapy.
The limitations of our study include its retrospective nature, the fact that it was performed at a single center, and the small sample size. Moreover, the study may not universally reflect the clinical features of S. lugdunensis infection in children. Finally, molecular epidemiological analysis by multi-locus sequence typing or pulsed-field gel electrophoresis was not carried out. As S. lugdunensis infection in pediatric patients is rare, accumulation of cases is required to confirm our findings.
Conclusion
Our study suggests the need for caution regarding methicillin-resistant S. lugdunensis, which may cause severe invasive infection in pediatric patients in critical care units.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethics and Consent
Informed consent was waived due to the retrospective nature of this study and the fact that we used a de-identified chart review. The methods applied in this study were approved by the Ehime University Institutional Review Board (Approval #: 1902008).
"year": 2021,
"sha1": "1cffb87b0df0b2ad925eca37cda5490ab5fcf7e1",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2333794X211044796",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1cffb87b0df0b2ad925eca37cda5490ab5fcf7e1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
An Unusual Cause of Acute Myocardial Infarction Caused by a Large Pulmonary Artery Intimal Sarcoma
INTRODUCTION
Primary cardiac tumors are rare and have been found in 0.0017%-0.28% of autopsies in the general population. 1 About 25% of primary cardiac tumors are malignant, with cardiac sarcomas accounting for 75% of primary malignant cardiac tumors. Cardiac sarcomas are often locally invasive and fatal if untreated. 1 Several types of primary cardiac sarcomas exist, including right-heart sarcoma, left-heart sarcoma, angiosarcoma, and pulmonary artery sarcoma (PAS). 2,3 The first case of PAS was reported from an autopsy in 1923. 4 Because of the rarity of PAS, <250 case reports and case series focusing on histopathology and surgical management have been published to date. 4 Herein, we present the case of a 30-year-old man with pulmonary artery (PA) intimal sarcoma, admitted with shortness of breath and subsequently developing an acute anterior wall myocardial infarction.
CASE PRESENTATION
A 30-year-old man presented to the emergency department with progressive exertional dyspnea and pleuritic chest pain. His medical, surgical, social, and family histories were unremarkable; specifically, there were no risk factors for developing premature coronary artery disease or nonischemic cardiomyopathies.
Computed tomography (Figure 1) of the chest revealed a large soft tissue density mass (6.9 × 6.1 cm, 30-60 Hounsfield units), extending from the level of the left atrium and lateral mitral annulus superiorly to the posterolateral aspect of the aortic root and main PA. Transthoracic echocardiography (Figure 2, Videos 1-4) demonstrated a large pericardial effusion with a large extrinsic left atrial mass, adjacent to the main PA and the left sinus of Valsalva. Left ventricular ejection fraction was preserved (67%), without major valvular abnormalities. The inspiratory flow variations across the mitral and tricuspid valves were 58% and 64%, respectively (Figure 3). Cardiac magnetic resonance imaging (Figure 4, Video 5) demonstrated a large mass invading the main and right PAs, resulting in severe intraluminal obliteration. In addition, a large right lower lobe pulmonary infarct was also noted.
The patient was admitted to the cardiac intensive care unit in preparation for surgical resection of the extrinsic cardiac mass. He underwent urgent pericardiocentesis, given pretamponade physiology, to optimize hemodynamics before induction with general anesthesia for surgery. The procedure yielded 260 mL of serosanguinous pericardial fluid, and a pericardial drain was left in situ to gravity for 24 hours.
Four hours following pericardiocentesis, the patient developed an episode of polymorphic ventricular tachycardia, which quickly degenerated into ventricular fibrillation (Figure 5). A cycle of chest compressions (3 min) and unsynchronized cardioversion resulted in successful return of spontaneous circulation. The pericardial drain was removed the next morning, yielding an additional 500 mL of pericardial fluid.
Repeat transthoracic echocardiography at this stage demonstrated resolution of the pericardial effusion. However, left ventricular ejection fraction became significantly depressed (30%-35%), with interval development of severe hypokinesis in the left anterior descending coronary artery (LAD) territory (Video 6). Serum troponin T and creatine kinase-MB were elevated at 1.4 ng/mL (normal range, 0.000-0.029 ng/mL) and 89 ng/mL (normal range, 0.0-2.4 ng/mL), respectively. Emergent left-heart catheterization (Figure 6) revealed an anomalous left circumflex coronary artery arising from the right sinus of Valsalva and a separate LAD ostium from the left coronary sinus, with severe ostial narrowing and angiographic appearance concerning for extrinsic compression. An intra-aortic balloon pump was inserted, and the patient underwent urgent median sternotomy with radical resection of the mass, lymph nodes, PAs, and entire right lung. Intraoperative transesophageal echocardiography demonstrated the proximity of the mass to the aortic root and ostium of the coronary arteries (Figure 7, Videos 7 and 8). Subsequently, a left-sided PA homograft was placed, as well as a reverse saphenous vein graft to the LAD.
The tumor measured 10 × 7.0 × 6.0 cm, with infiltration from the intima of the PA. Pathologic examination (Figures 8 and 9) showed intimal sarcoma with pleomorphism, extending through the PAs and branches into the right lung.
His immediate postoperative course was complicated by circulatory arrest, hemorrhage, and coagulopathy. He remained in cardiogenic shock for 2 days, supported by the intra-aortic balloon pump, inhaled epoprostenol, epinephrine, vasopressors, and an open chest. Chest closure and intra-aortic balloon pump removal were successful on postoperative day 2, followed by extubation on postoperative day 4. The patient was discharged on postoperative day 14. At follow-up, the patient remained clinically well. Further treatment with chemotherapy has been planned by the oncology department, to commence as an outpatient.
DISCUSSION
The location of the sarcoma determines not only the clinical presentation but also survival, morbidity, surgical approach, and perioperative mortality. 5 PAS presents with nonspecific signs and symptoms, including chest pain, dyspnea, hemoptysis, cough, constitutional symptoms, and/or right-sided heart failure. Because PAS can be mistaken for PA hypertension, bronchogenic cancer, aneurysm or pseudoaneurysm, or pulmonary embolism that does not respond to anticoagulation, the diagnosis of PAS can be delayed for as long as 3-12 months from the onset of symptoms. 2,3,6,7 A recent case series on pulmonary intimal sarcoma reported dyspnea in all patients (N = 20 [100%]), with chest pain (n = 7 [35%]), constitutional symptoms (n = 5 [25%]), and hemoptysis (n = 3 [15%]) being the other common symptoms. 4 Eighty-five percent of patients (n = 41) reviewed in case reports and case series presented with dyspnea, while 11% presented with cough. Two patients presented with syncope, 8,9 and one case of sudden cardiac death was diagnosed on autopsy. 10 Transthoracic echocardiography, chest radiography, and computed tomography remain the initial diagnostic tests of choice. Transesophageal echocardiography, following initial transthoracic echocardiography, increases the definition of left-sided masses, while chest, abdominal, and pelvic computed tomography with intravenous contrast allows better assessment of myocardial and pericardial infiltration, as well as extracardiac adjacent and distant metastases. 2,3 Cardiac magnetic resonance provides tissue characterization of a given mass and helps distinguish between matrix and thrombus content, relying on the degree of vascularity and tissue edema. 3 Cardiac magnetic resonance uses specific sequences to differentiate cardiac masses (e.g., PAS) from thrombus, relying on the delayed retention of gadolinium within the extracellular matrix of tumors, which often serves to distinguish between tumor recurrence and postsurgical adjacent mural thrombi. 3,11,12 Cardiac catheterization may be considered to assess tumor burden, particularly in cases in which extrinsic compression of the coronary arteries is suspected. Tumors from the ventricular myocardium have been reported in the literature to cause arrhythmias and sudden cardiac death because of obstruction to blood flow. In our case, we believe the patient went into ventricular fibrillation arrest soon after pericardiocentesis because of extrinsic compression of the LAD by the large PAS mass, following disruption of the cushioning effect provided by the large pericardial effusion.
Surgical resection involving replacement of the PA and possible pneumonectomy to obtain adequate margins remains the mainstay treatment to prolong survival, though there are no specific guidelines for its management. 4,6,13 PAS is poorly responsive to chemotherapy and radiation, but even though the role of radiation is controversial, the tumor's location above the myocardium possibly makes it amenable to radiation therapy. 6 Doxorubicin and ifosfamide have been reported to be among the most effective chemotherapeutic agents, with survival ranging from 5 to 20 months after treatment, but Penel et al. showed no objective stabilization of disease. 14,15 Nevertheless, multimodality treatment has been shown to improve outcomes, with median survival of 24.7 ± 8.5 months compared with 8.0 ± 1.7 months for single-modality therapy. 6 Given the limited treatment options available, the aggressive nature of the tumor, and delayed diagnosis, prognosis remains poor, with median survival of 36.5 ± 20.2 months after curative surgery compared with 11 ± 3 months for incomplete resection. 6,13,[16][17][18][19][20] The common causes of death are predominantly right-sided heart failure from right ventricular outflow obstruction, and distant metastases. 16,17

CONCLUSION

The large pericardial effusion likely minimized potential extrinsic compression of the LAD initially, caused by a large PA intimal sarcoma. Subsequent pericardiocentesis and loss of the cushioning effect from the pericardial effusion resulted in additional extrinsic compression of the LAD by the large PA intimal sarcoma, leading to a large anteroapical myocardial infarct. This case highlights the difficult management dilemma created by a large pericardial effusion with pretamponade physiology, caused by a large tumor mass. Clinicians should be mindful of the rare but potentially life-threatening possibility of coronary artery compression by the tumor mass following pericardiocentesis in similar clinical scenarios.

Figure 8 Gross specimen of the tumor mass. The right PA is seen as a discrete transverse section of artery marked by an asterisk. The lumen of the artery is almost completely obliterated by tumor. The small rectangle marks the area shown in the microscopic field in Figure 9. The intravascular tumor is in continuity with the large extrinsic extracardiac mass. The tumor shows solid areas with fibrous stroma as well as areas of loose connective tissue stroma with spaces created by necrotic tumor. The outer smooth contour of the mass represents containment by a serosal surface. However, there is a ragged area toward the top, where the integrity of the tumor has been compromised. The green dye corresponds to spillover of surgical ink to assess the resection margins.
"year": 2017,
"sha1": "4b2828867e0d024502f28de9e7dfed51897fe94c",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cvcasejournal.com/article/S2468644117300373/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b2828867e0d024502f28de9e7dfed51897fe94c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Safety and Effectiveness of Non–Vitamin K Antagonist Oral Anticoagulants for Stroke Prevention in Patients With Atrial Fibrillation and Anemia: A Retrospective Cohort Study
Background Major randomized trials assessing non–vitamin K antagonist oral anticoagulants (NOACs) for stroke prevention in atrial fibrillation generally excluded patients with hemoglobin <10 g/dL. This study evaluated the safety and effectiveness of NOACs in patients with atrial fibrillation and anemia. Methods and Results A cohort study based on electronic medical records was conducted from 2010 to 2017 at a multicenter healthcare provider in Taiwan. It included 8356 patients with atrial fibrillation who had received oral anticoagulants (age, 77.0±7.3 years; 48.0% women). Patients were classified into 2 subgroups: 7687 patients with hemoglobin ≥10 g/dL and 669 patients with hemoglobin <10 g/dL. A Cox regression analysis was performed to assess the risks of ischemic stroke/systemic embolism, bleeding, and death associated with NOAC versus warfarin in both subgroups, respectively. In patients with hemoglobin ≥10 g/dL, NOAC (n=4793) was associated with significantly lower risks of ischemic stroke/systemic embolism, major bleeding, and gastrointestinal tract bleeding than warfarin (n=2894); there was no difference in the risk of death. In patients with hemoglobin <10 g/dL, NOAC (n=390) was associated with significantly lower risks of major bleeding (adjusted hazard ratio, 0.43; 95% CI, 0.30–0.62) and gastrointestinal tract bleeding than warfarin (n=279), but there was no difference in the risk of ischemic stroke/systemic embolism (adjusted hazard ratio, 0.79; 95% CI, 0.53–1.17) or death. Subgroup analyses suggested that NOAC was associated with fewer bleeding events, irrespective of cancer or peptic ulcer disease history. Conclusions In patients with atrial fibrillation with hemoglobin <10 g/dL, NOAC was associated with lower bleeding risks than warfarin, with no difference in the risk of ischemic stroke/systemic embolism or death.
than those without anemia. 1,6 Physicians are, thus, faced with a treatment dilemma when choosing anticoagulant therapies in patients with AF and anemia.
Anemia is closely associated with peptic ulcer disease and cancer-related bleeding. 7,8 Peptic ulcer disease is the most common cause of bleeding in patients receiving long-term warfarin therapy. 9 Warfarin therapy in patients with a history of peptic ulcer bleeding raises management difficulties in balancing the thromboembolic risk secondary to anticoagulation interruption against the hemorrhagic risk associated with a history of bleeding. 9 Treating AF with oral anticoagulants in patients with cancer is also a challenge because cancer may result in an increased risk of thromboembolism or bleeding. 10 Therefore, such patients may respond unpredictably to anticoagulant therapy; thus, thromboembolic and bleeding-risk prediction scores may not be reliable. 10 Non-vitamin K antagonist oral anticoagulants (NOACs) are now widely used as alternatives to warfarin for preventing stroke in AF because NOACs are as effective as but safer than warfarin. [11][12][13][14] The working dosage of NOACs is generally easier to ascertain because there is less variation among individuals and the drugs have a faster action onset and offset and exhibit fewer drug-food and drug-drug interactions than warfarin does. 15 However, most major randomized controlled trials of NOACs have excluded patients with hemoglobin <10 g/dL. 11,13,14,16 In addition, there is no specific recommendation for anticoagulant therapy in anemic patients with AF and hemoglobin <10 g/dL in current guidelines. [17][18][19] Hence, the aim of the present study was to compare the safety and effectiveness of NOAC and warfarin when prescribed for stroke prevention in patients with AF and hemoglobin <10 g/dL.
Methods
The data that support the findings of this study are available from the corresponding author on reasonable request.
In this retrospective cohort study, patient data were collected from the Chang Gung Research Database, a deidentified database derived from the electronic medical records of the Chang Gung Memorial Hospital system in Taiwan. The Chang Gung Memorial Hospital is currently the largest Taiwanese medical care system, comprising 4 tertiary-care medical centers and 3 major teaching hospitals. This medical care system, with >10 000 beds and >280 000 inpatients per year, provides approximately 10% of all medical services used by the Taiwanese people annually. [20][21][22] The hospital identification number of each patient was encrypted and deidentified to protect individuals' privacy. The diagnoses and laboratory data could be linked and continuously monitored using consistent data encryption. The institutional review board of Chang Gung Memorial Hospital approved the study protocol (approval serial No. 21080666B0). The institutional review board waived the need for informed consents from the patients and parents/guardians because the database used in this study consists of unidentifiable, secondary data released to the public for research.
Study Cohort
This study was conducted on the basis of electronic medical records in the Chang Gung Memorial Hospital system in Taiwan from 2010 to 2017. A total of 19 632 patients, aged ≥65 years, who had been diagnosed with AF (International Classification of Diseases, Ninth Revision [ICD-9], code 427.31 or International Classification of Diseases, Tenth Revision [ICD-10], codes I48.0, I48.1, I48.2, or I48.91) and had had at least 1 prescription filled for oral anticoagulant therapy after diagnosis were included. We enrolled patients with AF, aged ≥65 years, because the Taiwan National Health Insurance only reimburses for NOAC prescriptions for these patients. The oral anticoagulant therapy consisted of warfarin or an NOAC (apixaban, dabigatran, edoxaban, or rivaroxaban). Patients were excluded if they: (1) had deep vein thrombosis or pulmonary embolism up to 6 months before the index date (n=1040), (2) had received joint surgery (n=254) or a heart-valve replacement (n=653) up to 6 months before the index date, (3) had end-stage renal disease before the index date (n=2331), (4) had IS or SE or died up to 7 days after the index date (n=3799), or (5) had no data on hemoglobin levels for the 2 years before the index date (n=3199). After the exclusion, 8356 patients remained eligible for the study, and these patients were divided into 2 subgroups: patients with hemoglobin ≥10 g/dL (n=7687, 92.0%) and those with hemoglobin <10 g/dL (n=669, 8.0%). [9][10][11]14 The Figure is the flowchart of the enrollment process and the subdivision of the eligible study cohort into the 2 subgroups. The index date was defined as the first date on which warfarin or NOAC therapy was initiated. The risks of IS/SE, bleeding, and death were compared between NOAC and warfarin therapies in these 2 subgroups of anticoagulated patients with AF.
Clinical Perspective
What Is New?
• Non-vitamin K antagonist oral anticoagulants, compared with warfarin, were associated with a significantly lower risk of major bleeding or gastrointestinal tract bleeding, but there was no difference in ischemic stroke, systemic embolism, or death in anemic patients with atrial fibrillation patients and hemoglobin <10 g/dL.
What Are the Clinical Implications?
• Non-vitamin K antagonist oral anticoagulants are a favorable alternative to warfarin in patients aged ≥65 years with atrial fibrillation and anemia.
The identified patients were followed up until the outcome event or the end of 2017, whichever occurred first.
Outcome Measures
The efficacy end point was the occurrence of IS/SE or death. The safety end point was the occurrence of major bleeding or gastrointestinal tract bleeding. Major bleeding was defined as clinically overt bleeding associated with at least a 2-g/dL decrease in hemoglobin or requiring a transfusion of at least 2 units of packed red blood cells or whole blood, fatal bleeding, or intracranial hemorrhage during the period of drug use or within the 14-day period after the last day of drug use.
Gastrointestinal tract bleeding was defined as hospitalization with a primary diagnosis of bleeding in any segment of the gastrointestinal tract, from the esophagus to the rectum, during the drug-use period or within the 14-day period after the last day of drug use. The follow-up period was defined as the time from the index date to the first occurrence of any study outcome or the end date of the study period (December 31, 2017), whichever came first. The anticoagulant type was treated as a time-dependent exposure. A 14-day period was the censoring window for drug switches. If an event occurred during the initial therapy period or within the 14-day period after the switch, the event and time were ascribed to the initial therapy. If an event occurred ≥15 days after the switch, the event and time were ascribed to the switch therapy. The diagnostic codes used to identify the study outcomes and the baseline covariates are summarized in Table S1.
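The switch-attribution rule above is simple enough to state as code; the following is a minimal sketch under our own naming (the function and variables are hypothetical, not taken from the study's programs).

```python
from datetime import date, timedelta
from typing import Optional

CENSOR_WINDOW_DAYS = 14  # censoring window for drug switches

def attribute_event(switch_date: Optional[date], event_date: date) -> str:
    """Ascribe an outcome event to the initial or the switch therapy.

    Events during the initial therapy period, or within the 14-day
    window after a switch, are ascribed to the initial therapy; events
    occurring >=15 days after the switch go to the switch therapy.
    """
    if switch_date is None:
        return "initial"  # patient never switched anticoagulants
    if event_date <= switch_date + timedelta(days=CENSOR_WINDOW_DAYS):
        return "initial"
    return "switch"

# A bleed 10 days after switching is still charged to the initial drug,
# while a bleed 20 days after switching goes to the switch therapy.
print(attribute_event(date(2015, 6, 1), date(2015, 6, 11)))  # "initial"
print(attribute_event(date(2015, 6, 1), date(2015, 6, 21)))  # "switch"
```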
Statistical Analysis
Data were presented as the mean±SD or median (interquartile range) for continuous variables and as proportions for categorical variables. Differences between continuous values were assessed using Wilcoxon's rank-sum test. Differences between nominal variables were compared with a χ2 test. We calculated event rates as the number of events divided by the person-time at risk, expressed per 100 person-years. The Cox proportional hazard regression with time-dependent exposure (anticoagulant type) was used to compare event rates between NOAC and warfarin therapies in the 2 groups of patients. When comparing the risk of IS/SE, major bleeding, gastrointestinal tract bleeding, or death between NOAC and warfarin therapies, the analyses were adjusted for covariates, including patient characteristics, baseline comorbidities, laboratory information, and baseline medications. Statistical significance was based on the level of α=0.05. All analyses were performed using SAS, version 9.4 (SAS Institute, Cary, NC).
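To make the event-rate arithmetic and the time-dependent Cox model concrete, here is a minimal Python sketch on synthetic data using the lifelines package (the study itself used SAS 9.4); all column names and numbers are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
n = 300

# Long-format data: one row per treatment interval per patient, so the
# anticoagulant type can enter the model as a time-dependent exposure.
df = pd.DataFrame({
    "id": np.arange(n),
    "start": 0.0,                                    # interval start (days)
    "stop": rng.exponential(500, n).clip(30, 2000),  # interval end (days)
    "noac": rng.integers(0, 2, n),                   # 1 = NOAC, 0 = warfarin
    "age": rng.normal(77, 7, n).round(),
})
# Synthetic outcomes: assume a lower event probability under NOAC.
df["event"] = rng.binomial(1, np.where(df["noac"] == 1, 0.10, 0.20))

# Crude event rate per 100 person-years = events / person-time x 100.
person_years = (df["stop"] - df["start"]).sum() / 365.25
rate = 100 * df["event"].sum() / person_years
print(f"Crude event rate: {rate:.2f} per 100 person-years")

# Cox model on (start, stop] intervals; remaining columns ("noac", "age")
# act as covariates, and exp(coef) gives the hazard ratio for NOAC.
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```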
Sensitivity and Subgroup Analyses
We performed 3 sensitivity analyses to validate our findings and check for potential selection biases. First, given the high mortality risk in patients with AF and anemia, we reanalyzed the data accounting for competing risks of death. Second, we reanalyzed the data by using 7 days as a censoring window for drug switches to assess whether the primary findings would have been changed if differential censoring windows between drug switches had been used. Third, we reanalyzed the data after excluding patients with missing values of covariates in the models to determine if missing data would change the results. Subgroup analyses were performed to explore the effects of anticoagulant types in patient subgroups with and without a history of peptic ulcer disease or cancer. 23,24

Results

Baseline Characteristics

1%) had hemoglobin levels of 9 to 9.9, 8 to 8.9, 7 to 7.9, and <7 g/dL, respectively. Both CHA2DS2-VASc and HAS-BLED scores were significantly higher in the patient subgroup with hemoglobin <10 g/dL. Patients with hemoglobin <10 g/dL tended to be older, to be women, and to have more comorbidities and a history of heart failure or stroke. In summary, patients with hemoglobin <10 g/dL were older and weaker than those with hemoglobin ≥10 g/dL. A separate comparison between NOAC and warfarin users in the patient subgroup with hemoglobin <10 g/dL is given in Table 2. Within the group, patients using NOAC or warfarin had similar demographic characteristics and comorbidities, except for older age in NOAC users.
Patients With Hemoglobin ≥10 g/dL
The crude event rates per 100 person-years in the NOAC and warfarin groups were 5. 10
Sensitivity and Subgroup Analyses
The results were similar for IS/SE and bleeding when death was treated as a competing risk factor in the Cox model (Table 3). We reanalyzed the data by using 7 days as a censoring window for drug switches and found results similar to those obtained with a 14-day censoring window (Table S2). Table S3 shows the analysis results after excluding patients with missing values of covariates; the results were similar to the main results. For the subgroup analysis of patients with and without a history of cancer (Table 4) or peptic ulcer disease (Table 5), the results were generally consistent with the main results. The lower risks of major bleeding and gastrointestinal tract bleeding associated with NOAC therapy were similar in patients with and without a history of cancer or peptic ulcer disease. However, extreme caution needs to be taken when interpreting the subgroup analyses because the limited sample size and number of events fall well below commonly recommended minimums.
Discussion
The main findings of the present study are as follows: (1) Approximately 8% of patients with AF had hemoglobin <10 g/dL when they were anticoagulated. (2) In patients with AF and hemoglobin <10 g/dL, NOAC therapy was associated with a significantly lower risk of major bleeding or gastrointestinal tract bleeding when compared with warfarin therapy, and there was no statistical difference between the 2 therapies in terms of their risk of IS/SE or death. (3) In anemic patients, the differences between NOAC and warfarin therapies in their effects on IS/SE, bleeding, and mortality were similar in patients with and without a history of cancer or peptic ulcer disease. These findings fill a knowledge void on the safety and effectiveness of NOAC therapy for patients with AF and hemoglobin <10 g/dL. Such patients were typically excluded from the major randomized controlled trials that have investigated the efficacy and safety of NOACs for preventing stroke in patients with AF, namely, the RE-LY (Randomized Evaluation of Long Term Anticoagulant Therapy) trial, the ROCKET AF (Rivaroxaban Once-Daily Oral Direct Factor Xa Inhibition Compared With Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation), the ENGAGE AF (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation) trial, and the AVERROES (A Phase III Study of Apixaban in Patients With Atrial Fibrillation). 11,12,14,16 The superiority of NOAC over warfarin for preventing stroke in patients with AF and mild anemia (hemoglobin, 9-12.9 g/dL in men and 9-11.9 g/dL in women) has been demonstrated in a post hoc analysis of the ARISTOTLE (Apixaban for the Prevention of Stroke in Subjects With Atrial Fibrillation) trial. 1 In that trial, patients with mild anemia were older, had higher CHADS2 (Congestive Heart Failure, Hypertension, Age ≥75 Years, Diabetes Mellitus, Prior Stroke or Transient Ischemic Attack or Thromboembolism [doubled]) and HAS-BLED scores, and were more likely to have had prior bleeding events than those without anemia. 1 Apixaban therapy resulted in similar reductions in stroke and bleeding events relative to warfarin therapy in patients with and without mild anemia. 1 In patients with moderate anemia (hemoglobin <10 g/dL), NOAC appears to be an equally effective but safer anticoagulant than warfarin for preventing stroke. The lower risk of major bleeding associated with NOAC therapy was consistently observed in anemic patients with or without a history of cancer or peptic ulcer disease. For patients with a history of cancer or peptic ulcer disease, NOAC may be a better oral anticoagulant than warfarin. Anemia is common in patients with AF, 18 and the choice of anticoagulant for patients with AF and anemia has been puzzling. A sizable proportion of patients with AF would not have been eligible for the 4 major clinical trials of NOACs, and the most common reason for their exclusion from these trials was anemia (15.1%). 25
In the 2016 European Society of Cardiology guidelines for managing AF, anemia is considered a potentially modifiable bleeding risk factor, and it is an important predictor of bleeding in the HEMORR2HAGES (Hepatic or Renal Disease, Ethanol Abuse, Malignancy, Older Age, Reduced Platelet Count or Function, Rebleeding, Hypertension, Anemia, Genetic Factors, Excessive Fall Risk, and Stroke), ATRIA (Anticoagulation and Risk Factors in Atrial Fibrillation), and ORBIT (Outcomes Registry for Better Informed Treatment of Atrial Fibrillation) scores. [3][4][5]17 Anemic patients with AF requiring anticoagulation were less likely to receive warfarin therapy because of bleeding concerns, and they more often discontinued warfarin therapy than nonanemic patients with AF. 26 In a subgroup analysis of the J-ROCKET AF (Japanese-ROCKET AF) trial, baseline anemia with warfarin therapy was one of the independent predictors of bleeding events. 6 In addition, patients with AF and anemia were also associated with a higher prevalence of stroke, a higher CHA2DS2-VASc score, and a greater risk of IS/SE. 1 Similar to the study of patients with mild anemia in the ARISTOTLE trial, 1 the present study found that patients with hemoglobin <10 g/dL were older, more likely to be women, and more likely to have experienced prior heart failure and stroke than patients with hemoglobin ≥10 g/dL. Both CHA2DS2-VASc and HAS-BLED scores were significantly higher in the anemic group. In these patients, NOAC therapy resulted in a similar reduction in the event rates of IS/SE and death and a significantly lower risk of major bleeding or gastrointestinal tract bleeding than warfarin.
Anemic patients with AF receiving anticoagulant therapy require more extensive follow-up because of their increased risks of bleeding and the high probability of anticoagulant interruptions. In the present cohort, 8% of the patients initiating oral anticoagulant therapy had a baseline hemoglobin level <10 g/dL. Before NOAC therapy can be initiated in such patients, their history of bleeding and clinical conditions that are likely to result in bleeding (ie, peptic ulcer disease, impaired renal or liver function, anemia, and thrombocytopenia) should be investigated and, if reversible, corrected. 17,18 Medications that could increase the risk of major bleeding, such as nonsteroidal anti-inflammatory drugs or antiplatelets, should be avoided or balanced against the risks and benefits of anticoagulant therapy. For patients receiving NOAC therapy, it may be recommended to follow up the hemogram 1 month after anticoagulant initiation and then every 6 to 12 months.
Study Strengths
The strengths of our study include using a large, well-defined population sample; available baseline hemoglobin, platelet count, renal function, and liver function data before the initiation of oral anticoagulant therapy; and a direct comparison of NOAC and warfarin therapies in patients with hemoglobin levels <10 g/dL and ≥10 g/dL. To our knowledge, this is 1 of the only 2 studies that have compared NOAC therapy and warfarin therapy in patients with AF and hemoglobin <10 g/dL. 1
Study Limitations
This study has several limitations. First, miscoding and misclassification are potential sources of biases in a database that relies on physician-reported diagnoses. However, such miscoding and misclassification are unlikely to have differed systematically between the 2 subgroups of patients, and our findings that NOAC was safer than warfarin and equally effective in patients with hemoglobin ≥10 g/dL agreed with meta-analysis and real-world data. 27,28 Second, because this study was a retrospective data analysis rather than a randomized controlled trial, both selection bias and unmeasured confounders were evident, despite statistical adjustments. Third, we did not assess the quality of warfarin control by calculating the time in therapeutic range because there were many missing values for prothrombin time during the follow-up period. In Taiwan, the time in therapeutic range measured in one study was 56.6%, 29 which is lower than in white or Japanese populations. The observed benefit of lower bleeding risks with NOACs might disappear when they are compared with well-managed warfarin. 30 Further studies to generalize and apply the findings of this study to other populations are, thus, warranted. Fourth, the sample size for patients with hemoglobin <10 g/dL was only 669. The small sample may not be sufficient to establish the efficacy and safety of NOACs and limits the generalizability of the results. Similarly, given the limited patient and event numbers in each subgroup, extreme caution is needed in the interpretation of subanalysis results.
Conclusions
In patients with AF and hemoglobin <10 g/dL, NOAC therapy was associated with lower risks of major bleeding or gastrointestinal tract bleeding than warfarin therapy, and the 2 therapies showed no significant difference in the risk of IS/SE or death.

This study is based, in part, on data from the Chang Gung Research Database, provided by Chang Gung Memorial Hospital. The interpretation and conclusions contained herein do not represent the position of Chang Gung Memorial Hospital.
Sources of Funding
This work was supported by research grants from the Chang Gung Memorial Hospital (CORPG3G0271) and the Chang Gung University (CIRPD1D0031), Taoyuan, Taiwan.
"year": 2019,
"sha1": "e379d542017ee50c7c91d2a8f98e185094735011",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1161/jaha.119.012029",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e379d542017ee50c7c91d2a8f98e185094735011",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Towards in vivo characterization of thyroid nodules suspicious for malignancy using multispectral optoacoustic tomography
Purpose Patient-tailored management of thyroid nodules requires improved risk of malignancy stratification by accurate preoperative nodule assessment, aiming to personalize decisions concerning diagnostics and treatment. Here, we perform an exploratory pilot study to identify possible patterns on multispectral optoacoustic tomography (MSOT) for thyroid malignancy stratification. For the first time, we directly correlate MSOT images with histopathology data on a detailed level. Methods We use recently enhanced data processing and image reconstruction methods for MSOT to provide next-level image quality by means of improved spatial resolution and spectral contrast. We examine optoacoustic features in thyroid nodules associated with vascular patterns and correlate these directly with reference histopathology. Results Our methods show the ability to resolve blood vessels with diameters of 250 μm at depths of up to 2 cm. The vessel diameters derived on MSOT showed an excellent correlation (R2-score of 0.9426) with the vessel diameters on histopathology. Subsequently, we identify features of malignancy observable in MSOT, such as intranodular microvascularity and extrathyroidal extension verified by histopathology. Despite these promising features in selected patients, we could not determine statistically relevant differences between benign and malignant thyroid nodules based on mean oxygen saturation in thyroid nodules. Thus, we illustrate general imaging artifacts of the whole field of optoacoustic imaging that reduce image fidelity and distort spectral contrast, which impedes quantification of chromophore presence based on mean concentrations. Conclusion We recommend examining optoacoustic features in addition to chromophore quantification to rank malignancy risk. We present optoacoustic images of thyroid nodules with the highest spatial resolution and spectral contrast to date, directly correlated to histopathology, pushing the clinical translation of MSOT. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-023-06189-1.
Introduction
The incidence of thyroid nodules in clinical practice is high and is still increasing [1]. Thyroid nodules can be detected in 19 to 68% of randomly selected individuals using high-resolution ultrasound (US). Most of these patients will have a benign diagnosis [11], implying significant overtreatment with risk of postoperative morbidity (e.g., parathyroid injury, hypothyroidism, and laryngeal nerve injury) that affects the quality of life of these patients [12][13][14].
Patient-tailored management of patients with thyroid nodules suspicious for malignancy would benefit significantly from improved cancer risk stratification through more accurate nodule assessment. Multispectral optoacoustic tomography (MSOT) is a rapidly growing imaging modality that may accommodate this need by adding molecular contrast to conventional US. MSOT enables non-invasive macroscopic imaging (i.e., at the organ level) at depths of several centimeters. By detecting US waves induced by pulsed laser light illumination, referred to as optoacoustic or photoacoustic waves, MSOT has the potential to distinguish and quantify different intrinsic tissue chromophores, such as oxy- (HbO2) and deoxyhemoglobin (HbR), lipid, and melanin, as well as exogenous contrast agents [15]. The clinical potential of MSOT for disease characterization has been shown in various diseases, including thyroid nodules [16][17][18][19][20], breast cancer [21], and Crohn's disease [22]. The first studies imaging thyroid tissue with MSOT showed the potential to differentiate between malignant and normal tissue during ex vivo imaging [16,18]. In vivo MSOT imaging performed on healthy volunteers showed that it was feasible to image the thyroid and identify vascular structures, which were validated using Doppler ultrasound [17]. While ultrasound Doppler can visualize blood flow in lesions, it is not suitable for visualizing blood flow with low velocity. In contrast, MSOT uses the much more distinctive optical absorption of HbO2 and HbR to visualize vessels. Microvasculature typically exhibits low-velocity blood flow, and previous studies suggest that optoacoustic imaging may be more effective than ultrasound Doppler in this setting [23]. Precisely these changes in microvascular blood flow may be relevant for differentiating benign and malignant nodules [24]. Two MSOT studies confirmed the possibility of distinguishing between benign and malignant nodules and healthy thyroid tissue in vivo, and the latter proposed a strategy using optoacoustic imaging to reduce unnecessary FNA [19,20]. The common result across all studies on thyroid nodules was a lower hemoglobin oxygen saturation in malignant than in benign nodules [16][17][18][19][20]. However, the findings of these studies might be distorted because simple data processing and inversion techniques, typically based on back-projection algorithms, were employed; these contain fundamental limitations that decrease image quality and spectral unmixing accuracy [21]. In this exploratory pilot study, we present in vivo MSOT images of thyroid nodules suspicious for malignancy with next-level image quality and accuracy, i.e., spatial resolution and spectral contrast, utilizing recently developed enhanced data processing and image reconstruction methods [21,[25][26][27]]. For the first time, we directly correlate MSOT images with histopathology data on a detailed level, the latter considered the gold standard, confirming the high spatial resolution and spectral contrast of our images. We collect data from 38 thyroid nodules, analyze data sets from 27 thyroid nodules, and present a detailed analysis of five representative cases to outline thyroid nodule features. Anatomical and optoacoustic features from selected regions of the image are provided. Despite these promising features in selected patients, we cannot reproduce the results of previous publications discriminating benign and malignant thyroid nodules based on mean oxygen saturation in the nodule [16][17][18][19][20].
Thus, we investigate general limitations and distorting effects that currently obstruct quantitative MSOT imaging. The findings presented herein provide, to the best of our knowledge, the highest quality optoacoustic images of thyroid nodules suspicious for malignancy to date.
Study design and patients
The current study is a non-blinded, single-center, prospective, pilot study. Patients ≥ 18 years with thyroid nodules who underwent diagnostic US combined with FNA or had an indication for thyroidectomy (e.g., distant metastasis or goiter) were included between September 2020 and April 2021. An index nodule was defined as a nodule for which diagnostic imaging was indicated. Since this was a pilot study, patient inclusion was stopped after including 21 nodules with final histopathology (Fig. 1). Patients were excluded if they had previous surgery in the head and neck area on the ipsilateral side of the index nodule, received prior radiotherapy in the head and neck area, or were pregnant. All patients gave written informed consent before enrolment. The study was approved by the institutional review board Groningen and registered at ClinicalTrials.gov (NCT04730726). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Approval was granted by the Medical Ethics Committee of the University Medical Center Groningen (METc number METc 2019/371).
MS-OPUS image acquisition
All MSOT procedures were performed using a clinical hybrid US-MSOT (MS-OPUS) system (MSOT Acuity Echo prototype; iThera Medical GmbH, Germany). This system uses an Nd:YAG laser (25-Hz repetition rate, 4-7-ns pulse duration) for the emission of 25-mJ light pulses. A two-dimensional handheld concave detector array (256 transducer elements with a center frequency of 3.4 MHz and bandwidth (−6 dB) of 60% in receive-transmit mode, 125° angular coverage) was used for cross-sectional imaging with a field of view of 40 × 40 mm and in-plane spatial resolution up to 200 µm [21,28].
The thyroid nodule and the contralateral healthy thyroid were imaged transversely with the patient in a supine position with the neck in hyperextension. The US signal was detected for anatomical guidance of the imaging procedure. At each position of interest, on average 20 MSOT images consisting of 14 optical wavelengths ranging from 680 to 1195 nm (680, 700, 730, 760, 800, 850, 900, 930, 970, 1000, 1030, 1064, 1100, and 1195 nm) were acquired (acquisition time of a single frame: 0.56 s). These wavelengths were chosen to enable detection and reliable unmixing of Hb, HbO2, lipids, H2O, and collagen in accordance with previous studies [29,30]. Simultaneously, a co-registered US image was obtained with the MS-OPUS device. Per patient, on average nine positions and orientations of interest of both healthy and diseased thyroid tissue were recorded. The number of imaged positions varied between patients depending on the number of thyroid nodules. In total, the imaging time per patient ranged from 5 to 10 min, including the time to determine the correct position and probe orientation. Patients were asked to hold their breath during image acquisition at selected positions of interest to reduce breathing motion. To later correlate the location and orientation of the chosen scans to histopathology, the operator noted the position of the probe during imaging based on anatomical structures. The segmentation of the thyroid nodule was performed with knowledge of the nodule position from previous clinical ultrasonography.
Data processing, image reconstruction, spectral unmixing, and data analysis
Fig. 1 Study workflow including the numbers of included patients and imaged index nodules. Black lines represent the standard of care. The dashed blue lines represent the study intervention and the data included in the final analysis. 1 Two patients were excluded due to technical failure of the MSOT device. 2 Patient was scheduled for thyroidectomy after histologically proven distant metastasis of papillary thyroid cancer. 3 Patient was diagnosed with goiter causing obstructive symptoms of breathing or swallowing difficulties and was directly scheduled for thyroidectomy. 4 One patient was excluded from data analysis as this patient was still in follow-up for metastatic colorectal cancer (Bethesda 4). n patient = number of patients included, FNA = fine needle aspiration, n nodule = number of index nodules

For further analysis, the MSOT images with the least motion were selected, utilizing the co-registered US images as reference. MSOT acquisitions at wavelength 1195 nm were discarded for all scans due to a very low signal-to-noise ratio triggered by strong absorption of the probe's coupling pad. The remaining raw data of the selected frames were additionally pre-processed with a machine learning-based denoising algorithm [25]. This method strongly reduces the electrical noise present in the measurement data and thus boosts morphological and spectral MSOT image quality, in particular the ability to access the rich multispectral contrast at high spatial resolution. Subsequently, the denoised signals were bandpass filtered (Butterworth filter, 0.5-12 MHz) before image formation. MSOT images were then reconstructed utilizing a custom model-based algorithm [26,27]. The implementation is available in a public repository. 1 That the model-based image reconstruction strongly enhances image resolution and spectral contrast compared to standard back-projection methods in optoacoustics has been shown in previous publications [21,26,27,31]. The algorithm performs a non-negativity constrained least squares minimization with an additional shearlet L1-regularization [32,33]. The minimization problem was solved via bound-constrained sparse reconstruction by separable approximation. The algorithm requires 7 to 14 min to reconstruct an image with 14 wavelengths [34]. The speeds of sound in the coupling medium of the probe and inside the tissue, applied for US propagation modelling, were tuned manually for optimal image quality. In addition, the algorithm corrects for the total impulse response of the transducers, which has been shown to enhance the quality and fidelity of reconstructed images even further [21,26]. The regularization parameter was optimized manually with an L-curve.
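For reference, the objective solved by such a non-negativity constrained, shearlet-regularized least squares reconstruction can be stated compactly as below; the notation is ours (A: discretized optoacoustic forward model, y: filtered detector signals, S: shearlet transform, λ: regularization weight) and is a generic formulation rather than a verbatim transcription of the cited implementation.

```latex
\hat{x} \;=\; \operatorname*{arg\,min}_{x \,\ge\, 0}\; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 \;+\; \lambda\,\lVert S x \rVert_1
```

The L-curve mentioned above then traces the trade-off between the data-fidelity term and the regularization term as λ is varied.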
We employed linear spectral unmixing with literature spectra of HbR, HbO2, lipid, melanin, collagen and H2O on the complete MSOT dataset, as performed in previous studies [17,19,20]. In addition, we performed blind spectral unmixing, which has been shown to yield a more distinct separation of molecular targets than linear unmixing, because it can account for distorting effects like spectral coloring [35]. For blind spectral unmixing, both the most significant (i.e., mathematically best matching) absorption spectra and their contributions to the intensity in each pixel (coefficients) are determined in a data-driven manner by performing a non-negative matrix factorization. L2-regularization was employed to decrease the effects of high-frequency noise, and L1-regularization ensured the biologically expected sparse contribution of absorbers to pixel intensities [36]. Both spectral unmixing algorithms were applied to scans of both thyroid nodules and healthy tissue of all 27 patients in post-processing. Linear unmixing required 7.35 s and blind unmixing 27.20 s per image. The number of blindly unmixed spectra was set to eight empirically, and the regularization parameters were optimized using an L-curve.
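As an illustration of the linear variant, a common formulation is per-pixel non-negative least squares against a matrix of reference spectra. The sketch below is a generic implementation with assumed array shapes, not the study's code; the blind variant would instead factorize the pixel matrix itself, e.g. with `sklearn.decomposition.NMF(n_components=8)`:

```python
import numpy as np
from scipy.optimize import nnls

def linear_unmix(img: np.ndarray, S: np.ndarray) -> np.ndarray:
    """Unmix an MSOT stack img of shape (n_wavelengths, H, W) against a
    reference matrix S of shape (n_wavelengths, n_chromophores);
    returns per-chromophore coefficient maps (n_chromophores, H, W)."""
    n_wl, H, W = img.shape
    pixels = img.reshape(n_wl, -1)
    coeffs = np.empty((S.shape[1], pixels.shape[1]))
    for j in range(pixels.shape[1]):
        coeffs[:, j], _ = nnls(S, pixels[:, j])   # non-negative per-pixel fit
    return coeffs.reshape(S.shape[1], H, W)
```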
MS-OPUS scans of the four blind unmixing components with the largest biological interpretability, similar to HbO2, HbR, lipids, and H2O, were evaluated independently by two reviewers: one medical doctor with research experience in nuclear, optical, and optoacoustic imaging of the head-and-neck region and one engineer with research experience in optoacoustic imaging. The selected scans were discussed in the research team comprising mathematicians, a radiologist, a pathologist, and a head-and-neck surgeon. Scans which showed clinically relevant indicators of malignancy observable in MSOT were selected and compared with histopathology for the identification of possible malignancy markers.
Going beyond the analysis of single patients, we performed a statistical analysis to discriminate malignant from benign thyroid nodules in our complete data set. To reproduce the results of previous studies, we computed the mean coefficients of HbO2 and HbR over the thyroid nodule and the contralateral healthy thyroid utilizing the linear unmixing results. Oxygen saturation (SO2) was computed for each pixel as the ratio of HbO2 over the total blood volume (HbO2 plus HbR) [16][17][18][19][20]. Additionally, we also computed the mean coefficients of the blindly unmixed spectra with the largest biological interpretability linked to these chromophores. The thyroid nodule, contralateral healthy thyroid and other selected regions of interest (ROIs) were manually segmented in the US image and transferred to the co-registered MSOT image. Vessel diameters were determined as the full-width-at-half-maximum (FWHM) along profile lines across the vessel. The profile lines were selected manually.
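Both quantities reduce to a few lines of array code. The sketch below is a generic illustration (the 0.1 mm pixel size matches the 100 µm reconstruction grid mentioned in the Discussion; the threshold-crossing FWHM without sub-pixel interpolation is a simplification):

```python
import numpy as np

def so2_map(hbo2: np.ndarray, hbr: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Per-pixel oxygen saturation: HbO2 over total hemoglobin (HbO2 + HbR)."""
    return hbo2 / (hbo2 + hbr + eps)

def fwhm_mm(profile: np.ndarray, pixel_size_mm: float = 0.1) -> float:
    """Full width at half maximum of a 1-D intensity profile across a vessel."""
    half = profile.max() / 2.0
    above = np.flatnonzero(profile >= half)
    return (above[-1] - above[0]) * pixel_size_mm
```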
The mathematical equations for all algorithms applied for data processing, image reconstruction, spectral unmixing and data analysis are provided in the Supplementary Material.
Histopathological analysis
During surgery, the specimen was marked, and throughout the whole pathological process the specimen's orientation was documented. This made it possible to correlate the scans to histopathology and to check whether sufficient histopathological data were available for this correlation. Because the MS-OPUS transducer's sensitivity field leads to an imaged cross-section of approximately 3 mm thickness, the histopathological slide (0.5-3 µm) with the highest similarity to the MS-OPUS image inside this volume was selected. Histopathological data was discussed with a dedicated thyroid pathologist.
Immunohistochemical staining was performed at the Department of Pathology of the University Medical Center Groningen to assess microvascularity in malignant thyroid nodules and in normal thyroid tissue. Based on the location during the imaging of the patients, we selected tissue slides of malignant thyroid nodules and healthy thyroid tissue from the same cases. The selected slides were stained with anti-CD31 antibody, a vascular marker of angiogenesis, using a standardized protocol. Hematoxylin and Eosin (H&E) staining had already been performed per standard protocol and was available for correlation. Additionally, the microvessel density was scored by an expert pathologist from 0 (low) to 5+ (very high) and correlated with the H&E slide, and all vascular structures on the H&E pathology slides were marked. Vessel diameters were determined as the width of the manually segmented vessels on the H&E slides.
Patient characteristics
In total, we asked 52 patients (32 females and 20 males) for consent, of whom 29 agreed to participate and met the inclusion criteria after screening. Thus, a total of 29 patients were enrolled, of whom two were excluded due to technical failure of the MSOT device (one moment of technical failure of the laser trigger board in two patients scanned in succession), leaving 27 patients included in this study (Fig. 1). Fourteen patients were male (51.9%) and the median age was 60 years (IQR 51.0-66.0 years) (Table 1, Supplementary Table 2). All patients were Caucasian, making large differences in melanin content unlikely [37]. In total, we imaged 38 thyroid index nodules (Supplementary Table 3). A total of 17 patients underwent thyroidectomy, leading to 27 index nodules with final histopathology data, totaling 11 malignant and 16 benign nodules. Cytology served as the gold standard in 10 patients, harboring four malignant nodules (Bethesda 6) and six benign nodules (Bethesda 2) (Supplementary Table 3). One patient with a Bethesda 4 nodule was still in follow-up for metastatic colorectal cancer and, due to the lack of final histopathology, was not included in the analysis.
Spectral unmixing of MSOT images
Chromophores of interest for analyzing thyroid nodules were HbO2, HbR, lipids, H2O, and collagen. HbO2, HbR, and H2O were included because a common feature of thyroid cancer is enhanced angiogenesis and an abundance of microvascularity [38]. Lipids were included since several studies report significant alteration of lipid profiles in thyroid cancer compared with adjacent nontumor tissues or benign lesions [39,40]. Collagen was also analyzed, since altered expression of collagen associated with tumorigenesis has been reported in thyroid cancer [41]. The reference absorption spectra of these chromophores of interest and the blind spectral unmixing components with the strongest similarity to them are shown in Fig. 2. Supplementary Fig. 1 shows all blind spectral unmixing components. Throughout the manuscript we display blindly unmixed MSOT images. For comparison, we also include all linearly unmixed components for one scan in Supplementary Fig. 5. For blind unmixing, we could not identify a single component linked solely to collagen. Melanin was included in linear unmixing but was not investigated further, because it is not present in thyroid nodules. In addition, all patients were Caucasian with comparable skin color, so the optical absorption of melanin affecting the overall light penetration depth and spectral coloring did not have to be considered any further.
Scan selection for detailed analysis of MS-OPUS images
In all 12 selected cases, MS-OPUS showed distinctive patterns. However, in 4 of the 12 cases, MS-OPUS could not be correlated to histopathology, as no histopathology data was available from the plane at which the MS-OPUS scan was performed. In the remaining eight cases, the distinctive patterns on MS-OPUS could be correlated to histopathology. The following sections showcase five of these cases and correlate them with available histopathology. The remaining cases are displayed in Supplementary Fig. 6.
Identification of thyroid vascularization on MS-OPUS validated by histopathology
We describe four cases (Case 1-Case 4) with distinctive findings in the MS-OPUS images which are substantiated by histopathology and could not be detected with US alone (see Supplementary Fig. 8 for pure US images of each case). All patients were diagnosed with papillary thyroid cancer and underwent a total thyroidectomy. Figure 3 shows the findings per case, with thyroid nodules delineated in white.
The first distinctive pattern visually identified on the MS-OPUS scans of all selected cases relates to vascular structures. Thyroid cancer commonly features enhanced angiogenesis and an abundance of microvascularity [38]. Moreover, aggregated vascular complexes have been identified in the stroma of tumor papillae, but not in the healthy thyroid or adenomas [38]. Therefore, vascularity may be a distinguishing feature for thyroid cancer. In order to investigate this more thoroughly, the high-resolution visualization of vessels on MSOT must first be validated. To do so, MS-OPUS blends (overlay of MSOT with the co-registered US image) were correlated to final histopathology. In all four cases, the shapes and diameters of prominent vessels in the nodule on MSOT accurately reproduce the histopathology sections (Fig. 3). For example, in Case 1, the MS-OPUS blend displays a blood vessel located 5 mm from the center of the nodule with a diameter up to 0.3 mm, whose absorption spectrum is clearly dominated by HbO2 (Fig. 3, Case 1, panels A (white arrows), B (absorption spectrum), and C (diameter of the vessel)). The presence of this vessel is histologically validated, showing a diameter of 0.33 mm and localization 5 mm from the center of the nodule (Fig. 3, Case 1, panels D and E (vessel delineated in red)). Case 2 also displays that the shape and the diameters measured at several cross-sections of the vessels on MS-OPUS (~ HbO2) closely resemble the shape and diameters on corresponding histopathology sections (Fig. 3, Case 2). In Case 3, we identify, and confirm on histopathology, two vessels in a healthy thyroid lobe on MS-OPUS, again dominated by HbO2 absorption, including one vessel (ROI 1) deep in the tissue with a diameter of 0.3 mm and another one on the lateral cranial side of the thyroid gland (ROI 2) (Fig. 3, Case 3). The analysis of this patient's papillary thyroid carcinoma is described as Supplementary Case 3 in Supplementary Fig. 2. In Case 4, we identify three vessels with strong HbR contrast at high resolution (Fig. 3, Case 4, panels T (white lines), U (white arrow), and V (absorption spectrum)). Because the absorption spectra of the selected ROIs show the greatest similarity to HbR (see panel V), we display this component. The component similar to HbO2 is displayed in Supplementary Fig. 3. Two vessels are observed in the thyroid nodule and one in the surrounding muscle (ROI 3). We note that the vessel in ROI 3 has its origin inside the thyroid nodule but also enters the muscle on top of the nodule, which could be explained by the presence of extrathyroidal extension in the surrounding muscles as determined with histopathology (Fig. 3, Case 4, panel Z (vessel delineated in red, muscle delineated in yellow)). In fact, the diameter measured on MSOT at the cross-section of the vessel that invaded the muscle matches the diameter of the corresponding extrathyroidal vessel at histopathology (Fig. 3, Case 4, panel Y).
Collectively, the vessel diameters derived as FWHM on MS-OPUS showed an excellent correlation with the diameters of manually segmented vessels on histopathology (n = 7, 4 different cases, Fig. 3 panels B, C, I, J, K, Q, R, W, X, Y), with a regression analysis showing an R² score of 0.9426.
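The R² of such a paired comparison can be reproduced with a one-line least-squares fit. The diameter pairs below are a hypothetical illustration assembled from values quoted in the text, not the study's actual measurement table:

```python
import numpy as np

# Hypothetical (MSOT FWHM, histopathology width) pairs in mm -- illustration only.
d_msot  = np.array([0.30, 0.40, 0.90, 0.30, 0.60, 0.30, 0.60])
d_histo = np.array([0.33, 0.45, 0.86, 0.32, 0.60, 0.30, 0.60])

slope, intercept = np.polyfit(d_msot, d_histo, deg=1)
pred = slope * d_msot + intercept
r2 = 1 - np.sum((d_histo - pred) ** 2) / np.sum((d_histo - d_histo.mean()) ** 2)
print(f"R^2 = {r2:.4f}")
```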
Microvascularity in thyroid nodules as potential malignancy marker identified on MS-OPUS
The second distinctive pattern we observe on MS-OPUS is an abundance of microvascularity in multiple malignant thyroid nodule cases, which could reflect tumor angiogenesis [38]. For example, in Case 1, component 2 shows very fine structures in high density up to a depth of 2 cm, in addition to the previously described dominant vessel, which can be interpreted as high microvessel density (Fig. 3, Case 1, panel A, yellow arrows 4-6). Staining with anti-CD31 antibody, a vascular marker of angiogenesis, shows high microvessel density throughout the complete nodule on the corresponding histopathology sections, which is also increased compared to the surrounding healthy thyroid tissue (Fig. 3, Case 1, panels F and G). The same is observed in Case 2 (Fig. 3) and Case 3 (Supplementary Fig. 2).
Non-significant differences between malignant and benign thyroid nodules by mean oxygen saturation and collagen architecture in the capsule
Despite the high quality of the overall data set and the promising findings for single patients, we cannot reproduce the significant differences between malignant and benign thyroid nodules reported in previous studies. Mean concentrations of HbO2 and HbR in the nodule ROI, as well as the corresponding SO2 obtained with linear unmixing, do not show a significant difference (Fig. 4, panel A). The same applies to the mean coefficients of component 2 (related to HbO2) and component 4 (related to HbR) and their ratio, which are obtained by blind unmixing (Fig. 4, panel B). Thus, we fail to obtain quantitative results from our MSOT data with mean coefficients over the nodule ROI and are not able to determine a decrease in oxygen saturation as in previous studies [16][17][18][19].
Additionally, we are not able to identify collagen in thyroid nodules from MS-OPUS data. For example, Case 5 showcases a prominent capsule of the thyroid which is visible in the histology slide (Fig. 4, Case 5, panel E). Thyroid capsules are composed of connective tissue with a significant collagen concentration [42]. MSOT also shows contrast in the boundary regions of the nodule (Fig. 4, Case 5, panel C, region between arrows 2 and 3). However, the mean absorption spectrum of this ROI recovered from MSOT shows the greatest similarity to HbO2 (Fig. 4, Case 5, panel D). Thus, we are unable to detect the capsule on the MS-OPUS image, because strong perfusion is not a unique capsule marker and the presence of collagen cannot be recovered. Therefore, the proposed variations in the collagen architecture of benign versus malignant nodules cannot be established.
Possible explanations for why we are neither able to establish a decrease in oxygen saturation for malignant thyroid nodules compared to benign ones nor able to quantify collagen in the thyroid nodule capsule from our MSOT data are discussed in the following paragraphs.
Limitations of quantitative MSOT imaging
Major challenges for quantitative MSOT imaging arise from acoustic reflection artifacts [43]. Human tissue is naturally heterogeneous, exhibiting multiple textures with varying acoustic properties (i.e., speed of sound and mechanical density). Thus, there is a mismatch in acoustic impedance between different textures, which constitutes a reflective interface for US waves. In MSOT, optical absorbers emit US waves in all spatial directions. Consequently, the wave of a single absorber may arrive at a US detector multiple times. The first incidence corresponds to the directly impacting wave, whereas any following can be regarded as a reflection. If the physical optoacoustic model only accounts for direct wave propagation, the signal origin of the reflected wave will not be localized correctly. An artificial optoacoustic source mirrored to the opposite side of the reflective interface will emerge in the image. A thorough investigation of the anatomy in the US image shows multiple interfaces in the subcutaneous fat which mirror the strong H2O contrast of the dermis into the ROI of the muscle fascia. We would like to emphasize that the vascular patterns of malignancy identified in single cases in this study (see paragraphs 3.4 and 3.5) were ruled out as reflection artifacts by five experts.
Quantitative MSOT imaging is also impeded by limited penetration depth due to strong optical absorbers like hemoglobin and melanin in superficial skin layers. Light attenuation varies significantly depending on the positioning of the probe on the skin, the skin tone [37] and the heterogeneous tissue. The stronger decay of light fluence limits the imaging depth, because the optoacoustic signal generated by absorbers decreases more rapidly, resulting in a lower signal-to-noise ratio [44]. Moreover, the heterogeneous tissue introduces the effect of spectral coloring, which refers to the phenomenon of distorted optical absorption spectra recovered from MSOT data compared to references from the literature. This distortion is caused by inadequate modeling of the light fluence decay inside tissue for different wavelengths [36]. Mean absorption spectra in ROIs 1 and 2 in Case 3 display strong similarity to the absorption of HbO2 (Fig. 5, Case 3, panel E). This is biologically reasonable, because histopathology unveils a blood vessel in the healthy thyroid in this region (Fig. 5, Case 3, panel F). However, the mean absorption spectra determined with MSOT show a clear notch in the wavelength range between 970 and 1000 nm compared to the reference HbO2 absorption spectrum. This notch displays the effect of spectral coloring and is triggered by strong H2O absorption in the dermis region on top of the selected ROIs. Because H2O strongly absorbs light in the wavelength range between 970 and 1000 nm (Fig. 5, Case 3, panel E), fluence decreases faster with depth for these wavelengths than for others. However, this variation in fluence decay for different wavelengths is not incorporated in the physical model used for image reconstruction. Light absorption deep in tissue at these two wavelengths (970 and 1000 nm) will thus appear to be decreased, resulting in distorted absorption spectra. Inherently, the quantification of biomarker presence utilizing linear unmixing with reference literature absorption spectra will be affected negatively.
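The mechanism can be reproduced with a toy Beer-Lambert model: a wavelength-dependent fluence decay multiplies the true spectrum, carving a notch where the overlying water absorbs most. All numbers below are illustrative orders of magnitude, not literature absorption values:

```python
import numpy as np

wl = np.array([680, 730, 800, 850, 900, 930, 970, 1000, 1064, 1100])   # nm
# Illustrative absorption values (1/cm), NOT literature data:
mu_water  = np.array([0.005, 0.02, 0.02, 0.04, 0.07, 0.27, 0.45, 0.40, 0.14, 0.10])
hbo2_true = np.array([0.60, 0.50, 0.45, 0.55, 0.65, 0.70, 0.75, 0.80, 0.90, 0.95])

depth_cm = 1.5                                  # assumed water-rich overlying path length
fluence = np.exp(-mu_water * depth_cm)          # wavelength-dependent fluence decay
hbo2_apparent = hbo2_true * fluence             # spectrum recovered without fluence correction
# hbo2_apparent dips around 970-1000 nm relative to hbo2_true: the "notch" described above.
```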
In our data set, we observed a relative mean squared error of 9.03% for linear unmixing. In other words, 9.03% of the MSOT image contrast cannot be explained by the contribution of the biomarkers selected for unmixing. Because blind spectral unmixing recovers the mathematically best matching absorption spectra from the measurement data itself, it can account for the distortions caused by spectral coloring. Therefore, blind spectral unmixing generally provides a more authentic representation of MSOT measurement data than linear unmixing, resulting in a relative mean squared error of only 0.84% in this data set [35]. Yet, interpretation of blind unmixing results remains challenging, because only some of the recovered absorption spectra can be linked to specific chromophores (Fig. 1).
In addition to reflection artifacts, reduced depth penetration and spectral coloring, scientists may erroneously interpret optical absorption as a mixture of different chromophores. This phenomenon of spectral cross-talk also deteriorates image quantification [45]. For example, a large artery can clearly be identified in Case 2 (Fig. 3, Case 2, panel H, region between arrows 1, 2, 3 and 4) by the mean optical absorption spectrum in the MSOT image (Fig. 3, Case 2, panel I). The corresponding histology section confirms this observation, displaying a vessel of equal size and shape (Fig. 3, Case 2, panel L). As expected, blind spectral unmixing recovers strong contributions by a component with absorption similar to HbO2 (component 2) in this ROI (Fig. 3, Case 2, panel I). However, the absorption spectrum of component 2 features weaker absorption at wavelengths above 850 nm compared to the literature absorption of HbO2 (Fig. 2, panels A and B). To account for this deviation, blind unmixing will represent the HbO2 contrast as a mixture of components. Consequently, component 3 also exhibits strong intensity in ROI 1 of Case 2 to account for the absorption at 930 nm (Fig. 5, Case 2, panel B). If component 3 were to be utilized independently for biomarker quantification due to its strong similarity to the optical absorption of lipids, the presence of lipids surrounding the artery could be erroneously concluded. However, this interpretation contradicts the findings in the corresponding histology slide (Fig. 5, Case 2, panel C).
Ultimately, chromophore quantification from MSOT may also be distorted because strong optical absorbers mask weak ones in MSOT [46]. For example, tissue with a high collagen concentration, like the connective tissue (including collagen) in the capsule of the thyroid gland and thyroid nodule, is very often also strongly perfused [42]. HbO2 and HbR trigger a stronger signal in the acquired near-infrared wavelength range than other chromophores like collagen [47]. Additionally, the absorption spectrum of collagen does not contain any distinguishing features in this optical range which would enable a differentiation from other absorbers [29]. We hypothesize that such masking by stronger absorbers occurs in Case 5 of this study (Fig. 5, panels c-f). The patient was diagnosed with multinodular goiter. The immunohistochemistry image shows a collagen-rich thyroid gland (Fig. 5, panel f). However, MSOT only shows HbO2- and HbR-related contrast located in the thyroid capsule (Fig. 5, panel c, between arrow 1 and arrow 2) without any signal attributable to collagen. The absence of a collagen signal in MSOT may be a result of masking by strong optical absorbers like HbO2 and HbR (refer to 3.6 and Supplementary Fig. 5 for linear unmixing images).
Discussion
In this pilot study, we apply previously enhanced signal processing and image reconstruction methods for clinical MS-OPUS to provide unprecedented image quality in optoacoustic imaging of thyroid nodules. These features could, for the first time, be linked directly to reference histopathology data. In correlation with histopathology, we identify possible features of malignancy visible in MS-OPUS, such as an abundance of microvascularity in malignant thyroid nodules, and visualize extrathyroidal extension in the surrounding muscle. We showcase representative examples seen in five cases. The techniques and findings presented herein provide, to the best of our knowledge, the highest quality optoacoustic images of thyroid nodules to date.

Fig. 3 Four representative cases of spectral features in thyroid nodules with distinctive vascularization and high microvascularity in the nodule at high resolution, correlated to histopathology. Case 1 (a-g): papillary thyroid carcinoma delineated in white. MS-OPUS (a) shows rich microvascularity (yellow arrows 4, 5, and 6; a more detailed image is added as Supplementary Fig. 4) in the thyroid nodule and a blood vessel (white arrows) crossing the cranial side of the nodule. The absorption spectrum is dominated by HbO2 (b, CR = 0.9999984). MSOT estimates a 0.3 mm vessel diameter along line 3 (c). Histopathology confirms the presence of this vessel with a diameter of 0.33 mm (d, e), and immunohistochemistry with anti-CD31 antibody shows higher microvascularity in the thyroid nodule (quantified as 5+, f) than in the surrounding healthy thyroid tissue (quantified as 1+, g). Case 2 (h-o): papillary thyroid carcinoma delineated in white. MS-OPUS (h) shows rich microvascularity in the thyroid nodule (yellow arrows 7 and 8; a more detailed image is added as Supplementary Fig. 4) and identifies a vessel with varying diameters which splits into two branches (white arrow 1, splitting at white arrow 2 into 3 and 4). The absorption spectrum is dominated by HbO2 (i, CR = 0.9999601). MSOT estimates a vessel diameter ranging from 0.4 mm (line 5) (j) to 0.9 mm (line 6) (k). Histopathology confirms the presence of this vessel in the exact same shape and shows a diameter ranging from 0.45 to 0.86 mm (l, and in more detail in m). Immunohistochemistry with anti-CD31 antibody showed higher microvascularity in the thyroid nodule (quantified as 3+, n) compared to the contralateral healthy thyroid tissue (quantified as 1+, o). Case 3: healthy thyroid tissue delineated in white. MS-OPUS shows two vessels, white arrow 1 and line 2 (p; arrow 1 CR = 0.9999853, line 2 CR = 0.9999928). The absorption spectrum is dominated by HbO2 (q). MSOT estimates a vessel diameter along line 2 of 0.3 mm (r). Both vessels are identified on histopathology, ROI 2 with a vessel diameter of 0.32 mm (s). Case 4: papillary thyroid cancer delineated in white with extrathyroidal extension. MS-OPUS (t, u) identifies three vessels (line 1 CR = 0.9999995, line 2 CR = 0.9999923 and line 3 CR = 0.9999950) with an absorption spectrum similar to HbR (v). The vessel indicated by white arrow 3 (u), with its origin inside the thyroid nodule but also entering the muscle on top of the nodule, visualizes extrathyroidal extension of the tumor in the surrounding muscles, as confirmed on histopathology (z, where red marks vessels and yellow marks muscle). MSOT estimates a vessel diameter of 0.6 mm along line 1 (w), 0.3 mm along line 2 (x) and 0.6 mm along line 3 (y), with matching diameters on histopathology (z).
Our approach shows the ability to resolve blood vessels with diameters as small as 250 μm at depths of up to 2 cm. We surmise that the microvessel density in deeper levels of the nodules could not be visualized with MSOT due to decreasing light fluence. To validate our results, we linked the MS-OPUS scans directly to histopathology data, showing an excellent correlation between the vessel diameters measured on MS-OPUS and on histopathology. Comparable imaging quality for thyroid nodules has so far not been reported. Despite the lower resolution of MSOT compared to the microscopy applied in histopathology and an estimated 10% shrinkage of the tissue during the fixation process [48,49], we expect similar vessel diameters in both modalities. The sparsity regularization in the shearlet domain enhances edges like small vessels [50]. Because we reconstructed images at 100 µm resolution, vessels can be reconstructed with higher precision than the physical resolution would suggest. Moreover, diameters in MSOT were determined as FWHM along a profile line, whereas the diameters in histopathology were determined as widths of the manually segmented vessel in the image. The validated MS-OPUS scans were subsequently used to study vascular patterns as a distinguishing feature for thyroid cancer, as it is known that thyroid cancer, on the pathology level, shows enhanced angiogenesis, distinctive morphological features and an abundance of microvascularity [38]. We found that MSOT may be capable of identifying an abundance of microvascularity in malignant thyroid nodules. Previous publications demonstrated the ability of MSOT to image vessels with a resolution of up to 200 μm at depths up to 2 cm in varying tissue types [21,26,[51][52][53][54]. Therefore, MS-OPUS can potentially improve the preoperative detection of malignant thyroid nodules based on the microvessel density. Furthermore, as a result of the validated MS-OPUS scans, we are able to show the presence of extrathyroidal extension of the nodule into a surrounding muscle. The literature shows that the extent of extrathyroidal extension is an adverse prognostic factor in thyroid carcinomas, associated with a higher recurrence risk and lower survival [55,56]. In fact, in the American Thyroid Association guidelines of 2015, the presence of extrathyroidal extension is one of the features that implies high malignancy risk (> 70-90%) [7]. The presence of extrathyroidal extension in a differentiated thyroid carcinoma will place the patient into a high-risk group, which mandates a more aggressive treatment strategy consisting of total thyroidectomy and post-operative radioactive iodine therapy [7,57,58]. This treatment strategy has a major impact on patients, because the resulting hypothyroidism requires thyroid hormone replacement therapy and thereby affects the quality of life.

Case 5 (c-f) shows a patient with multinodular goiter. The MS-OPUS blend displays strong MSOT contrast in the boundary region of the thyroid (Case 5, panel c, region between arrows 1 and 2; thyroid gland delineated in white). The mean absorption spectrum of the thyroid capsule (between arrows 1 and 2, CR = 0.9999984) obtained from MSOT (Case 5, panel d) is very similar to HbO2, without specific features related to collagen. The corresponding tissue slide (f) shows collagen-rich tissue below the capsule of the thyroid gland without any signal of collagen on MS-OPUS.
Total thyroidectomy is associated with complications such as iatrogenic hypoparathyroidism, dysgeusia and xerostomia, also resulting in a poor quality of life [59][60][61]. The sensitivity and specificity of US in detecting extrathyroidal extension range from 62.9 to 65.2% and from 81.8 to 97.6%, respectively [62]. Here, we show the potential of MSOT to serve as a non-invasive technique to qualitatively assess the presence of malignancy and extrathyroidal extension, allowing for a more aggressive and thus appropriate treatment (i.e., surgery and dose of radioactive iodine).
Earlier studies on MSOT imaging of the thyroid suggest its potential to differentiate malignant from benign nodules and normal human thyroid tissue based on differences in mean HbR, HbO2, and SO2 concentrations in the nodule region [16][17][18][19][20]. Furthermore, the architecture of collagen varies between the capsules of benign and malignant thyroid nodules, and collagen could help detect the disruption of the capsule or the thyroid gland, which is a sonographic finding associated with extrathyroidal extension [42,63]. Consequently, MS-OPUS might provide information about an additional malignancy marker in vivo. Although in the current study we achieve the highest quality, i.e., spatial resolution and spectral contrast, optoacoustic images of thyroid nodules to date, we are unable to replicate the results of these previous MSOT studies. Thus, we were not able to find statistically significant differences between malignant and benign thyroid nodules based on mean coefficients over the thyroid ROI. To investigate this further, we present general limitations of the field of optoacoustics inhibiting the quantification of biomarker presence. Image artifacts due to acoustic reflections cause implausible contrast superimposing optoacoustic signals in the same ROI. Spectral coloring due to unaccounted light fluence variation distorts the absorption spectra of chromophores recovered from MSOT images. Furthermore, strong optical absorbers both limit the light penetration depth due to their strong presence in superficial skin layers and can mask the optical absorption of other chromophores like collagen. Finally, for the acquired wavelengths, the absorption spectrum of collagen does not exhibit unambiguous features, such as a distinct absorption maximum or individual variations.
Combined, these limitations strongly impede the quantification of chromophore presence from MSOT. In addition, the mean is a measure of the absorbed energy in the region and is thus strongly dependent on the nodule size, the segmentation accuracy, and the overlying tissue with its varying light absorption. Consequently, we emphasize that visual inspection of standalone MSOT images can lead to wrong interpretations. Inferring the presence of intrinsic biomarkers based on the mean absorption spectrum in a defined ROI using spectral unmixing techniques might contradict biological reality. Therefore, we recommend also looking for qualitative optoacoustic features of malignancy within a ROI rather than only quantifying the mean absorption spectrum within it.
Our small cohort limits the ability to establish patterns predictive of malignancy in thyroid nodules. A future validation study with a more extensive data set should compare histopathology and MS-OPUS images for more patients and confirm our findings; it may succeed by using the data processing and image reconstruction methodology applied here. The present study was a first step in identifying such possible optoacoustic features that could be used for the characterization of thyroid nodules.
In this study, by coincidence, only papillary thyroid cancers and benign thyroid nodules were included. A subsequent study should include different types of thyroid cancer (such as follicular thyroid cancer) and benign lesions to examine whether our results can be translated to other types of thyroid nodules. Future studies should also focus on diminishing artifacts in MSOT images, since artifacts may lead to wrong interpretation of images, possibly resulting in inadequate diagnosis and treatment in clinical applications. This requires training of clinicians to decrease the number of artifacts resulting from data acquisition. Algorithmic improvements incorporating optical tissue priors and acoustic ultrasound information as well as uncertainty quantification will both mitigate artifacts like acoustic reflections and enhance the data analysis, facilitating the interpretation [64][65][66]. Additionally, MSOT probes could be optimized for deep tissue penetration, and MSOT images should be acquired at more wavelengths in the selected wavelength range. This may lead to more distinct spectral unmixing results, enabling a clearer detection of HbO2 and HbR absorption and the differentiation of different collagens as potential markers of malignant thyroid nodules [67]. For the analysis of a large MS-OPUS data set, we advocate extending the statistical analysis beyond means and standard deviations in selected ROIs. Metrics like spectral or spatial correlations of optical absorbers should be investigated to substantiate proposed findings and quantifications [68]. The application of novel machine learning methods could enhance image quality further and unveil more complex optoacoustic features. Ultimately, it may improve the understanding of MSOT contrast beyond the state of the art and help to predict malignancy in thyroid nodules. At the current stage, a trained operator is still necessary to analyze MS-OPUS images. We assume that MS-OPUS will contribute to the diagnostic process of thyroid nodules in the future. Thereby, overtreatment (i.e., diagnostic hemithyroidectomy) of patients with thyroid nodules and the inherent postoperative morbidity could be decreased massively, leading to a higher quality of life for patients with thyroid nodules. Data availability The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. | 2023-04-12T06:16:57.020Z | 2023-04-11T00:00:00.000 | {
"year": 2023,
"sha1": "b32c66f85e202af10f46cbc27079a3ee8b2d9f86",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00259-023-06189-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "6bd1796f00d57b558cf2dce675b060b76e7cc346",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256492728 | pes2o/s2orc | v3-fos-license | The influence of product differentiation, price, and positioning on purchasing decisions at Niceso stores in South Tangerang
ABSTRACT
Facing competition in today's market, companies need the right strategy to grab the attention of consumers. Companies must anticipate increasingly competitive developments by carrying out good strategies so that they can develop amid the fierce competition in the market (Taringan et al., 2022). The success of a company's marketing is judged not only by the number of customers but also by the strategy for retaining customers; a company will survive in the market if it also succeeds in maintaining the level of purchases from consumers (Wibowo et al., 2015). Niceso is a retail company with a modern store concept that provides various types of products ranging from stationery, toys, bags, household appliances, beauty products, beauty accessories, children's toys and others. As a retail store, Niceso has competitors offering many similar products, so it must choose and implement a good strategy (Wibowo et al., 2015). Niceso must consider product differentiation, price and product positioning in order to influence customers' decisions to prefer Niceso products. Because market power is in the hands of consumers, Niceso stores must think about creating products that consumers need and want (Rauf, 2019).
The importance of adopting the right strategy lies in its ability to improve consumer purchasing decisions, because purchasing decision factors depend on the accuracy of the strategy implemented by the company (Rahim, 2018). Price is a marketing variable that is important for companies to pay attention to, because price greatly influences consumers' decisions to buy goods and the profit the company will achieve. The company aims for a price that is affordable for consumers while remaining consistent with product quality and with the prices of competitors' products in the market, so the price should not be set too low. Price is the amount of money that must be paid by the customer for a product that the customer will buy (Kotler & Armstrong, 2018). The purchase decision is the stage in the buyer's decision-making process where the consumer actually buys (Kotler & Armstrong, 2018). When deciding to buy an item, consumers will look at the product, so Niceso must build customer perceptions of the product. Customer perception is very important for companies in creating an advantage for their products, making the company's products exist in the minds of customers (FN Simamora & Situmeang, 2018). To create customer perceptions, companies need to build a good strategy, which can start from within the company (Mauliansyah, 2018). Companies can differentiate by looking for existing sources of competitive advantage, identifying the differences the company has, and thinking about an effective position in the market. One approach is the differentiation strategy. Differentiation is defined as the process of designing a series of meaningful differences to distinguish a company's offering from competitors' offerings (Yuvira et al., 2021).
LITERATURE REVIEW
To create product memory in consumers, companies need to build a good image in the minds of consumers. Positioning is a company strategy for designing a product and marketing mix so that the company's products are remembered by consumers, so that consumer perceptions reflect the actions taken by the company to compete with its competitors (Purnasari, 2018). If the brand image of Niceso products is well remembered by consumers, the use of positioning is appropriate, and together with product differentiation it shapes the choices consumers make in purchasing decisions.
The results of previous studies indicate that the product differentiation variable has a positive and significant effect on purchasing decisions (Ayodya, 2016). Product differentiation has a significant positive effect on consumer purchasing decisions according to research by Gunawan & Maftuchach (2022). However, according to the research results of Dances et al. (2022), product differentiation is partially insignificant to purchasing decisions. The relationship between price and purchasing decisions is that price affects consumer purchasing decisions: the higher the product price, the lower the decision to buy the product, and vice versa (Kotler & Armstrong, 2018). According to the research results of Sudjatmika (2017), the price variable partially has no significant, or a negative, effect on purchasing decisions. Other results show that price has a significant positive influence on the consumer purchasing decision variable (IK Simamora & W, 2013). Positioning did not have a significant effect on purchasing decisions according to Rauf (2019). However, Simamora & W (2013) revealed, based on their research, that positioning has a significant positive effect on the purchasing decision variable. The results of these various previous studies thus still show contradictions, or gaps, between the existing variables: between some variables there is a positive effect, and between others there is none. This prompted the present research to examine more deeply how much influence product differentiation, price, and positioning have on purchasing decisions, given the importance of establishing product differentiation, price and positioning in determining strategies to influence purchasing decisions.
Product Differentiation
Product differentiation is the result of a company's efforts to differentiate its product from competing companies' products so that customers want its products more (M. Simamora, 2004). Another definition states that product differentiation means "actually differentiating the market offering to create superior customer value", i.e., a design that makes a difference in the market offering so that it has high value in the minds of customers (Kotler & Armstrong, 2018). It is the activity of designing a series of product differences that distinguish what the company produces from its competitors. Based on the definitions above, it can be concluded that product differentiation is a company tactic for competing with its rivals by differentiating its products so that customers are more attracted to them; these differences make customers interested, give the products useful value, and distinguish the company's products from those of its competitors. According to Kotler (2005), there are several indicators of product differentiation, among others: feature, form, performance quality, durability, conformance quality, reliability, repairability, style, and design.
Price
Price is a convention of value that becomes an exchange requirement in a purchase transaction. In another sense, price is what must be spent to get a product (Haryanto, 2010). Another opinion holds that a product that is made and marketed properly can be sold at a high price and yield a large profit (Kotler & Keller, 2016). Price is the amount of money set for a product, or the amount of value exchanged for the benefits of owning the product (Kotler, 2005). According to Kotler & Armstrong (2018), price has the following indicators: the price is affordable given the purchasing power of consumers, there is suitability between price and quality, the price is competitive with other similar products, and there is suitability between price and benefits.
Positioning
Positioning is a combination of product differentiation and market segmentation; its focus is the views and preferences of buyers regarding a product in the market. Market segmentation helps companies determine the characteristics of the target market. Companies doing positioning will produce product positions, namely product descriptions that are distinct from and relatively superior to those of competitors. Positioning is thus a management tactic that combines research and segmentation to shape an impression of the product in accordance with the expectations of the target market (IK Simamora & W, 2013). In another sense, positioning is a way for a product to establish and build value in consumers' memory. The basic indicators of positioning are attributes, benefits, quality, price, usability, users, competitors, and consumer culture (Kotler & Armstrong, 2018).
Purchase Decision
Marketing activities can influence consumers to decide to buy or use a product or service. According to Kotler (2005), the evaluation of alternatives and the purchase decision begin with an intention to buy, which measures the consumer's inclination to take a particular action toward a product within a comprehensive scope.
Relationship of the Dependent Variable with the Independent Variables
The relationship between product differentiation and purchasing decisions concerns products. Regarding decision making, according to Subianto (2007), it is a process that starts from thinking about a problem, generating alternative solutions, and deciding to choose one particular alternative to achieve a purchasing goal. The purchase decision process occurs gradually and at length before a purchase choice is made; marketers must therefore carefully target consumers whose problems or desires lead them to a particular object.
The relationship between price and purchase decision
Consumers will look at the price of a product, which relates to the purchase decision. Consumers review and compare products extensively, using product price standards as a benchmark when making product purchase transactions (Hoffmann et al., 2013). Price can affect consumer decisions in making purchases: if the price is high, the purchase decision will be lower, but if the price is low, the purchase decision will be higher (Kotler & Armstrong, 2018).
Thus, the hypothesis can be formulated as follows: H2: Price influences purchasing decisions. In competition between similar products, companies must be able to create a product that is different and has a special image in the eyes of consumers. Products similar to Niceso's are available in many places, making it a challenge for Niceso to increase sales and dominate the market. Consumers will search for information, evaluate, become acquainted with new products and make purchasing decisions. Consumers will seek information about product differentiation, which attracts consumer purchase interest and thereby increases product purchases (FN Simamora & Situmeang, 2018). Thus, the hypothesis can be formulated as follows: H1: Product differentiation influences purchasing decisions.
The relationship between positioning and purchasing decisions
A company uses a positioning strategy to create a good product image in the minds of consumers that is able to encourage the rate of product purchases. Several forms of product placement can be carried out in marketing activities toward target markets, in the form of attributes, applications, benefits and uses, competitors, categories, prices, and users (Pfoertsch & Philip, 2007). Thus, the hypothesis can be formulated as follows: H3: Positioning influences purchasing decisions.
Research Hypothesis
A hypothesis is a provisional answer to the formulation of the problem in research (Sugiyono, 2010). The hypotheses in this study are as follows. H1: It is suspected that the product differentiation attribute (X1) has an effect on purchasing decisions (Y). H2: It is suspected that the price attribute (X2) has an effect on purchasing decisions (Y). H3: It is suspected that the positioning attribute (X3) has an effect on purchasing decisions (Y). H4: Product differentiation (X1), price (X2), and positioning (X3) are thought to jointly influence purchasing decisions (Y).
METHOD
In terms of the type of data, the approach used in this research is quantitative. The quantitative method is a research method based on concrete data: research data in the form of numbers, measured statistically as a calculation tool, applied to the relationship being studied in order to reach conclusions (Sugiyono, 2010). The population in this study consists of buyers of Niceso products in South Tangerang. The research uses primary data, collected by means of an online questionnaire.
Respondent Profile
In November 2022, an online questionnaire was distributed and received 146 responses. The final sample consisted of 122 valid respondent responses, which were analyzed. The characteristics of the respondents are presented in Table 1.
Validity test
The validity test assesses the extent to which an instrument is able to measure what it is supposed to measure. According to Sugiyono (2017), validity is the degree of accuracy between the actual data occurring in the object and the data collected by the researcher.
Source: Data Processing, 2022.
Reliability Test
The reliability test measures the consistency of the indicators of a variable. It is a test to obtain information on the level of reliability, i.e., the efficacy of a questionnaire in capturing data, as indicated by its alpha coefficient value (Rauf, 2019). A questionnaire is declared reliable if the respondents' answers to its statements are consistent over time (Ghozali, 2009).
From the data it can be seen that the Cronbach's Alpha measurement is 0.934, which is higher than 0.60. It can be concluded that the variables tested in this study are reliable. In the output table, the Cronbach's Alpha If Item Deleted values of all variables are > 0.60, so it can be concluded that all variables are reliable.
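As a reference for how such a coefficient is obtained, a minimal Python implementation of Cronbach's alpha over a respondents-by-items score matrix could look like this (the random matrix merely stands in for the survey data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert-scale answers."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

scores = np.random.default_rng(0).integers(1, 6, size=(122, 20))  # placeholder data
print(f"alpha = {cronbach_alpha(scores):.3f}")   # reliable if > 0.60
```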
R-Square test
The R-square takes values between 0 and 1, with values closer to one indicating a better fit of the data. The following is the R-Square test table. It can be seen from the F test data table that the calculated F value is 109.901 with a significance level of 0.000. Since the probability of 0.000 < 0.05, it can be concluded that the regression model can be used to predict the level of purchasing decisions.
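The R², F statistic and per-coefficient t values reported in this study are standard outputs of an ordinary least squares fit. A sketch with statsmodels is shown below; the random arrays are placeholders for the questionnaire scores, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(122, 3))                     # differentiation, price, positioning scores
y = X @ np.array([0.4, 0.3, 0.3]) + rng.normal(scale=0.5, size=122)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared)                  # R-square of the model
print(model.fvalue, model.f_pvalue)    # F statistic and its significance
print(model.tvalues, model.pvalues)    # per-coefficient t counts and p-values
```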
Classic assumption test
The normality test examines whether the residual values of a regression model are normally distributed or not (Ghozali, 2017). If the dots in the resulting graph approach the diagonal line, it can be concluded that the regression model is normally distributed.
Multicollinearity Test
Multicollinearity is a linear relationship between the independent variables (Ghozali, 2017). The test examines whether the regression model contains a high or perfect correlation between the independent variables. It is assessed from the Variance Inflation Factor (VIF) and tolerance values obtained through the SPSS program, with the following test results.
It can be seen from the data above that the tolerance value of each independent variable is >0.1 and the VIF value is <10, so it can be concluded that the regression model does not contain multicollinearity.
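The same VIF and tolerance figures can be reproduced outside SPSS; a short statsmodels sketch follows (again with placeholder data in place of the survey scores):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

df = pd.DataFrame(np.random.default_rng(1).normal(size=(122, 3)),
                  columns=["differentiation", "price", "positioning"])
X = add_constant(df)
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
tolerance = {col: 1.0 / v for col, v in vif.items()}
# No multicollinearity when every VIF < 10 and every tolerance > 0.1.
```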
Heteroscedasticity Test
The heteroscedasticity test aims to examine whether there are differences in the variance of the residuals between observations. A good regression model does not contain heteroscedasticity (Ghozali, 2009).
CONCLUSION
The results of the study show that Niceso's product differentiation variable influences purchasing decisions. The product differentiation variable (X1) shows a calculated t of 2.885 > t table 1.98137, with a significance level of 0.005 < 0.05. It can therefore be concluded that the product differentiation variable has a significant positive effect on purchasing decisions at Niceso stores. This shows that product differentiation is one of the factors supporting purchasing decisions. These results also support previous research by Yuvira et al. (2021), who found that product differentiation has a significant positive effect on purchasing decisions, and research by Ayodya (2016). These results indicate that the variable is one of the factors that influence purchasing decisions. This is supported by previous research by Purnamasari (2018), which revealed that positioning has a significant effect on purchasing decisions, as well as research by Mauliansyah (2018), which shows that positioning has a significant effect on purchasing decisions.
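The t-table value quoted above is the two-tailed 5% critical value of Student's t for the model's residual degrees of freedom. Assuming df = n − k − 1 = 122 − 3 − 1 = 118 (an inference from the sample size, not stated explicitly in the text), it can be recomputed as follows:

```python
from scipy.stats import t

n, k = 122, 3                    # valid responses, number of predictors
df_resid = n - k - 1             # 118 residual degrees of freedom (assumed)
t_table = t.ppf(0.975, df_resid) # two-tailed alpha = 0.05 critical value, ~1.98
print(t_table)                   # a coefficient is significant when |t_count| > t_table
```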
Based on the results of the research on "The Influence of Product Differentiation, Price and Positioning on Purchasing Decisions at Niceso Stores in South Tangerang", it can be concluded from the analysis that product differentiation, price, and positioning have a significant positive effect on purchasing decisions. The product differentiation variable is known from the research results to have a significant positive effect on purchasing decisions; the price variable likewise has a significant positive effect, and the results for the positioning variable also prove a significant positive effect on purchasing decisions. These results can serve as a benchmark and as study material, adding references for the development of Niceso stores in South Tangerang. By paying attention to product differentiation, the prices of Niceso goods and the positioning of Niceso stores, the company can influence customers' decisions and thereby increase the reputation of Niceso stores, especially in the South Tangerang area. | 2023-02-02T16:37:10.553Z | 2022-12-30T00:00:00.000 | {
"year": 2022,
"sha1": "d809e4d9f16b4e3231b9999c66afd53b3672bd1f",
"oa_license": "CCBY",
"oa_url": "https://journal.privietlab.org/index.php/PSSJ/article/download/177/89",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0a755ee054bc07a56e28f089c6ad68ecc350c62b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
236638251 | pes2o/s2orc | v3-fos-license | Estimation of Carbon Footprint of Residential Building in Warm Humid Climate of India through BIM
In recent years, Asian nations have shown concern over the Life Cycle Assessment (LCA) of their civil infrastructure. This study presents a case study of a residential apartment complex in the southern part of India. The LCA is performed through Building Information Modelling (BIM) software embedded with Environmental Product Declarations (EPDs) of the materials utilized in construction, the transportation of materials and the operational energy use throughout the building life cycle. The results of the study illustrate that cement is the material that contributes most to carbon emissions among the materials examined in this study. The operational stage contributed the highest amount of carbon emissions. This study emphasizes the variation in LCA results based on the selection of definite software-database combinations and manual-database computations. For this, three LCA databases were adopted: the GaBi and ecoinvent databases through One Click LCA software, and the ICE database for manual calculations. The ICE database showed realistic values compared with the GaBi and ecoinvent databases. The findings of this study are valuable for policymakers and practitioners seeking to optimize Greenhouse Gas (GHG) emissions over the building life cycle.
Background
Societies worldwide feel significant unease over emissions of Greenhouse Gases (GHG) driving up the Earth's temperature [1]. One of the predominant causes of climate change and global warming is the increase in carbon emissions [2]. The built environment is accountable for more than 33% of worldwide GHG discharges, and its impact on the environment is exceptionally high [3]. The explanations behind the high GHG emissions are energy utilization during the construction, operation and demolition of structures [4]. Carbon emission estimation for structures has mainly been carried out in developed nations like the United States, Sweden and Germany. India has not been as engaged with the estimation of carbon emissions, owing to the absence of accurate data for all construction materials [5,6].
We are approaching a crucial stage for worldwide endeavors to handle an environmental emergency such as the climate crisis, an incredible challenge of this time. The environmental impacts considered are global warming and primary energy. The number of nations that have vowed to reach Net-Zero Emissions (NZE) by mid-century or not long after keeps growing. An LCA consists of four phases:
• Scope definition and objective;
• Analysis of inventory;
• Environmental impact assessment;
• Results and interpretations.
Motivation
A significant contributor to the country's economy is the investment-led sector; the construction industry in India has played a significant part by adding over 5% to India's Gross Domestic Product (GDP). The Indian built environment sector is facing many difficulties at present, which can be significantly attributed to current labor inefficiency and the absence of data sharing between industry partners. Therefore, the need to adopt Building Information Modeling (BIM) in the construction industry has become vital. BIM fundamentally refers to the utilization of a three-dimensional (3D) project model to improve its planning, construction, operation and maintenance [10]. The BIM-LCA adoption rate differs around the world, with India being the lowest, at an adoption rate of 10-18% compared with 71% of BIM clients in the United States alone, so it may not actually be appropriate to apply past findings and knowledge to the context of India.
The computation of an LCA carbon footprint costs additional time; this time and effort can be decreased by managing BIM information in the LCA. BIM can create rich information sources like material quantities [4]. The capability of BIM is such that a whole-building LCA is facilitated. The physical qualities of a structure are represented in advanced BIM. BIM offers virtual coordination, simulation, augmented visualization and optimization. Some examinations note that in BIM-LCA software, inputting the data is still done manually so that the LCA model can be set up [4]. The various information about construction stages, such as the utilization of machines and materials, is taken from a quantity estimation sheet [5]. BIM holds information regarding building climatic conditions, comfort requirements, operation schedules, and integrated BIM-LCA software tools.
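To make the BIM-to-LCA link concrete, the core computation is a takeoff of material quantities from the model multiplied by cradle-to-gate emission factors. The sketch below uses hypothetical quantities and purely illustrative factors, not actual ICE, GaBi or ecoinvent entries:

```python
# Hypothetical BIM quantity takeoff for one apartment block (kg of material):
quantities_kg = {"cement": 180_000, "steel": 45_000, "brick": 220_000}
# Illustrative cradle-to-gate factors (kgCO2e per kg) -- placeholders only:
factors = {"cement": 0.9, "steel": 1.9, "brick": 0.2}

embodied = {m: q * factors[m] for m, q in quantities_kg.items()}
total_tco2e = sum(embodied.values()) / 1000
print(embodied)
print(f"embodied carbon ~ {total_tco2e:.1f} tCO2e")
```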
LCA Phases and Building Application
LCA is applied to sustainable green building rating and to the energy assessment of buildings for maintainability and primary energy optimization. LCA assesses the effects of raw material extraction, manufacturing, construction, operation, maintenance, repair, replacement and demolition. The stages of LCA are the product stage, transportation stage, operational stage and end-of-life stage, as shown in Figure 1. The product stage incorporates the production of raw materials into products, their transportation and on-site construction. The transportation stage depends on the building location, environmental conditions, the size and life expectancy of the structure, and the material strategies and construction techniques adopted. During the operational stage, energy use for heating, cooling, ventilation, air conditioning, water supply and other equipment is considered. The assessment then addresses environmental and ecological factors and strategies to deal with energy consumption during the operational stage and, finally, the end-of-life stage.
The study carried out by Cheng et al. shows that BIM provides a rich source of information for LCA [11]. The operational stage plays the main part in optimizing the GHG emissions of a structure over the entire life cycle of the building [11]. Xu et al. evaluated exact building energy performance estimated in BIM, which is a powerful innovation [12]. Yang et al. calculated that the operation phase accounts for about 69% of total carbon emissions, while 24% is contributed by building material production, in which concrete contributes 82% and is the most used construction material [13]. Researchers who have compared LCA databases commonly report fundamental gaps in the methodologies used, which sometimes result in significant differences in the assessment results [14,15]. The impact on the LCA results calls for a distinctive approach and the choice of a definite database. Different building structure types also influence the carbon emissions results. Past investigations have covered various building types, including villas, estates, skyscraper homes, office buildings and high-rise buildings. However, there is a lack of research on residential apartment buildings following the ISO 14040 LCA specifications.
LCA Databases
Researchers show that the choice of LCA software impacts the outcomes owing to the various strategies implemented in the software, even when the underlying database is the same and the LCA techniques are exact and comparable [16]. Significant differences in outcomes are attributable to software-database combinations. The choice of LCA tool influences the assessment results: there is inconsistency between the results of two tools when evaluating the principal materials that contribute a large share of carbon emissions, the main source of the difference being the databases built into the two tools [17]. The consistency and uniformity of LCA datasets must be improved in the building sector. Many researchers have discovered that the methodology has major fundamental gaps in its results for the same items, leading to variations in the LCA attributable to software-database blends [15].
Existing literature reports LCAs for numerous products using the GaBi, ecoinvent and Environmental Footprint databases [14,15]. The findings highlight the significance of evaluating product systems with different databases before making important decisions [15]. Researchers found that it is almost impossible to assemble data for every process, so, to reduce the cut-off error, they drew the system boundary in reliance on several databases [18]. The published literature shows a large contrast in LCA results arising from the many variations in the databases used. Numerous components and processes feed into life cycle databases, so the outcomes can be unpredictable and inaccurate.
Some scholars have considered the database system itself: the nature of the data factors influenced the examination results, owing to differences in the software adopting the database, in components, information sources, regions and scope. Life cycle databases massively affect the computation of carbon emissions. Developed countries have relatively complete life cycle databases, but in developing countries like India, LCA data are available only for a few materials, which is inadequate for performing an LCA. Moreover, most existing datasets do not take the characteristics of region, manufacturing and assembly methods into account. Therefore, there is a need to compare regionally available similar databases to assess the LCA results.
LCA of Construction Materials
An assessment of a three-bedroom semi-detached house in Scotland investigated 10 diverse construction materials, with the commonly used primary materials examined exhaustively [19]. Carbon emissions of six Portland-cement concrete mixes were compared during the production and placement phases, focusing on global warming potential impacts [20]. An office building in Finland was investigated through an environmental life-cycle assessment over a 50-year service life [21]. The results show that most of the carbon emissions impact is associated with building material manufacturing, such as steel, concrete and paints, and with electricity use [21].
BIM-LCA Integration
Several researchers have used BIM to perform the LCA of buildings. Information exchange through a Bill of Quantities (BoQ), or schedule-of-quantities sheet, is the most common way of interpreting the combination of BIM and LCA tools. The BoQ is exported from the BIM 3D model data and contains material information and quantities. The information is input to the LCA manually or automatically. The most adopted approach is manual input of the data imported into the LCA tool; this technique is very tedious and error-prone. Plug-in LCA software working with the 3D model is the second most adopted approach [22]. Thus, it is essential to implement the automated method.
Faster results are obtained from BIM-integrated LCA plug-in tools, but their use of generic data is one of their limitations. BIM-LCA integration can be partitioned into two types: the first extracts data such as quantities and materials directly from the BIM model and combines them with accessible life cycle databases to obtain the LCA of the building structure; the second computes the LCA from BIM objects embedded with environmental properties [23]. The first method is the most compatible and is the one used in this study, as sketched below.
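As an illustration of the first integration type, the following minimal Python sketch (ours, not taken from any of the cited tools) joins a BoQ exported from a BIM model with a small CEC lookup table; the file name, column names and coefficient values are hypothetical placeholders.

```python
# Minimal sketch of the first BIM-LCA integration type: read a Bill of
# Quantities exported from the BIM model and join it with a life cycle
# database (here a tiny CEC lookup). File name, column names and CEC
# values are hypothetical placeholders.

import csv

CEC = {  # kg CO2e per kg of material (illustrative values only)
    "cement": 0.94,
    "brick": 0.24,
    "ceramic tile": 0.70,
}

total = 0.0
with open("boq_export.csv", newline="") as f:
    for row in csv.DictReader(f):          # columns: material, quantity_kg
        material = row["material"].strip().lower()
        quantity = float(row["quantity_kg"])
        total += quantity * CEC.get(material, 0.0)

print(f"Embodied carbon from BoQ: {total:,.0f} kg CO2e")
```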
Research Gap
The literature above is prone to errors from multiple computations over several materials, producing confusion in the application of carbon coefficients. Geographically, the LCA literature on structures has drawn on carbon emissions databases and local climatic conditions from Korea, the United States, China, Sweden, Finland and Scotland. These studies relate to tropical desert environments, sub-frigid environments, and regions with blistering summers and cool winters. However, research on the warm and humid climate of India has not been developed further. In spite of several investigations of the environmental effects of structures, it is hard to find data on the LCA of residential buildings in India. The existing literature focuses the LCA of buildings on the GHG emissions and energy consumption of residential, large-scale, hospital and office buildings located in developed countries and for their respective climatic conditions. In India there is a lack of investigation of the LCA of residential buildings due to the non-availability of a life cycle database and the low adoption of BIM in projects.
Existing literature considers all building materials; thus, the computation of results becomes very complicated. Therefore, in this study the distinction between life cycle databases is explained considering a few construction materials, for better computation and understanding of the databases and the corresponding Carbon Emissions Coefficients (CEC) used. In India, the commonly used Reinforced Cement Concrete (RCC) structure is the conventional construction method, unlike those of other countries such as China, the UK, Sweden and Australia. In Europe, the utilization of BIM in construction projects is seen in around 33% of cases [24]. Among emerging economies around the world, India is falling behind in BIM-LCA adoption and is confronting comparable difficulties, including an absence of experienced professionals and significant expense. Hence, this study emphasizes BIM-LCA integration using One Click LCA built with two databases.
From the research gaps mentioned, this study focuses on the following aspects:
1. Evaluation of carbon emissions with four main contributing construction materials.
2. Evaluation of carbon emissions at every stage of the building life cycle with different databases, using software tools interconnected with BIM and with manual calculations.
3. Comparing the results of BIM-LCA and manual computations for conventional RCC residential buildings constructed in the warm and humid climatic conditions of India with different life cycle inventory databases.
System Boundaries and Functional Unit
The unit processes to be included in the LCA are mostly dictated by the system boundaries. Defining system boundaries is partly a subjective choice, made during the scoping phase when the boundaries are initially set. The input and output processes are specified relative to these boundaries [25]. The functional unit has key considerations: mainly the size and structure of the building, the floor area and geographical data. The boundaries considered in the present study are shown in Figure 2. In previous studies, Tally was used for LCA computations; a manual material-mapping process was carried out using the Tally EPD database, and no automated material mapping was performed. Owing to errors and the limited quality of data in Tally, this study adopted One Click LCA, which enables automated material mapping.
The environmental impacts of the different building materials across the life cycle phases are determined with three database blends. First, manual calculations are carried out using the Inventory of Carbon and Energy (ICE) database, and software computations are carried out using One Click LCA with the GaBi and ecoinvent databases [26][27][28]. One Click LCA follows EPDs issued under the ISO 14040 and EN 15804 standards [29]. The present study investigates the carbon footprint of the structure across the four phases of the LCA. For the current study, the building life cycle stages considered are shown in Figure 2 within the system boundaries. In particular, they incorporate the construction stage (including material production, material transportation and on-site construction), the transportation stage (fuel consumption, number of vehicles and quantity of materials transported), the operational stage (including HVAC, lighting, water supply and equipment use) and the demolition stage (destruction and renovation).
Depending on the scope of the LCA, different life cycle stages are either compulsory or optional. At product level, EPD modules A1 to A3 are obligatory under EN 15804 [30], while all remaining stages are optional. However, various certifications and assessment frameworks may restrict the modules specified.
The entire LCA computation is carried out with attention to time efficiency and to the robustness of the LCA estimation. The possibility of making changes within One Click LCA depends on the materials and on the carbon emissions reduction sought. Finally, the estimation strategy for each phase of the life cycle, the combination of material databases and the reasons behind the observed differences in the outcomes are clarified. The methodology for the evaluation of the LCA according to ISO 14040 using BIM is presented in the flowchart in Figure 3.
Building Information Modelling
A residential apartment building (G+2), an RCC structure located at Kakkanad, Ernakulam, Kerala, India, is taken as the case study, shown in Figure 4a. The 2D plans of the constructed RCC building are given in Figure 5. Ernakulam has warm and humid climatic conditions, experiencing a maximum temperature of 38 °C in summer and a minimum temperature of 20 °C in winter. The residential apartment building has a constructed area of 720 m². The building has an RCC flat roof 125 mm thick and concrete masonry walls of 200 mm thickness. Burnt clay brick is also used for the parapet walls. The 2D building plan prepared in AutoCAD, shown in Figure 5, is exported to Autodesk Revit Architecture 2018 to make the 3D model and assign specifications. Autodesk Revit is multi-disciplinary BIM software used to model the architectural and structural environment, realizing floor plans for buildings and houses. Enscape renders the presentation with the click of a button, as demonstrated in the 3D view in Figure 4b.
Life-Cycle Database and Assessment of Building
The LCA is executed stage by stage to determine the carbon emissions effects. The selection of the life cycle database is the important step in the LCA. To perform the LCA manually, the information required is the material quantities at the construction stage; machine electricity consumption, operating hours and fuel consumption at the transportation stage; and energy consumption during the operational stage. The embodied energy and embodied carbon factors included in the ICE database are unique [26]. The GaBi database has by far the largest industry coverage of any life cycle database worldwide [27]. The ecoinvent dataset gives access to unit processes supporting cradle-to-gate inventories covering diverse industrialized regions [28]. ecoinvent contains worldwide industrial life cycle inventory data on energy supply, resource extraction, material supply, chemicals, metals, agribusiness and transport services.
Calculation of Life Cycle Assessment
The equations governing the total carbon emissions are given below:

U_tot = U_con + U_mt + U_ope + U_dem (1)

where U_tot represents the total carbon emissions of all stages; U_con the carbon emissions at the construction stage; U_mt at the transportation stage; U_ope at the operational stage; and U_dem at the destruction stage.
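As a quick worked illustration (ours) of Equation (1), the following Python snippet assembles the four stage contributions using the ICE-database figures reported later in this paper: construction of about 98.65 tCO2e, transport fuel of 323.09 L of diesel at 2.6444 kg CO2e/L, operation of 7240 kWh/yr over an assumed 80-year life at 0.95 kg CO2e/kWh, and demolition taken as 10% of construction.

```python
# Sketch of Equation (1) using the ICE-database figures reported later
# in this paper. All values are in kg CO2e.

u_con = 98_650.0                          # construction stage
u_mt = 323.09 * 2.6444                    # transportation stage (fuel x CEC)
u_ope = 7240.0 * 80 * 0.95                # operational stage over 80 years
u_dem = 0.10 * u_con                      # demolition assumed as 10% of construction

u_tot = u_con + u_mt + u_ope + u_dem      # Equation (1)
print(f"U_tot = {u_tot:,.0f} kg CO2e")
```

The sum reproduces the reported ICE total of roughly 659.6 tCO2e, i.e. 0.916 tCO2e/m² over the 720 m² built area.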
Construction Stage
The construction stage carbon emissions are obtained as the quantity of each material multiplied by its carbon emissions coefficient, with the CEC taken from the database chosen for the corresponding material:

U_con = Σ (W_mp × S_mp) (2)

where W_mp represents the quantity of material and S_mp represents its CEC.
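A minimal sketch of Equation (2) in Python; the material quantities and CECs below are illustrative placeholders, not the values of Tables 1 and 6.

```python
# Sketch of Equation (2): construction-stage emissions as the sum over
# materials of quantity (W_mp) times carbon emission coefficient (S_mp).
# Quantities and CECs are illustrative placeholders.

materials = {
    # material: (quantity in kg, CEC in kg CO2e per kg)
    "cement":          (70_000.0, 0.94),
    "concrete_blocks": (90_000.0, 0.10),
    "bricks":          (30_000.0, 0.24),
    "ceramic_tiles":   (12_000.0, 0.70),
}

u_con = sum(w_mp * s_mp for w_mp, s_mp in materials.values())
print(f"U_con = {u_con:,.0f} kg CO2e")
```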
Transportation Stage
Equation (3) shows the calculation of the transportation stage carbon emissions (U_mt):

U_mt = FC × S_mt (3)

where U_mt represents the carbon emissions generated by material transportation; S_mt represents the CEC of construction material hauling; and FC represents the fuel consumption in litres.
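A one-line application of Equation (3) using the figures given in the manual calculation below (total diesel of 323.09 L and a diesel factor of 2.6444 kg CO2e/L):

```python
# Sketch of Equation (3) with the fuel figures reported in this paper.

FC = 323.09          # total fuel consumption, litres (from Table 4)
S_MT = 2.6444        # CEC of diesel, kg CO2e per litre

u_mt = FC * S_MT
print(f"U_mt = {u_mt:.1f} kg CO2e")   # ~854.4 kg CO2e
```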
Operational Stage
In the operational stage, the carbon emissions produced by electricity consumption are the essential measure to be calculated:

U_o = U_oa + U_ol + U_oe = (N_oa + N_ol + N_oe) × S_ele (4)

where U_oa is the carbon emissions emitted by lighting fixtures; U_ol by the washing machine; U_oe by other building equipment; N_oa is the quantity of electricity consumed by lighting fixtures; N_ol by the washing machine; N_oe by other building equipment; and S_ele is the CEC for electricity consumption.
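Equation (4) collapses to a single multiplication once the total electricity use is known; the following sketch uses the paper's own figures (7240 kWh/yr from Table 5, an assumed 80-year service life, and S_ele = 0.95 kg CO2e/kWh for the manual ICE calculation):

```python
# Sketch of Equation (4) with the paper's figures: yearly electricity
# consumption of 7240 kWh, an 80-year service life, and a grid CEC of
# 0.95 kg CO2e/kWh used in the manual (ICE) calculation.

N_YEARLY = 7240.0    # kWh per year, all end uses combined
LIFESPAN = 80        # years of building service life
S_ELE = 0.95         # kg CO2e per kWh

u_ope = N_YEARLY * LIFESPAN * S_ELE
print(f"U_ope = {u_ope:,.0f} kg CO2e over {LIFESPAN} years")  # 550,240 kg
```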
Destruction Stage
The destruction stage is considered as 10% of the construction stage; the unavailability of demolition data makes this assumption necessary. The equipment used for demolition comprises tools such as a concrete breaker and a demolition hammer.
LCA Through Manual Calculations
The carbon emissions are manually calculated using the ICE database. Before performing the manual computation for the construction stage, the building material quantities are evaluated from the Revit estimation software and presented in Table 1.

In the transportation stage, fossil fuels have a high CEC, so it is necessary to analyse the CO2 emitted in transporting materials to site, which mainly depends on vehicle capacity, vehicle type, mileage and number of trips. Differences in fuel consumption due to changes in terrain are not considered. The shipping mode implemented is main-road shipment. Light Duty Vehicles (LDV) and Medium Duty Vehicles (MDV) are used for transporting materials; their fuel efficiency is presented in Table 2. The carbon emission factor for diesel vehicles is taken as 2.6444 kg CO2e/L.

Number of trips = Quantity of materials / Load carrying capacity (6)

The distance for each material is calculated separately as shown in Table 3. The resultant fuel consumption in litres is the total distance divided by the fuel efficiency, where the total distance is the number of trips multiplied by the to-and-fro distance from factory to site given in Table 3. Table 4 shows the fuel consumption for material transportation; the total fuel consumption (diesel) is 323.09 L.

For the operational stage, electricity is the major component used. The service life span of the building is assumed to be 80 years. The CEC for electricity is 0.95 kg CO2e/kWh [31]. The case study is located in Ernakulam, Kerala, where the summer season is moderately hot and the winter reasonably cold. Electricity is the single dominant resource throughout the operation and maintenance stages [31]. There are 9 apartments in the building, and the electricity consumption of the different floors was collected separately from bimonthly electricity bills. The most commonly used equipment in Kerala households is the refrigerator, which runs longer than any other equipment. Table 5 gives the electricity consumption collected from the bills (kWh per bimonthly bill, with the yearly total per floor):

Floor | Bill 1 | Bill 2 | Bill 3 | Bill 4 | Bill 5 | Bill 6 | Total
Ground floor | 440 | 567 | 548 | 446 | 317 | 421 | 2739
First floor | 320 | 521 | 518 | 298 | 136 | 354 | 2147
Second floor | 360 | 522 | 505 | 276 | 326 | 365 | 2354

Therefore, the total yearly electricity consumption is 7240 kWh.
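The manual transportation workflow (Equation (6) plus the distance and fuel steps) can be expressed compactly; in the sketch below the quantity, truck capacity, one-way distance and fuel efficiency are illustrative placeholders rather than the Table 2 and Table 3 values, while the diesel factor is the paper's.

```python
# Sketch of the manual transportation-stage workflow (Equation (6) and
# the fuel computation). Quantity, capacity, distance and efficiency
# are placeholders.

import math

quantity_t = 120.0         # material to haul, tonnes (placeholder)
capacity_t = 10.0          # truck load capacity, tonnes (placeholder)
one_way_km = 25.0          # factory-to-site distance, km (placeholder)
efficiency_km_per_l = 4.0  # truck fuel efficiency, km/L (placeholder)
CEC_DIESEL = 2.6444        # kg CO2e per litre (from the paper)

trips = math.ceil(quantity_t / capacity_t)   # Equation (6)
total_km = trips * 2 * one_way_km            # to-and-fro distance
fuel_l = total_km / efficiency_km_per_l
u_mt = fuel_l * CEC_DIESEL
print(f"{trips} trips, {fuel_l:.1f} L diesel, U_mt = {u_mt:.1f} kg CO2e")
```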
LCA Using Software
The computation with the two databases, GaBi and ecoinvent, was assigned to the 3D BIM model using the plug-in software One Click LCA. For Revit families and components, One Click LCA performs the material mapping process automatically: based on its EPD database, in addition to the BIM data extraction, it maps the defined Revit components to suitable materials [32]. Revit component data are extracted by One Click LCA, and objects are recognized and mapped to materials based on the component definitions in the Revit model [32]. The CECs adopted in the One Click tool for the major building materials are presented in Table 6. Transport of material goods is considered by road, with a vehicle CEC of 0.40 kg CO2e/ton-km. The CEC for the electricity grid is taken as 1.03 kg CO2e/kWh. The CEC is multiplied by the corresponding material quantity to obtain the construction stage carbon footprint; the outcomes obtained for the individual materials are given in Table 7.
Life Cycle Assessment Results for Software Tools
After inputting all the necessary details into the software, the following results are obtained and presented in Table 8. To calculate the operational energy consumption, Green Building Studio is used, and an electricity consumption value of 17,443 kWh is obtained. The value of energy consumption obtained in Green Building Studio appears to be 58% higher than the original energy consumption values obtained from the actual electricity bills issued by the state electricity board.

The results show that the existing building accounts for 14.95%, 0.13%, 83.42% and 1.50% of total carbon emissions of 659.6 tCO2e for the construction, transportation, operational and demolition stages, respectively, in the manual computation using the ICE database. The percentage contributions of the other two databases are calculated similarly, as shown in Table 9. Figure 6 shows the carbon emissions at all stages for the GaBi database.

Being an RCC structure, cement contributes about 66.6%, concrete masonry blocks about 9.4%, ceramic tiles about 8.1% and bricks about 15.7% of the construction stage total of about 98.65 tCO2e in the ICE database. In the ecoinvent database, for cradle-to-gate impacts, the most contributing material is cement at about 89 tCO2e (around 52.5%), with concrete blocks around 4.3%, bricks 39.4% and ceramic tiles 3.8%. Similarly, in the GaBi database, cement contributes around 64 tCO2e (about 69.3%), concrete blocks 8.4%, bricks 15.9% and ceramic tiles 6.4%. It is evident from Figure 7 that cement is the highest contributor to carbon emissions in all databases.

The GaBi database is taken as the reference database, and the relative differences of the outcomes of the other databases are calculated using Equation (7) and presented in Table 10:

Relative difference (%) = (CE_database − CE_GaBi) / CE_GaBi × 100 (7)

Table 10. Relative difference in findings of carbon emissions.

Life Cycle Database | Carbon Emissions (CE) (tCO2e/m²) | Relative Difference
GaBi | 2.2 | reference
ecoinvent | 2.3 | +4.5%
ICE | 0.916 | −58.3%

This technique gives an index of comparison showing a positive or negative difference from the reference database. The relative difference of −58.3% means the ICE result is less than half of the result given by the reference database, while the LCA result with the ecoinvent database is similar to that of the reference database.
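Equation (7) is straightforward to reproduce; the sketch below uses the per-area results reported in this paper and recovers the tabulated relative differences.

```python
# Sketch of Equation (7): relative difference of each database's result
# against the GaBi reference, using the per-area results (tCO2e/m2)
# reported in this paper.

results = {"GaBi": 2.2, "ecoinvent": 2.3, "ICE": 0.916}
reference = results["GaBi"]

for db, ce in results.items():
    rel_diff = (ce - reference) / reference * 100.0
    print(f"{db:10s} CE = {ce:5.3f} tCO2e/m2, relative difference = {rel_diff:+.1f}%")
# ICE comes out at about -58%, ecoinvent at about +5%.
```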
The current examination reveals large differences between the analysed software-database combinations and the manual calculations, while the outcomes for the software databases GaBi and ecoinvent are comparable. Notably, the outcomes for the ICE database are smaller and lower than those for GaBi and ecoinvent; the results for the GaBi database in One Click LCA, incidentally, are similar to those of the ecoinvent database. The explanation is that the datasets depend on information from different suppliers and geographic locations. The manual method gave more appropriate, precise and accurate LCA assessment results for each material inspected in the project, unlike the BIM-enabled LCA. The CEC also affects the estimation results: because carbon emission coefficients comparable to those of certain materials are absent in India, this study adopted databases from different nations and compared them. Further scientific research on carbon dioxide coefficient datasets should be carried out to tackle this issue.
Since the point of the investigation was to test the consistency of the two chosen methods and three databases, this study does not present a full analysis of the inconsistencies. There are two sources: first, differences between the three datasets in the LCA data for the same materials; second, differences between the closest matching material entry available in the datasets and the actual material used in the structure. In addition, software built for different purposes would probably not have produced exactly the same output even if the datasets were equivalent. At a few points moderate differences between the databases are discovered, and various data components originate differently. Buildings are complex products, which is possibly the main issue in the LCA of structures: dissimilar background information limits the comparability of LCA results, which involve multiple data sources for the evaluation. The assessment results confirm equivalent patterns across the databases in this investigation, and comparable percentage variations between the three datasets are revealed by the outcomes.
Conclusions
This study recommends a life cycle assessment method through the computation of carbon emissions, combining manual LCA theory and the BIM-LCA technique. BIM tools are utilized for 3D model creation, and the calculations are carried out using the ICE, GaBi and ecoinvent databases to ascertain the building carbon emissions. The case study, a residential apartment building in the southern part of India with a warm and humid climate, contributes total carbon emissions of around 0.916 tCO2e/m², which is the realistic value, while the software-database combinations give 2.2 tCO2e/m² and 2.3 tCO2e/m² for the GaBi and ecoinvent databases, respectively. The most carbon-producing construction material in the construction stage is cement. The computation results show that the larger share of the building's carbon emissions is created during the operational stage, which accounts for about 83.4%, 90.2% and 86.1% of the total across the three databases, the ICE figure of 83.4% being the realistic result for Indian climatic conditions. The second largest is the construction stage, representing about 14.95%, 5.8% and 10.16%, respectively.
The investigation has given an understanding of framework analysis for optimizing carbon emissions. The difference in results is due to the methods used for computing the stages. The software-database method is less time consuming than the manual method, but the databases are automatically assigned by the software, which causes errors through improper assignment. As a result, manual computation was performed to properly understand the differences caused by the BIM-LCA-enabled method. It is accordingly inferred that BIM-based LCA should be developed for better execution of results, for optimizing carbon emissions and for future use. The contribution of this research adds to the body of knowledge on green construction and sustainable development. It assists in optimizing greenhouse gas emissions over the entire life cycle of a structure. This optimization is relevant for contractors, homebuyers and governments, who are continually searching for approaches to achieve a low-carbon economy. | 2021-08-01T03:06:54.112Z | 2021-07-14T00:00:00.000 | {
"year": 2021,
"sha1": "6e3eae3378fe9b9f7581f8efc89575eae85a4c97",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/14/4237/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6e3eae3378fe9b9f7581f8efc89575eae85a4c97",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
245907763 | pes2o/s2orc | v3-fos-license | "Unconstitutionality of Patent Extensions"
The sole paragraph of Art. 40 of the Brazilian Intellectual Property Act (Law No. 9279/1996) is unconstitutional. This decision has ex nunc effects. Exempted from the ex nunc effects – with only ex tunc effects – are (i) the judicial actions filed by or on 7 April 2021 (date of the partial upholding of the preliminary injunction in the present case), and (ii) the patents granted with an extension of the term related to pharmaceutical products and processes, as well as healthcare devices and/or materials.
1. The sole paragraph of Art. 40 of the Brazilian Intellectual Property Act (Law No. 9279/1996) is unconstitutional. 2. This decision has ex nunc effects. 3. Exempted from the ex nunc effects -with only ex tunc effects -are (i) the judicial actions filed by or on 7 April 2021 (date of the partial upholding of the preliminary injunction in the present case), and (ii) the patents granted with an extension of the term related to pharmaceutical products and processes, as well as healthcare devices and/or materials.
Summary:
Direct action for a declaration of unconstitutionality. Sole paragraph of Art. 40 of Law No. 9.279/1996. Industrial Property Act. Extension of the period of validity of patents in the event of an administrative delay in the examination of the application. Indeterminacy of the period of validity of the exclusive right to use the invention. Infringement of legal certainty, of the timeliness of the patent, of the social function of intellectual property, of a reasonable duration of the proceedings, of the efficiency of public administration, of free competition, of consumer protection, and of the right to health. Application upheld. Modulation of the effects of the judgment.
1. The protection of industrial property, which is enshrined as a fundamental right in Art. 5(XXIX) of the 1988 Constitution, is limited in time and based on the social interest and the technological and economic development. It is therefore an institution with a constitutionally determined objective, and is not limited to an individual right, as it concerns the society and the development of the Country.
2. According to Art. 40, caput, of Law No. 9279/1996, the period of validity of a patent is 20 (twenty) years for inventions, and 15 (fifteen) years for utility models, calculated from the date of the filing. The Brazilian Intellectual Property Act (IPA) provides for an additional rule in the sole paragraph of the provision: From the date of the granting of the patent, the period of validity may not be less than 10 (ten) years for an invention patent and 7 (seven) years for a utility model patent. Therefore, we can deduce from Art. 40 the significance of two time limits for determining the period of validity of patents: the date of the filing and the date of the granting of the patent.
3. The sole paragraph of Art. 40 establishes a variable term of protection, since this depends on the processing time of the respective administrative procedure at the National Institute of Industrial Property (INPI). Thus, if the authority needs more than 10 (ten) years in the case of an invention or more than 8 (eight) years in the case of a utility model to take a final decision, the entire benefit period will exceed the period of validity provided for in Art. 40, caput.
4. The sole paragraph of Art. 40 of the IPA is said to have been introduced with the aim of compensating for the backlog of patent applications (backlog) at the INPI. This phenomenon has existed since the adoption of Law No. 9279/1996, with which, in alignment with the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement), certain products that were previously not subject to exclusive use were made patentable. Following the implementation of the [TRIPS] Agreement and the adoption of Law No. 9.279/1996 without recourse to the transitional period granted to the developing countries, the federal authorities were unable to cope with the additional burden of new registered products, resulting in a large backlog of applications.
5. Article 33 of the TRIPS Agreement guarantees that the patent will be valid for at least 20 years from the date of filing of the application. The rationale of the [TRIPS] Agreement is that the mere filing creates a presumption in favour of the applicant. The additional period of validity from the date of the granting of the patent which is set out in the sole paragraph of Art. 40 therefore does not follow from the TRIPS Agreement, nor does it find a parallel in other jurisdictions, where the additional exclusive rights follow a logic which is substantially different from that of Brazilian law, since those have a limited application, are limited to certain cases, and do not constitute automatic rights. The instruments that are adopted abroad to extend the exclusive useful life of inventions -in their various forms, durations, and specific rules -contain mechanisms that prevent the validity of the patent from being extended longer than necessary.
6. The sole paragraph of Art. 40 is inappropriate in several respects, since it has the effect of making the period of validity of patents indefinite. The final period of validity of a patent in Brazil is only known from the date of the actual granting of the patent, which can take more than ten years. In practice, this has the consequence that there is no time limit on patent protection in Brazil, leading to the absurd scenario where patents are in force in the country for extremely long periods of time -around 30 years -which exceeds the limits of reason, and which brings the country into conflict in the field of intellectual property law with respect to other jurisdictions.
7. As long as the sole paragraph of Art. 40 is in force, the period between the filing and the granting of a patent will always be indefinite, with or without a backlog at the INPI. This is because the processing time of the Patent Office is an indefinite factor, given the complexity which is associated with the analysis of this type of filing -which is variable and depends on the product and the corresponding technological sector -as well as the complications which may occur during the administrative procedure -some of which are brought on by the applicants themselves in order to benefit from the automatic renewal which is provided for in the rule in question. Even if the INPI overcame the chronic delay in examining patent applications, the unconstitutionality of the sole paragraph of Art. 40 would remain.
8. The extension of the period of validity of the patent, which is provided for in the IPA, not only does not contribute in any way to resolving the chronic backlog of applications which are submitted to the INPI, but also leads to non-compliance with the time limits laid down in Art. 40, caput, since it reduces the consequences of the administrative delay and extends the benefit period for the respective applicants to the disadvantage of other market participants, the public administration, and society as a whole. There is sufficient evidence in the files to suggest that the contested provision increases the backlog, and thus virtually contributes to the emergence of the phenomenon which it seeks to prevent, which constitutes a direct breach of the principles of reasonable duration of proceedings (Art. 5, LXXVIII, Federal Constitution) and administrative efficiency (Art. 37, caput, Federal Constitution).
9. The effects of an extension of the period of validity of a patent on the [Brazilian public] unified health system deserve attention, as this, being one of the largest public health systems in the world and including a network of services aimed at making access to free healthcare universal, requires public funds compatible with the scope and complexity of the system, which, however, encounter financial and budgetary problems typical of a developing country such as Brazil. The commercial dominance provided by the patent over very long periods of time affects the population's access to public health services, as it puts a strain on the system, namely by eliminating competition and by imposing the purchase of pharmaceutical products for a price unilaterally set by the rights owner and increased by the payment of royalties, which are purchased and distributed by the public authorities.
10. The longer the exclusivity period for the holder of the pharmaceutical patent, the greater the burden will be on the public sector and on society, bearing in mind that medicines must be procured on a large scale for the implementation of public health policy. This connection becomes even more serious and urgent in light of the international health emergency resulting from the Covid-19 pandemic. Coping with a crisis of such magnitude requires the management of scarce resources of various kinds, not only those associated with the purchase of medicines with possible indications for the treatment of the disease. Pressure on the healthcare system has increased worldwide, thus increasing the demand for inputs across the whole supply chain.
11. The undue extension of the period of validity of pharmaceutical patents is unjust and unconstitutional, since it privileges private interests to the disadvantage of the community, has an extreme impact on the provision of public health services in the country and, consequently, violates the constitutional right to health (Art. 196 of the 1988 Constitution). The extension of the validity of patents has a direct impact on the country's public health policy and hinders citizens' access to medicines, healthcare measures and services, harming not only competitors and consumers, but above all those who depend on the unified health system to ensure their physical well-being and survival.
12. The vagueness of the period of validity which is laid down in the sole paragraph of Art. 40 of Law No. 9.279/1996 creates legal uncertainty and violates the democratic state governed by the rule of law itself. The predictability of the validity of patents is an essential condition for market participants (applicants, potential competitors and investors) to be able to take rational decisions. The absence of clear rules also leaves room for discretion and the opportunistic and unconventional application of the rules of the game, such as, for example, the strategies used by applicants for the extension of the exclusive right to exploit the products.
13. The time limit which is provided for in Art. 5, Sec. XXIX of CF/88 (Federal Constitution) must be interpreted in the light of the scope of patent protection, which is not limited to the protection of the interests of inventors/patent applicants, but also ensures the use of the invention by society as a whole (i) on the basis of clear rules, and (ii) for a reasonable period of time. Thus, the competitive advantage which is granted to the creators of inventions or utility models should have a specific and foreseeable duration so that not only the beneficiaries but also the other players in the industry can accurately assess the time of the expiry of the period of validity of a patent. In this sense, the contested provision does not comply with the requirement of temporality, since, by linking the validity of the patent to the date of its grant, i.e. indirectly to the time taken by the respective procedure at the INPI, the period of validity of the benefit becomes indefinite, which contributes to the extrapolation of the time limits laid down in Art. 40, caput of the IPA and to a lack of objectiveness and predictability of the whole procedure.

14. A time limit of the patent makes it possible to reconcile the protection of inventions with the fulfilment of the social function of property, since, although it protects the rights of inventors of inventions or utility models for a certain period of time and encourages and remunerates investment in innovation, it guarantees to the rest of the industry and, ultimately, society, the possibility of reaping the benefits of the results of creativity as soon as the rights to use them expire.
15. The sole paragraph of Art. 40 of the IPA permits the postponement of the market entry of competitors and favors the permanence of exclusivity effects for an indefinite and excessive duration, which promotes market dominance, allows the elimination of competition and the arbitrary increase of profits, deepens inequality between economic operators, and transforms what would otherwise be justified and reasonable into something unconstitutional, thus constituting an infringement of the social function of intellectual property (Art. 5, clause XXIX, c/c, Art. 170, clause III), of free competition, and of consumer protection (Art. 170, clauses IV and V).
16. The delay in the examination of patents is a reality that must be combated in order to ensure legal certainty for all market participants. Nothing justifies an administrative examination period of around ten years. I appeal to the Federal Public Administrator (the National Institute of Industrial Property, the National Health Surveillance Agency, and the Secretary of Science, Technology, and Strategic Inputs of the Ministry of Health) to make effective efforts to remedy the shortcomings in the examination of patent applications.
17. The direct action is upheld and the sole paragraph of Art. 40 of Law No. 9.279/1996 is declared unconstitutional.
18. Modulation of effects of the decision by granting effects ex nunc from the publication of the text of the present judgment in order to maintain the extensions of time limits granted on the basis of that statutory provision, and thus to ensure the validity of the patents already granted and still in force by virtue of the application of that statutory provision. Excluded from the modulation are: (i) the judicial actions which were filed by 7 April 2021 (the date of the partial upholding of the preliminary injunction in the present case), and (ii) the patents granted with an extension of the term related to pharmaceutical products and processes, as well as healthcare devices and/or materials. In both of these situations the ex tunc effect is applicable, which will result in the loss of the extensions of the period of validity granted on the basis of the sole paragraph of Art. 40 of the IPA; the period of validity of patents, laid down in Art. 40, caput of Law No. 9.279/1996, must be respected and any concrete effects already arising from the extension of the period of validity of said patents must be protected.
Judgment
After analyzing, reporting on and discussing the proceedings, the Judges of the Supreme Federal Court agree, in accordance with the text of the judgment and the opinion of the reporting judge, Judge Dias Toffoli, by majority of votes, with the dissenting opinions of Judges Roberto Barroso and Luiz Fux (President), to uphold the direct action and the application for a declaration of unconstitutionality of the sole paragraph of Art. 40 of Law No. 9279/1996. Furthermore, the Judges agree by a majority of votes, in accordance with the terms of the rapporteur, Judge Dias Toffoli, to modulate the effects of the decision declaring the sole paragraph of Art. 40 of the IPA unconstitutional, conferring it ex nunc effects as of the publication date of the text of this judgment so as to maintain the term extensions granted to patents on the basis of the legal provision and to preserve the validity of patents already granted and still unexpired due to the application of said legal provision. Exempted from modulation are: (i) the judicial actions filed by or on 7 April 2021 (date of the partial upholding of the preliminary injunction in the present case), and (ii) the patents granted with an extension of the term related to pharmaceutical products and processes, as well as healthcare devices and/or materials. In both cases an ex tunc effect applies, which will lead to the loss of the term extensions granted on the basis of the sole paragraph of Art. 40 of the IPA, respecting the validity period of patents laid down in Art. 40, caput of Law No. 9279/1996 and safeguarding possible concrete effects already produced due to the extension of the period of validity of the said patents. Judges Roberto Barroso and Luiz Fux (President) would modulate the effects of the decision to a greater extent; Judges Edson Fachin, Rosa Weber and Marco Aurélio dissent. | 2022-01-14T05:14:09.113Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "01ff4dd0135c587f192144f4bad1b9df1fae0c2d",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40319-021-01144-9.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "01ff4dd0135c587f192144f4bad1b9df1fae0c2d",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
97527914 | pes2o/s2orc | v3-fos-license | Re-entrance in nuclei: competitive phenomena
Using the shell-model Monte Carlo method, we investigate how temperature and rotation affect pairing properties for nuclei in the fp - gds region. The re-entrance of pairing correlations with temperature is predicted at high rotational frequencies. It manifests through an anomalous behavior of the specific heat and level density.
Introduction
Phase transitions, as seen in superconductivity, superfluidity or ferromagnetism, result from competition between the ordering interaction and thermal fluctuations. Ordered (less symmetric) phases usually reside at lower temperatures and transition to disordered (more symmetric) phases at higher temperatures due to thermal fluctuations. Re-entrance (or partial order) manifests itself in successive phase transitions as a function of temperature or another extensive quantity. For example, re-entrance was discovered experimentally for liquid crystals in 1975 [1], showing a higher-symmetry nematic phase occurring at a lower temperature than the lower-symmetry smectic phase. Interestingly, the discovery of this phenomenon led to many technological applications in electronic displays. Other condensed matter systems that display re-entrant phenomena include ferromagnetic insulators [2] and orbital antiferromagnetism [3].
The properties of nuclei are governed by the nuclear interactions. The singlet-S and triplet-P channels in the nucleon-nucleon interaction generate global nuclear pairing properties. Indeed, a large even-even nucleus will exhibit specific pairing properties, such as a large pairing gap between the ground state and the first excited state [4]. The nuclear Hamiltonian also generates other phenomena such as intrinsic deformation, with the experimental signature that the energy of a band of states increases like E(J) ∼ J(J + 1), where J is the spin of the state. Pairing and deformation are competing phenomena in nuclei. One can heat a nucleus thermally and induce a pairing transition; this transition occurs around a temperature of T = 0.7−1.5 MeV, where thermal fluctuations overcome the effects of pairing [5][6][7][8]. A clearly defined peak occurs in the specific heat at these temperatures. If the nucleus is well deformed, that peak structure can be smeared. Early work by Kammuri in 1964 [9] predicted thermally assisted pairing, describing a local increase of pairing correlations with excitation energy in a rotating nucleus. One can also enhance or suppress structures in a nucleus through cranking, which in practice involves adding to the Hamiltonian a time-reversal-breaking term ωJ_z, where ω is the cranking frequency and J_z is the projection of spin along the rotation axis. As the cranking frequency increases, pairing within the nucleus should decrease due to spin alignment [10].
The objective of this work is to present a short survey of our studies using the shell-model Monte Carlo method to examine how temperature and rotation affect pairing properties in the N = 40 systems 68Ni, 70Zn, 72Ge and 80Zr, which exhibit distinct phase-transitional behavior [10,11]. Interestingly, we have found one system, 72Ge, in which a re-entrant phenomenon occurs: at high cranking frequency there is a small window in temperature where pairing actually increases at a critical temperature [10]. This leads to a specific heat that shows a sharp decrease with temperature. We also present early results of current work investigating the phase-transitional behavior of the odd-odd N = Z nuclei 70Br, 74Rb and 78Y.
The Shell-Model Monte Carlo Method
The shell-model Monte Carlo (SMMC) method [12] allows investigations of nuclei at finite temperature with the relevant degrees of freedom included. This makes it possible to account for the thermal and quantal fluctuations which are important for describing phase transformations in finite-size systems. The SMMC approach describes nuclear observables at finite temperature as thermal averages,

⟨Ô⟩ = Tr_N[Ô e^(−βĤ)] / Tr_N[e^(−βĤ)],

where β = 1/T is the inverse temperature, Tr_N is the many-body trace at fixed particle number N, and e^(−βĤ) is the imaginary-time many-body propagator. For certain classes of residual nucleon-nucleon interactions [13], like the attractive pairing+quadrupole force employed in this work, the evaluation of observables is exact, subject only to statistical errors related to the Monte Carlo integration. As we are concerned here with a description of collective quadrupole and pairing correlations at relatively low energies, we have employed a pairing plus quadrupole-quadrupole Hamiltonian of the schematic form

Ĥ = Σ ε_(j,tz) a†_(jm,tz) a_(jm,tz) − G P†P − (χ/2) Σ_μ (−1)^μ Q_(2μ) Q_(2,−μ),

where Q_(2μ) is the mass quadrupole moment operator with projection μ, built from the second-quantized matrix elements of r² Y_(2μ); a†_(jm,tz) (a_(jm,tz)) creates (destroys) a nucleon of isospin t_z in the orbital jm; ã_(jm) = (−1)^(j+m) â_(j,−m); and the seniority pairing operator is P† = Σ_(jm>0) a†_(jm) ã†_(jm). The details of our SMMC calculations follow those of Refs. [10,11]. Calculations were performed in the complete (0f1p − 0g1d2s) model space for protons and neutrons. The single-particle (s.p.) energies were determined from a Woods-Saxon potential parametrization of 56Ni. Using the parameters G = 0.106 MeV and χ = 0.0104 MeV⁻¹ fm², we reproduce the low-energy spectra of 64Ni and 64Ge. Nuclei are described by valence protons and neutrons outside the closed 40Ca core. In order to generate angular momentum polarization, we consider the Routhian Ĥ_ω = Ĥ − ωĴ_z, where the cranking frequency ω (in units of MeV) enters through the cranking term. Our SMMC calculations were performed on Jaguar, a Cray XT5 at Oak Ridge National Laboratory (ORNL) with 18,688 nodes containing dual hex-core AMD Opteron processors, totalling 224,256 cores. Simulations investigated cranking frequencies ω = 0.0 to 0.5 MeV, and the inverse temperature β was split into N_β slices of width Δβ = 1/32 MeV⁻¹. Each parameter set was run with up to 15,840 statistical samples, requiring a total of 190,080 cores for 4 hours for a full parameter study of a single nucleus.
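To make the thermal-average formula concrete, the following toy Python sketch (ours, not SMMC code) evaluates ⟨Ô⟩ by exact diagonalization in a tiny random model space; SMMC estimates the same traces stochastically in spaces far too large for exact diagonalization.

```python
# Toy illustration of <O> = Tr[O e^(-beta H)] / Tr[e^(-beta H)]
# by exact diagonalization in a tiny Hilbert space. The "Hamiltonian"
# and "observable" are random Hermitian stand-ins.

import numpy as np

rng = np.random.default_rng(0)
dim = 8

A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2          # random symmetric "Hamiltonian"
B = rng.normal(size=(dim, dim))
O = (B + B.T) / 2          # random symmetric "observable"

def thermal_average(O, H, beta):
    """<O> at inverse temperature beta via the eigenbasis of H."""
    e, v = np.linalg.eigh(H)
    w = np.exp(-beta * (e - e.min()))           # shift for numerical stability
    O_diag = np.einsum("ia,ij,ja->a", v, O, v)  # <a|O|a> in the eigenbasis
    return (w * O_diag).sum() / w.sum()

for beta in (0.1, 1.0, 10.0):
    print(f"beta = {beta:5.1f}  <O> = {thermal_average(O, H, beta):+.4f}")
```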
Nuclei in the fp-gds region
To understand how nuclear pairing correlations compete with rotation and temperature, we first apply the SMMC method to known nuclei in the fp-gds region, where sizable pairing correlations dominate the ground state in the absence of rotation. Studying the temperature-induced interplay between deformation and pairing in the four N = 40 isotones $^{68}$Ni, $^{70}$Zn, $^{72}$Ge, and $^{80}$Zr shows that both shape and pairing contribute to the specific heat of the nucleus [11]. Theoretical studies to identify phase transitions in nuclei have focused on the relationship between pairing correlations and an associated peak in the specific heat C_v = dE/d(kT). To assess the magnitude of pairing correlations, it is convenient to employ the expectation value $\langle \hat{P}^{\dagger} \hat{P} \rangle$ of the J = 0 pair operator. Fig. 2 shows the corresponding specific heat calculations for the four N = 40 isotones. Only for the full interaction (Q + P) do peaks appear in the specific heat corresponding to the increase in pairing strength at low temperatures. In the case of $^{68}$Ni, proton pairing correlations are very weak and the peak in C_v is associated with the collapse of the neutron pairing with temperature. For $^{70}$Zn, the two peaks in the specific heat can be associated separately with the proton and neutron pairing transitions to the normal phase, which occur at different rates with temperature. The proton and neutron pairing transitions in $^{72}$Ge occur in a similar range of temperatures, resulting in a pronounced single peak in C_v. In the strongly deformed $^{80}$Zr, there is no sharp maximum in C_v, and the pairing strength, equal for protons and neutrons, is significantly reduced relative to the lighter N = 40 isotones.
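As a minimal numerical illustration of C_v = dE/d(kT), the sketch below differentiates a synthetic energy curve E(T) containing a pairing-transition-like step; the functional form and transition temperature are assumptions chosen only to produce a peak like those described above, not SMMC output.

```python
import numpy as np

def specific_heat(energies, temperatures):
    """C_v = dE/d(kT) by centered finite differences, as one might do
    with thermal energies E(T) tabulated on a temperature grid."""
    return np.gradient(energies, temperatures)

# Hypothetical E(T): a smooth rise plus a pairing-collapse step near T_c ~ 0.7 MeV
T = np.linspace(0.2, 2.0, 200)
E = 0.5 * T**2 + 1.0 / (1.0 + np.exp(-(T - 0.7) / 0.05))
Cv = specific_heat(E, T)
print("peak of C_v near T =", T[np.argmax(Cv)])   # peak marks the transition
```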
Re-entrance in $^{72}$Ge
In the absence of rotation, the SMMC calculations [11] give clear evidence for the breaking of isovector pairs at temperatures around kT_c ≈ 0.6 MeV in $^{72}$Ge, which is reflected by a noticeable peak in the specific heat C_v. To study the effects of rotation, we employ the cranking term from ω = 0.0 to 0.5 MeV. In a model of non-interacting particles, the neutrons in $^{72}$Ge would completely occupy the fp shell. However, correlations induced by the residual interaction make it energetically favorable to scatter neutrons across the N = 40 shell gap, which is about 2.5 MeV. From Ref. [10], we see the effects of rotating the nucleus. Figure 3 shows single-neutron occupations in the wave function of $^{72}$Ge as a function of rotational frequency at two temperatures: kT = 0.47 MeV (slightly above the ground state) and 1.6 MeV (well above T_c). In the ground-state configuration, the total neutron occupation of the gds shell is about 3.5, with about 3 neutrons in the $g_{9/2}$ orbital. Upon rotating the nucleus, the s.p. cranking term $\omega \hat{j}_z$, representing the combined effect of the Coriolis and centrifugal forces, generates angular momentum polarization by lifting the magnetic m-degeneracy of the s.p. states [14]. The ground-state band is identified as isospin T = 1, arising from isovector pn correlations, with T = 0 states favored at higher rotational frequency, or equivalently at higher excitation energy, at about 1.0 MeV.
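The lifting of the m-degeneracy just described can be illustrated with a toy, pairing-free calculation of thermal occupations for a single j = 9/2 orbital, where the Routhian single-particle energies are ε − ωm; all numbers here are illustrative assumptions, not the SMMC occupations of Figure 3.

```python
import numpy as np

def occupations(eps, m, omega, T):
    """Thermal occupations of single-particle states whose Routhian energies
    eps - omega*m show the lifted m-degeneracy under cranking (toy, no pairing)."""
    e = eps - omega * m
    mu = np.median(e)                       # crude fixed chemical potential
    return 1.0 / (1.0 + np.exp((e - mu) / T))

# A j = 9/2 orbital: ten m-substates, degenerate until the cranking term acts
m = np.arange(-9, 10, 2) / 2.0              # m = -9/2, -7/2, ..., +9/2
for omega in (0.0, 0.3):                    # cranking frequencies in MeV
    print(omega, np.round(occupations(2.5, m, omega, T=0.47), 2))
```

At ω = 0 all substates are equally occupied; at finite ω the high-m states fill preferentially, the toy analogue of the spin alignment discussed above.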
To study the effects of rotation and temperature on pn pairing correlations in odd-odd N = Z nuclei, we performed SMMC calculations of $^{74}$Rb and two adjacent odd-odd self-conjugate nuclei, $^{70}$Br and $^{78}$Y. $^{78}$Y has a highly deformed J = 0, T = 1 ground state with considerable quenching of the pn pairing compared to $^{74}$Rb [15], and $^{70}$Br is predicted to have a J = 0, T = 1 ground state [16], which has not been verified experimentally. The J = 0 pairing strength seen in Fig. 6(b) shows contrasting behavior to that seen in $^{72}$Ge: a local increase in the pairing strength occurs at low frequency and low temperature rather than at high frequency. A clear indicator of a phase transition, such as the dip seen for $^{72}$Ge, does not appear for these nuclei, as seen in Fig. 6(a). The high-frequency specific heat exhibits interesting behavior for $^{74}$Rb and $^{78}$Y, crossing the low-frequency specific heat and remaining unusually high. Understanding these new behaviors requires additional study.
Conclusions
In summary, using the SMMC technique we explored the interplay between temperature and rotation in fp-gds nuclei and their effect on pairing and the resulting phase transitions. Our calculations demonstrate the presence of the partial-order phenomenon associated with the reappearance of pairing at high rotational frequencies and intermediate temperatures in $^{72}$Ge. Similar behavior is not currently seen for the odd-odd N = Z nuclei, and further studies are required to understand these results adequately.
"year": 2013,
"sha1": "68f855629a1d6e29c1a1dafbb143016505edc923",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/445/1/012029",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3fe68afd24b6aa80e193f9f260ea1d9cd165dcec",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Upper limits on the Polarized Isotropic Stochastic Gravitational-Wave Background from Advanced LIGO-Virgo's First Three Observing Runs
Parity violation is expected to generate an asymmetry between the amplitudes of the left- and right-handed gravitational-wave modes, which leads to a circularly polarized stochastic gravitational-wave background (SGWB). Exploiting the three independent baselines in the LIGO-Virgo network, we focus on the amplitude difference in strain power characterized by the Stokes parameters and perform maximum-likelihood estimation to constrain the polarization degree of the SGWB. Our results indicate that there is no evidence for a circularly polarized SGWB in the data. Furthermore, by modeling the SGWB as a power-law spectrum, we place an upper limit on the normalized energy density $\Omega_\text{gw}(25\,\text{Hz})<5.3\times10^{-9}$ at $95\%$ confidence level after marginalizing over the polarization degree and spectral index.
Introduction.
The stochastic gravitational-wave background (SGWB) is a superposition of gravitational waves (GWs) from numerous unresolved and uncorrelated sources. These GW sources can arise either from astrophysical processes, like compact binary coalescences (CBCs) [1–3], rotating neutron stars [4–6] and stellar core collapses [7], or from cosmological contributions, such as phase transitions (PTs) [8–11], cosmic strings [12–15] and inflation models [16,17]. The detection of the SGWB would provide a better understanding of the distribution of astrophysical sources and the history of the early Universe, as well as a means of testing theories of gravity. To date, terrestrial laser interferometers like Advanced LIGO [18] and Virgo [19] have been combined to search for the SGWB in their output strain data. Given current noise power, the results show no detectable correlation, and thus upper limits on the energy density of both the isotropic and anisotropic SGWB have been set in [20–22].
The SGWB is widely assumed to be unpolarized, implying that the differently polarized GW modes are treated as statistically identical and independent in the analyses. This assumption is quite reliable for a superposition of signals from clustered sources when parity is conserved, as in general relativity (GR). However, once parity is violated, the asymmetry between the amplitudes of the left- and right-handed GW modes leads to a circularly polarized SGWB. Some processes in the early Universe, such as helical turbulence during a first-order PT [23], can generate a polarized SGWB. Going beyond GR, various theories of gravity, including Chern-Simons gravity [24–27], Hořava-Lifshitz gravity [28,29] and ghost-free scalar-tensor gravity [30], can also give rise to parity violations. In these theories, a particular polarization mode can be enhanced or suppressed through the amplitude birefringence effect [31] during its propagation, resulting in unequal left- and right-handed components in the SGWB. In all, detecting a circularly polarized SGWB would yield a profound discovery for fundamental physics.
In this letter, we perform the first multi-baseline search for a circularly polarized isotropic SGWB in the data of Advanced LIGO-Virgo's first three observing runs. The method for detecting a circularly polarized SGWB with ground-based interferometers was first proposed in [32,33]. This method relies on the detector's nontrivial response to the Stokes V parameter. Even though the SGWB can be detected in a one-baseline analysis, the existence of polarization cannot be affirmed, because a single baseline cannot in principle distinguish the left-handed component from the right-handed one. For this reason, the authors of [34] could only obtain an upper limit on parity violation indirectly, by assuming a fiducial model with a power-law SGWB spectrum and then searching for deviations, because only the correlation of the LIGO Hanford-LIGO Livingston pair was available at that time. Since there are two different polarization modes, at least one more baseline is needed to detect a circularly polarized SGWB. Fortunately, beginning with the O3 observing run, Virgo has been involved in the search for the SGWB. In this letter, the extra outputs of the LIGO-Virgo pairs are adopted for the first time to distinguish the different polarization modes in the data.
Circularly Polarized SGWB. The SGWB is formed as a superposition of plane waves propagating along all possible directions with various frequencies:

$$h_{ij}(t, \vec{x}) = \sum_{A} \int_{-\infty}^{\infty} df \int d^2\hat{n}\; h_A(f, \hat{n})\, e^A_{ij}(\hat{n})\, e^{2\pi i f (t - \hat{n}\cdot\vec{x}/c)}, \quad (1)$$

where $e^A_{ij}$ is the polarization basis tensor. For our purposes, the circular polarization basis A = {R, L} is a favorable choice; it is related to the linear polarization basis by

$$e^{R}_{ij} = \frac{e^{+}_{ij} + i\, e^{\times}_{ij}}{\sqrt{2}}, \qquad e^{L}_{ij} = \frac{e^{+}_{ij} - i\, e^{\times}_{ij}}{\sqrt{2}}. \quad (2)$$

For an isotropic SGWB, the quadratic expectation values of the mode amplitudes are

$$\langle h_R^{*}(f, \hat{n})\, h_R(f', \hat{n}') \rangle = \frac{\delta(f - f')\, \delta^2(\hat{n}, \hat{n}')}{4\pi}\, \frac{I(f) + V(f)}{2}, \quad (3)$$

$$\langle h_L^{*}(f, \hat{n})\, h_L(f', \hat{n}') \rangle = \frac{\delta(f - f')\, \delta^2(\hat{n}, \hat{n}')}{4\pi}\, \frac{I(f) - V(f)}{2}, \quad (4)$$

and I(f), V(f) in (3) and (4) are the so-called Stokes parameters. To reveal their meanings, for a sinusoidal plane wave the concise expressions for both of them are given by

$$I = |h_R|^2 + |h_L|^2, \qquad V = |h_R|^2 - |h_L|^2. \quad (5)$$

In this sense, I(f) and V(f) denote the total intensity of the SGWB and the degree of parity violation, respectively. If V ≠ 0, parity is violated. The normalized energy density of the SGWB is defined by

$$\Omega_{\mathrm{gw}}(f) = \frac{1}{\rho_c} \frac{d\rho_{\mathrm{gw}}}{d\ln f}, \quad (6)$$

where $\rho_c = 3H_0^2 c^2/(8\pi G)$ is the critical energy density, and $\Omega_{\mathrm{gw}}$ is related to the Stokes parameter I(f) by

$$\Omega_{\mathrm{gw}}(f) = \frac{2\pi^2}{3 H_0^2}\, f^3\, I(f). \quad (7)$$

Separating different circularly polarized modes. The detection of the SGWB relies on the correlation analysis [35] of strain data. If two detectors couple together to form a baseline, then during a certain period T a cross-correlating statistic is defined as

$$\hat{C}(f) = \frac{2}{T}\, \tilde{s}_1^{*}(f)\, \tilde{s}_2(f), \quad (8)$$

where $\tilde{s}_{1,2}(f)$ are the Fourier transforms of the time-series strain outputs. $\hat{C}(f)$ is expected to be Gaussian distributed with expectation and variance (in the small signal-to-noise ratio limit)

$$\langle \hat{C}(f) \rangle = \gamma_I(f)\, I(f) + \gamma_V(f)\, V(f) \equiv \Gamma(f), \quad (9)$$

$$\sigma^2(f) \approx \frac{1}{4 T \Delta f}\, P_1(f)\, P_2(f). \quad (10)$$

Here ∆f is the width of the frequency bins, and $P_{1,2}(f)$ are the one-sided noise power spectra of the detectors. Γ(f) is the overlap reduction function of the baseline, whose components can be calculated from the antenna patterns $F^{R,L}_{1,2}$ of the detectors [34] as follows:

$$\gamma_{I,V}(f) = \frac{1}{8\pi} \int d^2\hat{n}\; e^{2\pi i f \hat{n}\cdot\Delta\vec{x}/c} \left( F_1^{R} F_2^{R*} \pm F_1^{L} F_2^{L*} \right). \quad (11)$$

Note that there are three baselines, denoted H-L (LIGO Hanford-LIGO Livingston), H-V (LIGO Hanford-Virgo) and L-V (LIGO Livingston-Virgo), respectively. The overlap functions of the interferometer pairs involved in our analysis are plotted in Fig. 1.
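A small sketch of Eq. (5): computing I, V, and the polarization degree from circular mode amplitudes. The amplitude values are arbitrary examples, not quantities from the analysis.

```python
import numpy as np

def stokes_IV(h_R, h_L):
    """Stokes parameters of a sinusoidal plane wave from its circular
    mode amplitudes: I = |h_R|^2 + |h_L|^2, V = |h_R|^2 - |h_L|^2."""
    I = np.abs(h_R)**2 + np.abs(h_L)**2
    V = np.abs(h_R)**2 - np.abs(h_L)**2
    return I, V

# A right-handed-dominated example; Pi = V / I is the polarization degree
I, V = stokes_IV(h_R=1.0, h_L=0.5)
print("I =", I, " V =", V, " Pi =", V / I)   # Pi = +1 would be fully right-handed
```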
In this letter, we adopt the data released by the LVK Collaboration for the O1–O3 observing runs [36]. The analyzed frequency band is 20–1726 Hz, with a resolution of 1/32 Hz. For the case with more than one baseline, the statistics of the three baselines can be collected into a vector $\hat{C}$, with $\langle \hat{C} \rangle = \Gamma S$, where Γ is the matrix of overlap functions $(\gamma_I, \gamma_V)$ for each baseline and $S = (I, V)^{T}$; the combined likelihood can then be written as

$$p(\hat{C}\,|\,S) \propto \exp\left[ -\frac{1}{2} \sum_f \left( \hat{C} - \Gamma S \right)^{\dagger} N^{-1} \left( \hat{C} - \Gamma S \right) \right]. \quad (12)$$

The noise correlation matrix N is diagonal, with elements given by Eq. (10), and maximizing the likelihood is equivalent to minimizing

$$\chi^2 = \left| \bar{C} - \bar{\Gamma} S \right|^2, \quad (13)$$

where $\bar{C} = \sqrt{N^{-1}}\, \hat{C}$ and $\bar{\Gamma} = \sqrt{N^{-1}}\, \Gamma$. This least-squares problem can be solved by applying a singular value decomposition [37,38],

$$\bar{\Gamma} = U\, \Sigma\, W^{\dagger}, \quad (14)$$

where U and W are unitary matrices and Σ is diagonal with singular values $w_i$ arranged from large to small. This decomposition defines the Moore-Penrose inverse

$$\bar{\Gamma}^{+} = W\, \Sigma^{-1}\, U^{\dagger}. \quad (15)$$

The maximum-likelihood estimator is

$$\hat{S} = \bar{\Gamma}^{+}\, \bar{C}, \quad (16)$$

and the covariance becomes

$$\mathrm{Cov}(\hat{S}) = W\, \Sigma^{-2}\, W^{\dagger}. \quad (17)$$

Here we need to note that the magnitude of $w_i$ is a criterion of matrix singularity. When $w_i \to 0$, the covariance matrix tends to infinity. This happens quite often due to the degeneracy of the overlap functions and glitches in the detectors' frequency band. These insensitive frequency bins not only contribute little to the constraint on the energy spectrum, but also cause trouble in the numerical calculation. Therefore, in practice, the frequency bins with $w_i / w_{\max} < 10^{-10}$ are removed in our analysis. This criterion cuts off about 18.9% of the frequency bins of $\hat{C}(f)$. The estimates of the Stokes parameters I and V in the frequency band 20–100 Hz, corresponding to the most sensitive band of LIGO-Virgo, are plotted in Fig. 2. These results indicate that there is no evidence for a circularly polarized SGWB in the data of LIGO-Virgo's first three observing runs.
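The per-bin estimation pipeline of Eqs. (13)–(17) can be sketched in a few lines of Python. The overlap values, variances, and injected Stokes parameters below are made-up numbers for illustration; only the SVD/Moore-Penrose structure and the $w_i/w_{\max}$ cut mirror the procedure described in the text.

```python
import numpy as np

def estimate_stokes(C_hat, gamma, var):
    """Per-frequency-bin ML estimate of S = (I, V) from baseline statistics
    C_hat, overlap matrix gamma (n_baselines x 2) and variances var, via SVD
    with the w_i / w_max < 1e-10 singularity cut described above."""
    Nsqrt_inv = 1.0 / np.sqrt(var)              # diagonal of sqrt(N^{-1})
    C_bar = Nsqrt_inv * C_hat                   # whitened statistics
    G_bar = Nsqrt_inv[:, None] * gamma          # whitened overlap matrix
    U, w, Wh = np.linalg.svd(G_bar, full_matrices=False)
    keep = w / w.max() >= 1e-10                 # drop (near-)singular directions
    G_pinv = (Wh[keep].conj().T / w[keep]) @ U[:, keep].conj().T  # Moore-Penrose
    S_hat = G_pinv @ C_bar                      # ML estimator, Eq. (16)
    cov = (Wh[keep].conj().T / w[keep]**2) @ Wh[keep]  # W Sigma^-2 W^dagger
    return S_hat, cov

# Hypothetical single-bin inputs for the three baselines (H-L, H-V, L-V)
gamma = np.array([[0.8, 0.1], [0.3, -0.2], [0.2, 0.25]])  # columns: gamma_I, gamma_V
S_true = np.array([2.0, -0.5])
var = np.array([1e-2, 5e-2, 5e-2])
C_hat = gamma @ S_true + np.sqrt(var) * np.random.default_rng(1).normal(size=3)
print(estimate_stokes(C_hat, gamma, var))
```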
Power-law models. By adopting a Bayesian inference technique based on Eq. (12), we can place constraints on various GW spectra. In the LIGO-Virgo frequency band, for most theoretical models, $\Omega_{\mathrm{gw}}(f)$ can be approximated as a power law,

$$\Omega_{\mathrm{gw}}(f) = \Omega_{\alpha} \left( \frac{f}{f_{\mathrm{ref}}} \right)^{\alpha}, \quad (18)$$

where α is a spectral index, assumed to be constant, and the reference frequency $f_{\mathrm{ref}}$ is taken to be 25 Hz in this letter. In particular, cosmic strings and slow-roll inflation are well approximated by α = 0 in the LIGO-Virgo frequency band [12,16,39], CBCs produce a spectrum with α = 2/3 [40], and α = 3 is a fiducial choice because it corresponds to a flat spectrum of I(f), and some astrophysical processes like supernovae can produce such a signal [41]. In addition, we introduce a new parameter,

$$\Pi(f) \equiv \frac{V(f)}{I(f)}, \quad (19)$$

which encodes the parity violation. If Π(f) ≠ 0, parity is violated. The range of Π(f) is [−1, 1], in which the lower and upper bounds correspond to fully left- or right-handed polarization, respectively. For simplicity, Π(f) is taken to be constant in our analysis. First of all, Θ = (Ω_α, α) and Π are taken as free parameters, and I(f) can be derived using Eq. (7). According to Bayes' theorem, the posterior is given by

$$p(\Theta, \Pi\,|\,\hat{C}) = \frac{p(\hat{C}\,|\,\Theta, \Pi)\, \pi(\Theta)\, \pi(\Pi)}{p(\hat{C})}. \quad (20)$$

The ratio of evidences, the so-called Bayes factor, measures the relative plausibility of two hypotheses. Similar to [20,42], we take a log-uniform prior for Ω_α and choose the lower bound to be $10^{-13}$. See Table I for the priors on the other parameters. The posterior distributions of the parameters are shown in Fig. 3. The Bayes factor between the signal and pure-noise hypotheses is log B = −0.2, which indicates that there is no evidence for claiming the existence of such a signal. Besides, we have not found any significant restriction on the polarization parameter Π. At 95% confidence level (CL), the 1D marginalized posterior of Ω_α gives an upper limit of $5.3 \times 10^{-9}$ on the strength of the SGWB at 25 Hz. For some theoretical models, α can be fixed; therefore, we also provide constraints on the models with α = 0, 2/3 and 3, respectively. Our results are given in Fig. 4. The left panel illustrates the 95% exclusion contours in the Ω_α–Π plane. The 1D probability distributions of Ω_α and Π are shown in the center and right panels. To demonstrate how the constraints on the SGWB strength depend on the GW polarization, the upper limits on Ω_α for the unpolarized (Π = 0) and fully polarized (Π = ±1) cases are listed in Table II. Due to the correlation contribution from V(f), we are able to put tighter constraints on a polarized SGWB, in particular for α = 3. After marginalizing over the polarization degree, the upper limits for the power-law spectra with α = 0, 2/3 and 3 are $3.6 \times 10^{-9}$, $2.3 \times 10^{-9}$ and $4.5 \times 10^{-11}$ at 95% CL, respectively.
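A minimal sketch of the model side of this inference, combining Eqs. (7), (12), (18) and (19) into a Gaussian log-likelihood. The fiducial H0 value and the function layout are assumptions, and a real analysis would feed this likelihood into a sampler with the priors of Table I.

```python
import numpy as np

H0 = 2.2e-18  # assumed fiducial Hubble constant in s^-1 (~68 km/s/Mpc)

def omega_gw(f, omega_ref, alpha, f_ref=25.0):
    """Power-law SGWB spectrum, Eq. (18)."""
    return omega_ref * (f / f_ref)**alpha

def model_signal(f, gamma_I, gamma_V, omega_ref, alpha, Pi):
    """Expected cross-correlation per bin: gamma_I*I(f) + gamma_V*V(f),
    with I(f) obtained by inverting Eq. (7) and V(f) = Pi * I(f), Eq. (19)."""
    I = 3.0 * H0**2 / (2.0 * np.pi**2) * omega_gw(f, omega_ref, alpha) / f**3
    return gamma_I * I + gamma_V * Pi * I

def log_likelihood(C_hat, var, f, gamma_I, gamma_V, omega_ref, alpha, Pi):
    """Gaussian log-likelihood of Eq. (12), up to an additive constant."""
    mu = model_signal(f, gamma_I, gamma_V, omega_ref, alpha, Pi)
    return -0.5 * np.sum(np.abs(C_hat - mu)**2 / var)

# Toy usage on a single frequency bin of one baseline (all numbers invented)
f = np.array([25.0]); gI = np.array([0.8]); gV = np.array([0.1])
C_hat = model_signal(f, gI, gV, 1e-9, 2/3, 0.0)
print(log_likelihood(C_hat, np.array([1.0]), f, gI, gV, 1e-9, 2/3, 0.0))  # 0 at truth
```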
Conclusions.
In this work, the three independent baselines in the LIGO-Virgo network allow us to provide the first constraint on a circularly polarized isotropic SGWB by adopting maximum-likelihood estimation and Bayesian statistics. Our results indicate that there is no evidence for a polarized SGWB in the data of Advanced LIGO-Virgo's first three observing runs. At present, the ground-based detectors are not sensitive enough to claim a detection of the SGWB. Moreover, the sensitivities of the LIGO-Virgo pairs are worse than that of the LIGO Hanford-Livingston pair. This further reduces the efficiency of the detector network, and we are unable to obtain a significant constraint on the parity parameter Π(f). However, the extra baselines from the LIGO-Virgo pairs do help to reduce the degeneracy between Ω_α and Π. Therefore, a lower upper limit on the strength of the polarized SGWB, compared to the unpolarized SGWB [20], is obtained in our analysis.
"year": 2022,
"sha1": "c684aa3dba0e411b260bd58523cc7af749436272",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c684aa3dba0e411b260bd58523cc7af749436272",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Homelessness in Higher Education is not a Myth: What should Educators be doing?
This paper investigated homelessness and housing insecurity on college campuses in the United States. Using a mixed design framed by resiliency and social justice theories, this research sought to assess the barriers encountered by students experiencing homelessness while in college and the interventions, if any, available to them. The data analysis found three recurring themes: education regarding homelessness, resource development, and the elimination of barriers. Research from this study underscored the need to develop interventions that assist students and support retention. Additionally, the development of interventions allows faculty and staff to advocate for students while helping the university meet enrollment and graduation goals.
Statement of the Problem
A significant gap in the research exists, as few studies on homeless college students have been conducted (Crutchfield et al., 2016). Colleges have a stake in the success of their students and need to create interventions, grounded in data, to help students overcome barriers to success. Broton and Goldrick-Rab (2016) stated that institutional practices and policies contribute to the issue and need to be addressed at the local level, thus eliminating those barriers and creating supports to enhance the chances of retention and success. Research is in its infancy "when it comes to explaining how students experience these challenges, when and where they obtain help, and how needs insecurity affects their schooling" (Broton & Goldrick-Rab, 2016). The problem, simply stated, is that institutions have few, if any, resources in place to assist students experiencing housing insecurity. Higher education personnel know little about how best to support students that are homeless within the context of financial aid regulations and the constraints of institutional knowledge and budgets.
Significance of the Problem
Homeless students at the post-secondary level deserve a level playing field as they work to obtain a degree. Creating interventions to enhance persistence and completion while providing for basic needs is in both the institutions' and the students' best interests (Chapalot et al., 2015). Empowering students to utilize available resources and creating a comprehensive approach to issues that limit educational success, such as housing insecurity, builds trust with the community and combats the problem of access (Crutchfield et al., 2016).
Higher education institutions should be a safe zone for students to live and grow academically; still, the lack of safe shelter or resources does not align with that ideal in practice at many schools. Advocacy groups have been successful in helping homeless and unaccompanied youth gain access to support through the aid process, but that legislation stops short of meeting the housing needs of these students (Crutchfield et al., 2016). Students have access to resources, but not enough to solve the issues of housing and food security (Broton & Goldrick-Rab, 2016). Higher education institutions need to address these issues in the context of providing services and linkages to the financial aid process for homeless students (Crutchfield et al., 2016).
Relevant Research
Homeless youth face significant obstacles in accessing and completing higher education (Huang, Fernandez, Rhoden, & Joseph, 2018). According to Huang et al. (2018, p. 209), homeless youth are at an "inherent disadvantage in an increasingly competitive and education nation." The instability of living arrangements causes stress and a focus on basic needs, thus creating roadblocks as students navigate postsecondary education (Chapalot et al., 2015; Crutchfield et al., 2016; Hallett, 2010; Huang et al., 2018). Many students struggle with the basic pieces of the college process, including applications, college visits, and application fees, due to their lack of parental or adult support and financial stressors (Crutchfield et al., 2016; Huang et al., 2018). Hallett (2010, p. 12) asserted that housing instability "shapes access to college as well as how students participate in the educational process once they are admitted". Admitted students require monetary deposits to hold their spot in the class, housing deposits are required for on- or off-campus housing choices, and travel to the college for orientations, course scheduling, and financial aid assistance can prove difficult for those without proper guidance "to fully engage in the educational process" (Hallett, 2010, p. 12). Support in developing study skills, decision-making skills, and balancing academic workloads with working are obstacles that homeless students face due to the lack of stable homes and parental supports once in college (Crutchfield et al., 2016; Hallett, 2010; Huang et al., 2018). Few colleges have programmatic interventions to help homeless students, partially due to the low numbers of homeless students that enroll at any one institution (Hallett, 2010). Nevertheless, the need for supports is critical to academic success and persistence toward a degree (Broton et al., 2014; Hallett, 2010). Students receiving supports described help from financial aid staff and others that built the connections to enhance the academic experience and eliminate non-academic stressors, which assisted them in staying in school (Skobba et al., 2017). Obtaining a degree or other credential is critical in moving a homeless student beyond their current situation and out of the poverty cycle (Chapalot et al., 2015; Crutchfield et al., 2016; Hallett, 2010; Skobba et al., 2017).
Conceptual Framework
Two theoretical frameworks guided understanding of this study and provided context to the issue. Resiliency theory (Masten & Odbradovic, 2006) framed how students were impacted by homelessness and how the institution can support the resiliency of this population. Furthermore, the tenets of social justice and critical social theory framed why this population of students needs support to be retained and persist in higher education.
One area of the research review concerned the inequalities in access to education, a social justice concern lived daily by those who are homeless (Ausikaitis et al., 2015). Despite the challenges to educational access, especially stable housing, the student experience was investigated initially through retention theory and then through resiliency theory.
A review of resiliency theory and its wide-ranging foci formed a more robust conceptual framework for this research. Since resiliency theory is so broad, the researchers narrowed the focus to the concept of adaptation, meaning how individuals create their path despite setbacks (Masten & Odbradovic, 2006). Research by Masten and Odbradovic (2006), Masten et al. (2014), and others specifically explored the homeless student and the development of resiliency. Investigations of post-secondary resilience among the homeless, however, are few. Therefore, the researchers carefully examined resiliency concepts surrounding student development to provide a conceptual framework for best practices in promoting student success and resiliency (Masten & Odbradovic, 2006; Masten et al., 2014).
A study by Watt, Norton, and Jones (2013) applied resiliency theory to foster youth and how they navigate the systems within higher education. In working with these students, social workers began to view the individual students as survivors "who had developed unique skill sets and utilized a wide array of resources" (Watt et al., 2013). Recognition of those experiences, and seeing them as assets, supports the resiliency of the student and a positive approach to interventions instead of a deficit model (Watt et al., 2013). Moving away from a caseworker model and using resiliency as a means of empowerment disrupts the negative stigmas that foster youth and others, such as the homeless, experience in society (Saleebey, 2000; Thomas, 2000; Watt et al., 2013). Students who felt included, especially those having experienced hardships, tended to be retained at a higher level and to overcome negative self-perceptions (Thomas, 2000; Watt et al., 2013). The interventions created to support resiliency can help advance students' educational goals and provide access to a better future. Access to education provides a mechanism to "build and maintain a middle-class lifestyle," which can change the trajectory of an individual (Chapalot et al., 2015). Eliminating barriers, including housing insecurity, can help many students who are mostly in marginalized groups (Brown, 2006). Thus, the tenets of resiliency theory became a conceptual framework for this inquiry, as access and persistence can be enhanced by support of resiliency grounded in a review of social justice theory.
Homeless students endure many challenges in accessing higher education (Chapalot et al., 2015). Similarly, students who fall into the categories of low income, homelessness, or other marginalized groups experience issues with equal access (Brown, 2006). Difficulties in accessing education create a "predetermined mold designed for school failure and social inequity" (Brown, 2006, p. 701). To facilitate student success and the elimination of barriers that perpetuate social inequities, educators need to create spaces for students to get the help and resources that they need to be retained and persist (Ausikaitis et al., 2015; Brown, 2006; Chapalot et al., 2015). Through addressing the issues of homeless students and providing interventions, the system can change "to allow for meaningful inclusion of everyone, particularly those who are consistently disadvantaged or marginalized" (Ryan, 2006, p. 6).
Change within the current higher education system must occur by understanding, and then reacting to, those barriers that do not allow for the inclusion of disadvantaged groups. Ryan (2006) framed social justice in ways that highlighted the need for integration and for creating a path for all to be involved in standard social practices, such as schools and communities. Higher education is not exempt from the challenges of homelessness. Through the two conceptual frameworks of resiliency theory and social justice theory, the researchers examined the creation of best practices, with the ultimate goal of eliminating obstacles as homeless students work to obtain a degree.
Research Design and Research Questions
Critical paradigms enabled the researchers "to promote the deconstruction and critique of institutions, laws, organizations, definitions, and practices for power inequities and inequities of effectiveness" (Guido, Chavez, & Lincoln, 2010). Through careful analysis of the data, the critical theory lens allowed the researchers to engage in what Creswell (2014) referred to as "transformational advocacy". Creswell (2014, p. 189) defined mixed methods research as: an approach to research in the social, behavioral, and health sciences in which the investigator gathers both quantitative (closed-ended) and qualitative (open-ended) data, integrates the two, and then draws a conclusion based on the combined strength of both sets of data to understand research problems.
The type of mixed design for this inquiry was what Creswell (2014) denoted a convergent design, whereby the quantitative and qualitative data are collected concurrently, analyzed separately, and then the results compared. Using the qualitative research approach enabled the researchers to develop descriptions and themes from the qualitative data while also providing quantitative analysis to triangulate the data (Creswell, 2014).
Furthermore, data gathered via interviews, document research, and focus groups allowed for the examination of concepts that enhances the development of a theory of fundamental social processes (Merriam & Tisdell, 2016; Starks & Trinidad, 2007). Thus, resiliency and social justice theory informed the inquiry as the basis for assessing the effectiveness and development of interventions to eliminate barriers for homeless youth in higher education. The following research questions guided this investigation: What barriers exist in higher education for homeless students? And how can institutions create policies and procedures to support the homeless student population in the context of resiliency and social justice?
Method
Purposeful sampling provided a mechanism for the researchers to answer the questions posed for the study by directly targeting participants who could give the best information to guide the study (Creswell, 2014). Initial selection of the participant group began with a review of the COSUAA membership. COSUAA is comprised of large, public institutions from across the United States. Using the membership of COSUAA (n=130) as the survey population allowed for data collection from similar institutions across the country.
Additionally, a site selected from the COSUAA membership was the setting of interviews and focus groups, enabling the researchers to collect qualitative data. Regional University (RU) was chosen because of its proximity to the researchers and its socioeconomically diverse undergraduate population. Interviews with key administrators (n=3), including the Director of Student Financial Assistance, the Director of TRiO Services, and the Director of Housing, occurred on campus. The identification of these administrators was purposeful, as they are involved with students daily.
Furthermore, to triangulate the data gathered from document analysis, the survey (n=130), and the interviews (n=3), a focus group (n=5) was conducted. This group, consisting of a housing representative, financial aid counseling staff, and academic advisors, provided varied perspectives concerning students and their challenges in and outside of academics. The selection of the focus group permitted data collection from those who interact with students on a one-to-one basis and provided rich, thick descriptive data for the study.
Using the voices of those working directly with students alongside the documents and survey data assisted the researchers in creating a narrative to answer the research questions and identify best practices to support resiliency (Creswell, 2014).
Focus Group
Conducting research using a focus group allowed the researchers an understanding of "how people feel or think about an issue, idea, product or service" and is used to gather opinions (Krueger & Casey, 2015, p. 6). Furthermore, using a focus group provided for the creation of a rich, thick narrative that added to the data collected from the survey and interviews (Creswell, 2014; Merriam & Tisdell, 2016). In addition to creating a narrative, the focus group provided a mechanism for the researchers to gain an understanding of the issues surrounding homeless youth. This group consisted of people who are closely involved with the creation or implementation of policies and procedures to help the homeless population. Krueger and Casey (2015) cited the use of focus groups as vehicles to help with decision-making and to guide program, policy, or service development. The focus group included financial aid administrators, student success personnel on campus, and student housing personnel (n=5). The smaller size also created an environment for interactive discussion, which can give a different type of data not collected in an individual interview (Hennink, 2014). Within the group setting, everyday social interaction was observed and data gathered while participants engaged in discussion with each other (Hennink, 2014).
The focus group questions explored the treatment of homeless students in higher education, paying particular attention to the financial aid process. Open-ended questions allowed the researchers to collect meaningful, descriptive data (Hennink, 2014; Krueger & Casey, 2015; Merriam & Tisdell, 2016). Member checking was employed to ensure that the researchers captured the interpretation of words and ideas accurately (Merriam & Tisdell, 2016). Using member checking allowed the researchers to establish content validity and strengthen the credibility of the research, which is crucial when using qualitative data collection (Creswell, 2014).
Interviews
In addition, the researchers conducted interviews (n=3) to further triangulate the findings. Interview subjects included the Director of Student Financial Assistance, the Director of TRiO Services, and the Director of Student Housing at Regional University. The first interview was conducted face-to-face, while a second, follow-up interview was conducted by telephone. Mertens (2005) claimed three benefits of conducting interviews: depth of information, relationship development, and participant flexibility. These interviews included open-ended and semi-structured questions to establish an understanding of the construct of homelessness and to frame how participants shaped their experiences (Merriam & Tisdell, 2016). To ensure accuracy, interviews were audiotaped, allowing for engaged listening by the researchers during the interviews as well as providing a mechanism for reflection and transcription of the data. Utilizing interviews assisted the researchers in strengthening data collection and "understanding the lived experience of other people and the meaning they make of that experience" (Seidman, 2013, p. 9).
Online Questionnaire
An online survey was administered to the membership of COSUAA, allowing for the capture of national perspectives surrounding the issue of homeless students and retention, with a return rate of 50%. Results of the survey were analyzed using descriptive statistics, and the portion consisting of open-ended questions was coded and analyzed in the same manner as the focus group and interviews. The descriptive statistics gleaned from the survey results were included in the analysis, which provided multiple sources of data (Yin, 2014). To evaluate the reliability of the survey (Creswell, 2014), the researchers utilized the test-retest reliability coefficient, which determines the degree to which scores are consistent over time. The survey was administered to the same group of 19 educators within a three-week interval. The scores from the two administrations were then correlated using the Pearson product-moment correlation coefficient (r) to establish the stability, and hence the reliability, of the survey. A high coefficient of stability was the criterion for good test-retest reliability. For this survey, the correlation between the test and retest was .486, which is significant at the .005 level according to the SPSS analysis, indicating the reliability of the survey.
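For readers who wish to reproduce this kind of stability check outside SPSS, a brief Python sketch using scipy.stats.pearsonr is shown below; the score lists are hypothetical stand-ins for the 19 educators' responses, not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical test and retest scores for 19 educators (three weeks apart)
test   = [4, 3, 5, 2, 4, 3, 4, 5, 2, 3, 4, 4, 3, 5, 2, 3, 4, 3, 4]
retest = [4, 2, 5, 3, 4, 3, 3, 5, 2, 4, 4, 3, 3, 4, 2, 3, 5, 3, 4]

r, p = pearsonr(test, retest)   # coefficient of stability and its p-value
print(f"test-retest r = {r:.3f}, p = {p:.4f}")
```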
Document Analysis
The researchers examined documents from the institution, along with written and online resources, to review and reflect on issues surrounding homeless youth and higher education. Public records gave the researchers the ability to understand things that "had taken place before the study began" (Merriam & Tisdell, 2016, p. 164). The database of COSUAA members further provided the researchers with an opportunity to review individual schools' websites for information on homeless student resources, if any. The following documents from the institution were analyzed: financial aid policies and procedures as they relate to homeless students as defined by the FAFSA. Other documents included websites designed as resources for those in housing crisis, as well as links to outside agencies that the school uses for referrals. As Merriam and Tisdell (2016, p. 246) noted, "Documents of all types can help the researcher uncover meaning, develop understanding, and discover insights relevant to the research problem".
Setting
The university setting where the qualitative research was conducted was purposefully selected from the member institutions of COSUAA. COSUAA is an advocacy and training organization for the financial aid profession and consists of large, public institutions with over 10,000 students. The institution was chosen based on its membership in COSUAA, the homeless student resources identified on its website, the large percentage of Pell recipients in its student body, and its close proximity to the researchers. The concept of purposefully selected sites is based on what will "best help the researcher understand the problem and the research questions" (Creswell, 2014). Furthermore, purposeful sampling allowed the researchers to select a site for a richer, more in-depth study to better understand the problem and answer the research questions (Creswell, 2014).
Data Analysis
Data obtained from the survey were tabulated and analyzed using SPSS Version 25. The Likert-scale questions were measured on a scale that ranged from one to five, with five indicating "extensive". The mean and standard deviation for each item were determined to establish the frequency of responses. Using descriptive statistics provided summaries of some of the questions, and the data were illustrated with charts and bar graphs to visualize the frequency of responses, measures of central tendency, and measures of variation (Fink, 2017). Questions that asked for counts and percentages of students to describe the schools' populations were also tabulated to show the distribution and regional similarities, if discovered.
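The descriptive tabulation described here can also be reproduced outside SPSS; the short pandas sketch below computes the mean, standard deviation, and frequency counts for one hypothetical five-point Likert item (the responses are invented for illustration only).

```python
import pandas as pd

# Hypothetical responses to one five-point Likert item (1 = none, 5 = extensive)
responses = pd.Series([1, 2, 2, 3, 1, 4, 2, 5, 3, 2, 1, 3, 2, 4, 2])

print("mean:", responses.mean())                  # measure of central tendency
print("std: ", responses.std())                   # measure of variation
print(responses.value_counts().sort_index())      # frequency of each scale point
```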
Qualitative data analysis included the organization and cross-examination of data in ways that enable researchers to see patterns, identify themes, and make interpretations (Merriam & Tisdell, 2016). According to Merriam and Tisdell (2016), this organization of the data is done as the researcher compares similar themes and examines how these relate to the variables within the sample population. The researchers used the traditional approach of coding as themes emerged from the data analysis (Creswell, 2014). The creation of essential categories, according to Merriam and Tisdell (2016), allowed for quick identification of the information as one worked through the analysis of the data. Emerging themes were triangulated with the other data collection tools to help guide the research.
Results
Below are the findings from the data analysis for the two research questions. The quantitative and qualitative data were analyzed, along with documents from the institution.
Q1. What barriers exist in higher education for homeless students?
Qualitative data from participants in the three interviews, the focus group (n=5), and the survey (n=50) primarily answered this research question. The survey provided a quantitative benchmark to assess the perception of homelessness on individual campuses. One survey question asked, "How much awareness is there on campus to the needs of students that are homeless?" The descriptive statistics displayed in Figure 1 illustrated that most universities surveyed did not have much awareness of the issue of homelessness or did not view it as prevalent. The lower end of the scale encompassed 27% of the 50 respondents. Specifically, the survey question asked if there is a general awareness on campus and revealed little to no awareness. Conversely, the participants of the interviews and focus group cited specific issues related to students' needs. Financial resources were identified as a barrier for students that are homeless, a thread running through the interviews and the focus group. Directors 2 and 3 both stated that finances are an issue, especially as they relate to housing. "Cost is an issue," stated one focus group participant; "when we are on breaks, students that are homeless don't have a place to go, and if they decide to stay on campus over break, there is an additional fee." Director 3 explained that they could provide housing, but it comes with a cost, and "we don't provide differential pricing based on need." Students find themselves needing a place to stay, but the university cannot provide free placement, and many students state, according to Director 3, that "nobody wants to go into debt." From the university's standpoint, Director 3 stated, we work "diligently to be as equitable as possible because everything is an auxiliary, and everything is on the backs of other students." Providing discounts to one group based on a characteristic, such as need or housing security, impacts others. As a housing office, "we are unfortunately in a position to evaluate and make judgments about what is stress and a barrier versus exceptions to policy and that kind of stuff." Director 3 discussed the barriers and issues, but at the end of the interview stated, "It saddens my heart, but I can't…I don't know how to fix it".
Focus group participants had similar responses to the barrier question. The group discussed costs at length. Focus group participant A stated, "How are they going to pay for it? Students may have necessities like a shower and a couch, but how do I get the next meal?" Another participant explained how the barriers of documenting homelessness for financial aid and the FAFSA, together with cost, affect students: Mainly students are coming from low income, coming from first-generation backgrounds, especially students who are experiencing homelessness. This is the competing variable. If I don't continue with my education, if I don't have enough aid, if I can't prove that I'm independent, then I will be homeless, and I can't go back to that life. Therefore, it is a constant balance of whether or not they're able to maintain employment or get enough scholarships and also do well in their classes to make sure that they can persist.
The Academic Success Coaches in the focus group talked about the whole college student experience, noting how some students had to choose working over attending class, or paying for food over purchasing a book or materials for a class. Students also encounter relationship issues that have led to homelessness. One example is a woman who shared an apartment with a boyfriend. The coaches talked about how, when he left her, she was left with all the rent, could not afford the payment, and "stopped attending class while she was trying to look for other options." Director 1 spoke of safety issues in relationships that caused students to leave stable housing or to have no safe place to go on breaks or summer vacation. "I have students who are in a transient situation; they are in a temporary situation where shelter is not fixed." Some students, Director 1 shared, "have aged out of the foster care system and thus have no place to go." He went on to describe other families having a change in housing back at home, too.
The structures of higher education still think and act as if most students coming here are middle income or better, from two-parent households, and Johnny's room is the same as when he left it, with his trophies still there. He can come back at any time, and an apple pie is waiting for him. And the reality is that more and more students do not have that situation. They're coming, especially the students we see [in TRiO], from rental properties, which are by their nature short term. Students in the situation described above are not afforded a space in a house, as their parents or support systems simply do not have the room to shelter them any longer. Barriers exist because students experiencing homelessness do not fit the stereotype: "the traditional thing you might say is oh, someone looks like maybe they are dirty and haven't slept well." However, homelessness is not, as Director 1 stated, "how it is portrayed in the media or Hollywood." When asked about the prevalence of homelessness at RU, Director 1 stated, "more than people think, and I think the reason is that people have a lasting image in their mind to what homelessness looks like. The panhandler on Cleaver Boulevard, right?" It often happens when students "come to the University and whose parents then downsize or relocate to control expenses," and this means "the student does not have a home per se besides the one that they have temporarily while they're in classes." Awareness of the issue creates a barrier when the conceptualization of homelessness does not match what the student is experiencing.
Q2. How can institutions create policies and procedures to support the homeless student population in the context of resiliency and social justice?
In analyzing data to answer this question, the researchers asked participants in the focus group and interviews about their definition of resiliency. The overarching theme was about overcoming obstacles and continuing toward one's goal. One respondent stated, "Someone flexible, someone who bounces back, is resilient." Director 3 explained resiliency as "being emotionally and cognitively and physically able to move on. You need to have the capacity to stick with". Focus group participants illustrated resiliency as the "overcoming of a challenge or hardship" and "having a goal that you stick to no matter what happens, whatever gets in your way keep pushing forward through adversity." After gathering the definitions, the researchers asked how higher education could support resiliency. A focus group participant explained that supporting resiliency was about helping students through the One Person campaign, modeled on suicide prevention work in Ireland.
They have a campaign that they use in Ireland. It is something that I brought with me to this job. It's the one-person campaign, so as long as you have the one person, and so everybody gets their person when they come into higher education. So, I try my best to be that one person for all my people. Therefore, it's not just about mental health, although that's a massive component of it. But making sure that they have somebody that they know cares, somebody they know, and somebody they trust will be delicate with their situation. Be sensitive to their feelings and then also be honest and be consistent with them.
Creating a bond with students is difficult for academic success counselors, as they have large caseloads. One focus group participant discussed that "it can be tough with a large number of students, but letting know that you're the person that they can open up to and share things in that you are here to be a resource for them." It is difficult because students have complex situations, but the focus group was going to have a course in mental health first aid that "will be good to have tools to help students where they are." Beyond tools to help students, all participants were asked about resources within the institution and community to assist students that were experiencing homelessness. RU has a food pantry on campus called Campus Cupboard and has some community partners, such as Catholic Charities and local shelters, to which staff could refer students in times of need. Survey participants mentioned emergency grant assistance and United Way as community partners, and 40 out of 50 respondents had a food pantry on campus.
No survey respondents gave feedback on the question, "If you are not doing outreach, what kinds of programs or interventions would you like to see on your campus concerning helping the homeless students?" When posed the same question, the RU participants cited that they would like to see "a one-stop-shop idea where we are all accessible to each other so that we can answer all the questions at their meeting." The focus group also talked of "serving as the middleman or advocate for the student; the mediator between the student and the campus because it's so unintuitive and intimidating." Acting as an advocate would allow them "to have those conversations alongside the students with the other people who are experts in aid, academic realms or care team vicinity would be helpful."
Discussion
Organizing and examining the qualitative data to find emerging themes is a mechanism for achieving a deeper understanding of the data (Creswell, 2014). Themes emerged from the interviews and focus group and were enhanced by review of the quantitative and qualitative analyses from the survey. Three themes presented in the data triangulation were education surrounding the issue of student homelessness, resource development, and eliminating barriers through policy and procedure.
Education Regarding Homelessness
As institutions work to recruit and retain students, there must be awareness that students have experiences that exist beyond measurement by an academic transcript or vita. Creating a plan to assist practitioners in student development, financial aid, and retention offices to enhance access and student success by combating issues like homelessness counteracts problems of access and social justice (Chapalot et al., 2015; Crutchfield et al., 2016). Participants in the interviews and focus group attested to the desire to influence a student's chance of success by eliminating barriers, but were hampered by a lack of education on homelessness and how it presents in higher education. Addressing issues by linking students to services that assist with housing and food insecurity is a key tenet of newer research, but these services are not widely known across campus. As illustrated in the qualitative and quantitative analysis, participants were not sure what policies and procedures existed to help students in crisis on their campuses. Moreover, in basic document analysis of RU's website, there were no resources for students to quickly access help with shelter or housing for emergent situations.
Creating a safety net for students encountering housing instability addresses the need for interventions to support resiliency and the social justice concerns surrounding access to higher education. Including homeless students in the narrative of higher education attainment supports diversity and limits the marginalization of the underrepresented, including students of color and of different orientations who need additional supports to be successful (Hallett & Crutchfield, 2017). Students without that safety net and without guidance have more difficulty engaging in the educational process (Hallett, 2010). Data collected in the focus group spoke to the need for that safety net and for students' ability to focus on their academics without the additional stressors of finances and housing. Academic Success Coaches speaking about being advocates for their students in crisis aligns with the research by Marshall (2004), who contended that professionals within higher education do not have a grasp on what they can do to assist the marginalized and are not educated in how to address issues of social justice. Data from the survey, focus group, and interviews illustrated a lack of conviction when speaking about what resources were available and demonstrated a lack of awareness about policies and procedures surrounding the issue of homelessness. Navigating the systems within the institution proved challenging for administrators and staff as they worked to advocate for students, help them break out of poverty, and disrupt the systems of inequity that continue to marginalize homeless students in higher education (Gupton, 2017).
Besides the notion of creating supports to assist students, the concept of homelessness and how it influences students was interpreted in many different ways across the data. Data collected via the interviews found that two of the three directors had a less student-focused approach to how they viewed students in insecure housing situations. In addition, homelessness was often described in the qualitative analysis in connection with mental illness. Students finding themselves in insecure housing arrive there through a variety of circumstances, and mental illness is not necessarily one of them.
Resource Development
The data suggested that administrators often had limited awareness of resources on or off campus that could help a student outside of the classroom. For example, data collected about policies and procedures to help students were limited among the survey participants and non-existent among the participants in the researchers' on-campus visit. Not having a clear path to interventions or resources does not allow students to get assistance without barriers. It was evident in the focus group data that the referral to the campus Care team was ambiguous and did not provide follow-up for the staff that made the referral. This led to a lack of confidence in the campus's ability to triage and treat the issues. Connection to campus is essential to support a student academically and socially.
The conceptual frameworks cited the need for students to feel connected to the campus as a component of retention theory. This theme was echoed in the context of resiliency theory, as education provides a means to escape homelessness through the creation of supports and relationships (Gupton, 2017). In addition, the use of mentors to promote a positive environment of support and encouragement is a way, according to Hallett (2012), to limit additional risk for a student and improve achievement. Building the support network enhances trust and relationships with adults and peers within the institution. Homeless students, due to their unstable environments, do not have long-term relationships with mentors and other means of support, which are critical to developing skills to react to the atmosphere of higher education and to cope with stressors (Gupton, 2017; Stratton et al., 2007).
Furthermore, the institutional interventions currently in place at RU are sporadic and do not address the structural changes needed to serve students, but tend to treat interventions as a quick fix. Hallett and Crutchfield (2017) advocated for a more holistic and fundamental approach to invoke change, instead of operating in a reactionary environment. Securing a fix for a housing issue is a start, but it is not the entire solution to creating retention and completion success.
Housing insecurity and related financial factors limit the ability of students to be involved in social and academic activities, regardless of why they become homeless. A focus group participant echoed this concept by illustrating the paradox of working versus going to class, explaining how a student struggled with the decision. Disconnecting from the goal of graduation, or from retention to the next semester, causes not only academic stress but also isolation from peers and the college experience (Kerby, 2015). An institution cannot control all external stressors for any student. Still, a basic understanding of the issues and of how supports are created and implemented for retention provides context and buy-in from staff and faculty.
Beyond the student experience, the data suggested a need for resources for faculty and staff in this space. Homelessness and housing insecurity are not just a problem for the low-income or mentally ill, as the data from the interviews and focus group illustrated. Many students come to college from home but, due to other factors, become housing insecure. Students who are estranged from family due to personal choices or sexual orientation also encounter housing insecurity (Hallett & Crutchfield, 2017). Educating staff and faculty about housing insecurity would eliminate perceptions of the issue as one affecting only the poverty-stricken or the mentally ill. Data gathered during this research, supported by the survey, confirmed the lack of awareness of the homeless situation.
Elimination of Barriers
Throughout the resiliency framework, many researchers discussed the creation of supports for students to feel a connection to the campus, peers, or faculty and staff. Barriers created by the institution to protect resources, or to protect the perceived integrity of the very systems designed to support students, create ill will and a lack of trust in the system (Ausikaitis et al., 2015). Two of the directors interviewed spoke of these systems as a necessity to protect resources and prevent gaming of the system. Within the context of social justice, interventions are needed but have to be framed as supportive, not punitive. Supportive interventions require a greater scope than the traditional financial aid model, which is the only official support in place at RU.
The focus group participants were appreciative of the emergency funds that were sometimes available to a student in a housing crisis. Still, the only students being assisted are the ones who are comfortable sharing their stories. There is some data available via the FAFSA regarding homelessness or the risk of becoming homeless, but overwhelmingly the surveyed institutions did no outreach based on those data. Also, most institutions had little or no understanding of the retention policy at their institution, or of the resources beyond the food pantry afforded within the community or institution. Social capital is lacking for most students who are in a housing crisis or were homeless before college attendance. This lack of capital, according to Skobba, Meyers, and Tiller (2018), limits people's ability to secure help or benefits through their connectedness to a person or network. The networks and systems within an institution of higher education are complex for any student, with or without strong social capital. Adding low support or an emergency to the mix can erode the student's ability to stay in class and realize their potential, as their basic needs are not being met (Hallett & Crutchfield, 2017; Maslow, 1943). Integrating a student into the college fabric is a component of retention theory, but one that is hard to achieve without eliminating stressors outside the classroom. These "multiple threats to learning" are barriers to integration and challenge retention efforts surrounding academic success (Masten et al., 2014). Academic success and integration are the theoretical underpinnings of Tinto's (1987) retention theory. Research from this study shows that barriers and perceptions exist which undermine a homeless student's ability to succeed, despite resiliency.
Conclusions
Despite not having any formalized policy or procedure, staff and directors are navigating the systems in place at RU to find means to assist individual students. The interventions are not scaled to help everyone and are not grounded in data that can be accessed from the FAFSA. Academic success coaches in the focus group voiced their desire to find resources but had concerns about their caseload and referral methodology. The concept of advocacy and mentorship was clear within the focus group and supports resiliency concepts of forming relationships to foster a sense of belonging. This belonging generates an environment that supports student learning and retention (Kerby, 2015). Integration via student supports in the academic and social fabric of the institution is central to retention, especially for students early in their academic careers (Kalsbeek, 2013).
Formulating these supports is difficult, as shown by the data collection. Resources are not well defined, well funded, or well known by staff at the institutions surveyed. High-profile initiatives, such as the work of TRiO at RU and food pantries across the surveyed schools, were well known and easily referenced for a student in need. The lack of housing interventions, or of knowledge of a housing issue, was prevalent in the research. Most institutions did not think there was much of a problem around homelessness despite what the national statistics revealed. Furthermore, one concludes that this lack of awareness correlates with the lack of policy and procedure to intervene for the homeless.
Additionally, the perceptions of staff surrounding the attributes of a student who is homeless were evident in the data collection. These perceptions suggest that there is a misalignment between what is happening on campuses and perceived reality. Unfortunately, the stigma of homelessness does not encourage students to self-report or tell their stories unless trust is gained (Ausikaitis et al., 2015). Support for these students must be framed respectfully and delivered privately, and new supports will need to be created, as the current, traditional supports do not address the homeless student experience (Chapalot et al., 2015). The data collection illustrates the need for supports and education throughout the institution to assist students with housing challenges. Although only personnel from one institution were interviewed, the survey data provided further information to assess the campus climate and the concerns surrounding this population at fifty institutions. Using this information as a starting point to build interventions to support homeless students will help financial aid professionals and campus partners begin the conversation.
Implications for Practice
Based on the results of the study, institutions need to be intentional about creating policies and procedures for assisting students experiencing homelessness. Financial aid offices, in conjunction with student support services and advisors, need to evaluate how students navigate processes on campus and how referrals are made. Additionally, staff and faculty need professional development opportunities to understand the homeless student experience. Higher education personnel need to create a comprehensive list of all resources available to students in times of housing crisis. In addition, administrations will need to enhance the ability of all staff to disseminate this information so that students do not have to go to multiple offices to tell their stories. Furthermore, each university needs to create a website for emergencies, including housing issues, that students can access anonymously if they do not wish to share their situation with staff or faculty. | 2020-04-23T09:15:23.054Z | 2020-04-21T00:00:00.000 | {
"year": 2020,
"sha1": "9a4a73b60b6312e28f479e345c4670dd79e79c59",
"oa_license": "CCBY",
"oa_url": "http://www.ccsenet.org/journal/index.php/hes/article/download/0/0/42555/44674",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9c8ac9e0a33899ea25221d0033194e63a70bb57f",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
21675815 | pes2o/s2orc | v3-fos-license | High levels of polyandry, but limited evidence for multiple paternity, in wild populations of the western rock lobster (Panulirus cygnus)
Abstract Polyandry, where multiple mating by females results in the temporal and spatial overlap of ejaculates from two or more males, is taxonomically widespread and occurs in varying frequencies within and among species. In decapods (crabs, lobsters, crayfish, and prawns), rates of polyandry are likely to be variable, but the extent to which patterns of multiple paternity reflect multiple mating, and thus are shaped by postmating processes that bias fertilization toward one or a subset of mated males, is unclear. Here, we use microsatellite markers to examine the frequency of multiple mating (the presence of spermatophores from two or more males) and patterns of paternity in wild populations of western rock lobster (Panulirus cygnus). Our data confirm that >45% of females had attached spermatophores arising from at least two males (i.e., confirming polyandry), but we found very limited evidence for multiple paternity; among 24 clutches sampled in this study, only two arose from fertilizations by two or more males. Single inferred paternal genotypes accounted for all remaining progeny genotypes in each clutch, including several instances when the mother had been shown to mate with two or more males. These findings highlight the need for further work to understand whether polyandry is adaptive and to uncover the mechanisms underlying postmating paternity biases in this system.
A common assumption in the sexual selection literature is that polyandry will inevitably lead to multiple paternity. Indeed, multiple paternity, as estimated by assigning parentage of offspring from putatively multiply mated females, is commonly used to estimate the level of polyandry in natural populations (Taylor, Price, & Wedell, 2014). Yet polyandry does not always translate into multiple paternity, as a number of postmating processes can ultimately determine which males are successful at fertilizing a female's eggs. For example, polyandry provides the scope for postmating episodes of sexual selection in the form of sperm competition (Parker, 1970) and/or cryptic choice (Eberhard, 1996; Thornhill, 1983), which have the potential to affect fertilization outcomes (Pizzari & Wedell, 2013). Sperm competition is the competition between the sperm of different males to fertilize a female's eggs (Parker, 1970), whereas cryptic choice occurs when females influence the outcome of sperm competition (Eberhard, 1996; Thornhill, 1983).
Sperm competition and cryptic female choice play critical roles in postmating sexual selection and have important consequences at both population and individual levels (Birkhead & Pizzari, 2002).
The mating systems of decapod crustaceans are highly diverse and complex (Duffy & Thiel, 2007;Martin, Crandall, & Felder, 2016). In many species, reproduction is synchronized with the molt cycle, with females being receptive only for a limited time after molting (Duffy & Thiel, 2007). Females approaching their reproductive molt are often guarded by males for one to several days before copulation (Duffy & Thiel, 2007;Subramoniam, 2013). Precopulatory male guarding is considered an evolutionary response to time-limited opportunity for fertilization and to the need to protect recently molted females (Duffy & Thiel, 2007).
In species with external fertilization (e.g., lobsters), males attach their spermatophores on the sternal plates of the female's cephalothorax during mating (Phillips, Cobb, & George, 2012). After mating, postmating guarding by the male occurs in some species, presumably to reduce the risk that females will mate with other males (Duffy & Thiel, 2007).
The reproductive behavior and life cycle of P. cygnus are described in detail elsewhere (Chittleborough, 1976; Phillips, 2013). Briefly, the spawning season commences in early spring, when males attach their spermatophores (sperm packets, typically termed "tar spots") to the sternums of receptive females. Fertilization takes place when females extrude eggs and scratch the spermatophoric mass to release motile sperm. Remnants of the attached tar spots remain until they are either covered by a second mating or removed during molting. The life cycle of P. cygnus includes a long (~9-11 months) oceanic larval phase, during which planktonic phyllosoma larvae disperse as far as 1,500 km offshore. Helped by favorable winds and currents, the larvae subsequently return to the continental shelf, where the final-stage larvae metamorphose into pueruli (postlarvae) that swim toward the shore and settle on shallow reefs. The settled pueruli develop into juveniles and subsequently adults in 5-6 years.
Here we provide new insights into the mating system and reproductive behavior of P. cygnus, which until now have been limited mainly to observations conducted under laboratory-controlled conditions (Chittleborough, 1974, 1976). Our recent work on wild populations of P. cygnus (J. Loo et al., unpubl. data) found evidence of high levels of polyandry in natural populations, with up to 52% of mated females at some locations carrying spermatophores from two or more males.

[Figure 1: The western rock lobster (Panulirus cygnus). Photograph courtesy of the Western Australian Department of Primary Industry and Regional Development.]
However, this previous study did not genotype fertilized eggs and therefore was unable to confirm whether multiple mating translated into multiple paternity. In this study, we use microsatellite markers to examine patterns of paternity in two wild populations of P. cygnus. By focusing on both singly and multiply mated females (i.e., females carrying spermatophores from one male or two or more males, respectively), we are able to test whether multiple mating leads to multiple paternity. In this way, our study combines data on multiple mating and offspring paternity to provide insights into the likely importance of postmating sexual selection in this system.
Sampling was conducted over 16 days in February 2015 by the Western Australian Department of Primary Industries and Regional Development, Fisheries Division, as part of their regular monitoring program. Lobsters were sampled at two locations (Figure 2), using dive- and pot-based survey techniques (Bellchambers et al., 2009).
For each lobster captured during these collections, the sex and carapace length (CL, measured to the nearest 0.1 mm using a dial caliper) were recorded. Tissue samples from pleopods were collected from all males with a CL >64.5 mm (the minimum CL of a mature male reported at Lancelin) and from females carrying spermatophores and/or eggs. A small piece of spermatophore and a cluster of eggs were removed from females. All tissue samples were preserved in 100% ethanol.
| DNA extraction
Total genomic DNA was extracted from spermatophores using the DNeasy Blood and Tissue kit (QIAGEN) following the manufacturer's protocol. Total genomic DNA was extracted from pleopods and individual eggs using proteinase K digestion followed by a DNA extraction method using DNA binding plates (Pall Corporation), as described in Ivanova, Dewaard, and Hebert (2006). The concentration and quality of the DNA of each sample were quantified using a NanoDrop ND-1000 spectrophotometer.

[Figure 2: Map showing the sampling sites at Rottnest Island. The areas shaded green represent marine protection zones.]
| Data analysis
The program MICRO-CHECKER (Van Oosterhout, Hutchinson, Wills, & Shipley, 2004) was used to detect genotyping or scoring errors caused by null alleles, large allele dropout, or stutter peaks in the maternal genotypes. Duplicate samples were detected from the probability of genotype identity using GENALEX v. 6 (Peakall & Smouse, 2006). The probability of identity (PI, the average probability that different random individuals share the same genotype by chance) and a more conservative estimate of PI, PIsibs, which takes into account the presence of relatives, were also calculated using GENALEX. The same software was used to estimate the number of alleles and the observed and expected heterozygosity for each locus from the maternal genotypes. Deviations from random mating were characterized using the FIS statistic (inbreeding coefficient); positive and negative FIS values indicate a deficit or excess of heterozygotes relative to random mating, respectively. Linkage disequilibrium between each pair of loci was evaluated by testing the significance of association between genotypes. Inbreeding coefficient estimates were obtained using the FSTAT version 2.9.3 software package (Goudet, 2001). The program GENEPOP 3.1 (Raymond & Rousset, 1995) was used to assess conformity to Hardy-Weinberg equilibrium (HWE). Probability values for deviation from HWE were estimated using the Markov chain method with 10,000 iterations.
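For readers who want to reproduce the basic summary statistics outside GENALEX and FSTAT, the sketch below (Python, illustrative only, with invented toy genotypes) computes expected and observed heterozygosity, a naive FIS (1 − Ho/He, simpler than the Weir-Cockerham estimator those programs use), and the standard per-locus probability of identity; multi-locus PI is the product of per-locus values:

```python
# Illustrative per-locus statistics; not the GENALEX/FSTAT source code.
from collections import Counter

def locus_stats(genotypes):
    """genotypes: list of (allele_a, allele_b) tuples for one locus."""
    alleles = Counter(a for g in genotypes for a in g)
    n = sum(alleles.values())
    p = {a: c / n for a, c in alleles.items()}          # allele frequencies
    sum_p2 = sum(q ** 2 for q in p.values())
    sum_p4 = sum(q ** 4 for q in p.values())
    he = 1.0 - sum_p2                                   # expected heterozygosity
    ho = sum(1 for a, b in genotypes if a != b) / len(genotypes)  # observed
    fis = 1.0 - ho / he if he > 0 else float("nan")     # naive inbreeding coeff.
    pi = 2 * sum_p2 ** 2 - sum_p4                       # probability of identity
    return he, ho, fis, pi

# Toy data: six individuals at one microsatellite locus (hypothetical alleles).
geno = [(152, 156), (152, 152), (156, 160), (152, 160), (156, 156), (152, 156)]
print(locus_stats(geno))
```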
Paternity was investigated by genotyping ~20 fertilized eggs obtained from each of the sampled females (see Table 2 for sample sizes). This level of sampling was based on analytical methods for calculating the statistical power to detect multiple paternity in highly fecund decapods (Veliz, Duchesne, Rojas-Hernandez, & Pardo, 2017), although the number of females sampled in our study was below the 50 females recommended in that analysis (see Section 4). However, power analysis of sampling 20 eggs per female indicates that we had the ability to detect multiple paternity more than 99% of the time if the contribution of sperm from two males was roughly equal. Even under the scenario of one male contributing the majority of sperm used to fertilize the egg mass (e.g., 90% of all sperm), our detection probability was still as high as 90%. Three different approaches were used to evaluate paternity: initial inference, the GERUD 2.0 software package (Jones, 2005), and the COLONY 2.0 software package (Wang, 2004; Wang & Santure, 2009). For the initial inference approach, paternal genotypes were inferred from non-maternal alleles observed in the offspring. Multiple paternity was assumed only if more than two non-maternal alleles occurred at more than one locus in the offspring, to allow for the possibility of mutation at one locus. We analyzed paternity with GERUD by using it to reconstruct the minimum number of possible paternal genotypes. GERUD uses an exhaustive algorithm that takes into account information from patterns of Mendelian segregation and genotypic frequencies in the population. As GERUD does not accept missing data, the number of loci used in this study varied from 4 to 7. The parameter for the maximum number of fathers was set to four, and the runs were conducted with known maternal genotypes. Initial inference and GERUD assume that males are heterozygous and that there is no allele sharing among fathers or between mother and father(s); consequently, they may underestimate the number of fathers.
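These detection probabilities can be checked with a back-of-the-envelope calculation; this is an illustrative sketch under a simple binomial model (eggs sampled independently from a two-sire clutch), not the full analytical method of Veliz et al. (2017):

```python
# Probability of detecting multiple paternity when one of two sires
# fathers a fraction q of the clutch: both sires must appear at least
# once among the n sampled eggs.
def p_detect_two_sires(n_eggs: int, q: float) -> float:
    return 1.0 - q ** n_eggs - (1.0 - q) ** n_eggs

print(f"{p_detect_two_sires(20, 0.5):.6f}")  # equal contributions: >0.99
print(f"{p_detect_two_sires(20, 0.9):.3f}")  # 90% skew: ~0.88, i.e. ~90%
```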
Lastly, we used COLONY to assign parentage based on a maximum-likelihood model. Unlike GERUD, this program accepts missing data.
We used the default settings, and all runs were performed with known maternal genotypes. Inferred paternal genotypes were compared to the genotypes of all sampled males. Multiple paternity was inferred for a clutch if at least two of the three methods (initial inference, GERUD, COLONY) detected more than one father.
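This consensus rule reduces to a simple majority vote across methods, as in the minimal sketch below (inputs are hypothetical sire counts, not the study's data):

```python
# A clutch is scored as multiply sired only when at least two of the
# three methods infer more than one father.
def consensus_multiple_paternity(sires_by_method: dict) -> bool:
    votes = sum(1 for n_sires in sires_by_method.values() if n_sires > 1)
    return votes >= 2

print(consensus_multiple_paternity(
    {"initial inference": 1, "GERUD": 2, "COLONY": 2}))  # True
print(consensus_multiple_paternity(
    {"initial inference": 1, "GERUD": 1, "COLONY": 2}))  # False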
In addition to the paternity analysis, inferred paternal genotypes for each clutch were compared to the genotype of the spermatophore attached to the corresponding mother. Genotype matching was carried out using the genotype identity option in GENALEX.
| RESULTS
A total of 25 females carrying eggs (15 from Armstrong Bay and 10 from Kingston Reef) were genotyped. Based on genotype identity, one female from Kingston Reef was sampled twice. Of the remaining 24 females, 11 (~46%) had attached spermatophores with genotypes consisting of more than two alleles at a locus, indicating the presence of DNA from more than one male. (Note that we have previously confirmed that spermatophores consisting of multiple genotypes are unlikely to result from genotyping errors or contamination of female DNA and are therefore likely to result from multiple mating; J. Loo et al., unpubl. data.) A further five females had spermatophores with genotypes from a single male that did not match the genotype of the inferred sire, suggesting that these females had also mated with two or more males during the reproductive season. The maternal genotypes showed no evidence of null alleles, and there were no significant deviations from HWE at any locus (p > .05 in all cases).
The probability of sampling identical maternal genotypes (PI) was 3.5 × 10^−7, and a more conservative estimate of PI that takes into account the presence of relatives, PIsibs, was 5.2 × 10^−3. The number of alleles per locus ranged from 2 to 26, with observed heterozygosity ranging from 0.042 to 0.917 (Table 1).
Based on initial inference, only one of 24 clutches showed multiple paternity. According to initial inference and GERUD, the minimum number of sires per clutch was one in 22 cases, with two cases of multiple paternity detected (minimum number of sires of two and three). The analysis in COLONY suggested three instances of multiple paternity (Table 2). Following a consensus approach, multiple paternity was identified only in the two clutches where at least two of the three methods used detected more than one sire. Interestingly, none of the 489 males that were sampled in this study was identified by COLONY as being a putative father of the 24 clutches examined.
Inferred paternal genotypes (from fertilized eggs) were compared with the genotypes of the spermatophores collected from the corresponding egg-carrying females (Table 2). Of these, eight (33%) matched the genotype of the spermatophore attached to the mother. The remaining inferred paternal genotypes either did not match the genotype of the spermatophore attached to the mother (17%) or could not be compared to it because the spermatophore contained ejaculates from more than one male (the remaining 50%).
| DISCUSSION
Our study confirms that while multiple mating by female P. cygnus is relatively common, incidences of multiple paternity are extremely rare. We found that spermatophores attached to females often came from two or more males, confirming our previous evidence that polyandry is widespread in natural populations of P. cygnus (J. Loo et al., unpubl. data). Despite this evidence for female multiple mating, however, we found limited evidence of multiple paternity.
One simple explanation for the disparity between patterns of female multiple mating and the incidence of multiple paternity is that our sampling protocol may have resulted in low statistical power. Recently, Veliz et al. (2017) developed an analytical method to assess the statistical power to detect multiple paternity in crabs. According to their analysis, sampling 20 eggs from n = 50 females yields very high statistical power to detect multiple paternity, even in highly fecund species with 1 × 10^6 eggs per clutch. In our study, we were restricted to approximately half this number of females, possibly restricting our ability to fully detect cases of multiple paternity. However, Veliz et al. (2017) also found that studies employing reduced levels of sampling (in terms of clutch size and number of females sampled) still had high power (~98%) to detect multiple paternity. Based on our power analysis, we suspect that even if we had improved our statistical power with greater levels of sampling, cases of multiple paternity would still have been rare and/or paternity would have been heavily skewed toward a single male in most cases.
A second possible explanation for the disparity between patterns of female multiple mating and the incidence of multiple paternity is that females mate consecutively with individual males each time they produce a batch of eggs, and that our observed patterns of (largely single) paternity reflect a pattern of serial monogamy over the course of the breeding season. As we note above, in P. cygnus mating entails the attachment of the male's spermatophore (tar spot) to the underside of the female, which is partially eroded by the female during fertilization and is only sloughed off at the following molt. Subsequent matings within the same reproductive season (molt cycle) involve a male depositing a fresh spermatophoric mass on top of the previously eroded (used) spermatophore. This can lead to the spermatophoric mass on a female being dominated by a single sire (by virtue of his positioning and numerical supremacy) while still containing DNA from multiple sires, which is reflected in the high incidence of multiple mating and low occurrence of multiple paternity. However, when double spawning has been observed, it is more likely to occur in larger females (Chittleborough, 1976; Chubb, 1991). This pattern of larger females spawning twice in a season has also been observed in other spiny lobsters (Briones-Fourzán & Lozano-Alvarez, 1992; Gomez, Junio, & Bermas, 1994; Macfarlane & Moore, 1986). While these observations support the idea of serial monogamous matings, we have confirmed elsewhere that females carrying spermatophores from more than one male had a wide range of body sizes (carapace length 69.5-106.5 mm) and there was no evidence of higher rates of multiple mating in larger females (J. Loo et al., unpubl. data).
A final explanation for the high levels of polyandry observed in this study is that females mate with multiple males between fertilization events, and sperm competition and/or cryptic female choice function to refine fertilization success in favor of a subset of mated males. This explanation also accounts for the disparity between patterns of multiple mating (high incidence) and multiple paternity (low incidence). In species where females store spermatophores [...]. Cryptic female choice has been proposed in decapods based on behavioral observations, including failed copulations (Bauer, 1996; Diesel, 1990; Ra'anan & Sagi, 1985) and delayed oviposition (Thiel & Hinojosa, 2003) [...] mating. This suggests that although females mated with more than one male, fertilization was attained by only a subset (one or two) of these males. We have yet to determine whether female multiple mating is adaptive (e.g., because it enables females to ensure that sperm from intrinsically "good" males win the race to fertilize their eggs; Curtsinger, 1991; Yasui, 1997) or is a by-product of accumulated matings that take place throughout the breeding season.

[Table 1: Genetic variation at microsatellite loci used in this study.]

[Table 2 note: POLY indicates cases of polyandry where the spermatophore consisted of more than one genotype (i.e., three or more alleles at at least one locus). Criterion for multiple paternity: detection of a minimum of two sires per clutch by at least two of the three methods.]
However, our observations of high levels of female multiple mating reveal the potential for postmating sexual selection to operate in this system. We eagerly await follow-up studies designed to elucidate such mechanisms and test for possible reproductive benefits of polyandry.
ACKNOWLEDGMENTS
We thank the School of Biological Sciences for funding and the Rottnest Island Authority (Department of Biodiversity, Conservation, and Attractions) for their in-kind support. We also thank Yvette Hitchen and Sherralee Lukehurst for technical advice in the laboratory and the Department of Primary Industries and Regional Development, Fisheries Division of Western Australia, for providing P. cygnus tissue samples.
AUTHOR CONTRIBUTIONS
JL, WK, JE, JH, and SL conceptualized and planned the study; JL, JH, and SL were responsible for field collections; JL conducted the molecular work and carried out the paternity analyses; JL and WK were principally responsible for the statistical analyses, with input from all authors; all authors were involved in writing and editing the manuscript.
CONFLICT OF INTEREST
The authors declare no conflicts of interest. | 2018-05-21T21:28:04.234Z | 2018-04-10T00:00:00.000 | {
"year": 2018,
"sha1": "423a5984cb628044d7ecda1ca4029809fb4f67e1",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.3985",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "423a5984cb628044d7ecda1ca4029809fb4f67e1",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
2366179 | pes2o/s2orc | v3-fos-license | Non-Cardiogenic Pulmonary Edema in Salicylate Poisoning
Salicylate-induced pulmonary edema (SIPE) can occur in both acute and chronic users of aspirin or salicylate products. The medical history, especially when it reveals the use of salicylates, is critical when considering this diagnosis. Unfortunately, the neurologic and systemic effects of salicylate toxicity may hinder the ability to obtain a reliable medical history. SIPE should be considered in patients who present with pulmonary edema and neurological changes, anion-gap metabolic acidosis, or possible sepsis. Some patients may be treated for "pseudosepsis" or other conditions, thereby delaying the diagnosis of salicylate intoxication. Misdiagnosis and delayed diagnosis of SIPE can lead to a significant increase in morbidity and mortality. Serum and urine alkalinization by administration of intravenous sodium bicarbonate are commonly utilized therapeutic strategies. Finally, hemodialysis is a therapy that should be considered early in the course of treatment. The objective of this case report and review is to emphasize the importance of rapid diagnosis and appropriate treatment in patients with SIPE, and to summarize the current literature as it relates to the adult population.
Introduction
Salicylate-induced pulmonary edema (SIPE) is a complication of salicylate toxicity that can be difficult to diagnose and treat. Significant mortality and morbidity may result from delayed diagnosis or misdiagnosis of SIPE. This case report highlights the early recognition and treatment of this uncommon condition.
Case Report

A 41-year-old African American female with a past medical history of asthma and polysubstance abuse presented to the ER with shortness of breath. The patient was found to be in severe respiratory distress, and only a limited history could be obtained. She was subsequently intubated due to hypoxic respiratory failure. In the limited history obtained prior to intubation, she denied chest pain, recent fevers, cough, palpitations, headaches, dizziness, and any urinary or bowel complaints. She was on an albuterol inhaler. She also admitted to using cocaine on the day of presentation.
On examination, the patient was afebrile, tachycardic, and markedly tachypneic. Pupils were bilaterally equal, round, and reactive to light. The neck was supple and jugular venous pressure was not elevated. Minimal wheezing was heard on auscultation of the lungs.
Preliminary blood work done in the ER was significant for a bicarbonate of 9 with a sodium of 136 and a chloride of 111, hence an anion gap of 26. She had normal renal and liver function tests. Hematology was significant for a hemoglobin of 9.4 with a mild leukocytosis of 13.2. The initial arterial blood gas showed severe metabolic acidosis with respiratory alkalosis. Further blood work sent to identify causes of the anion-gap acidosis was significant for a salicylate level of 56, with normal alcohol, acetaminophen, and lactic acid levels. Later, the patient's family was contacted; they had found an empty bottle of ASA at home and believed she had been using it for pain over the preceding few days. However, they denied any suicidal ideation or the possibility of acute ingestion.
A CXR (Figure 1) done in the ER showed bilateral patchy interstitial infiltrates suggestive of pulmonary edema. The PaO2/FiO2 ratio was less than 200, and the patient was admitted to the MICU on ventilator settings with low tidal volumes and PEEP, as per the ARDS protocol (Figure 2).
Considering salicylate poisoning with acidosis, a bicarbonate drip was started. A later chest X-ray remained highly suggestive of pulmonary edema. A transthoracic echocardiogram showed normal left ventricular function, indicating a non-cardiogenic origin of the pulmonary edema. Considering severe non-cardiogenic pulmonary edema in the setting of salicylate poisoning, the decision was made for emergent hemodialysis. The patient responded very well to hemodialysis: salicylate levels trended down and the patient was successfully extubated on day 3 of the ICU stay.
[Figure 1: Bilateral pulmonary edema. The initial CT scan also showed bilateral pulmonary edema.]

Discussion

The pathogenesis of SIPE is uncertain and possibly multifactorial. It has been speculated that aspirin causes pulmonary edema by central nervous system "irritation" [1]. Hypothalamic stimulation leads to a neurogenically controlled adrenergic discharge, which results in increased venous return and increased left ventricular end-diastolic pressure [2]. However, Karliner [3] considered it unlikely that left ventricular failure from acute sympathetic discharge is an important mechanism in pulmonary edema, and he emphasized widely divergent experimental and clinical observations relative to neurogenic pulmonary edema. The normal transthoracic echocardiogram in this case argues against this particular mechanism, although the central neurogenic theory should not be entirely discounted. Moss [4] has demonstrated that central nervous system damage can lead to "shock lung", a form of non-cardiogenic pulmonary edema. It is postulated that central nervous system damage can lead to neurologically mediated vasoconstriction in the pulmonary circulation, resulting in pulmonary edema with a normal wedge pressure.
While most types of non-cardiogenic pulmonary edema are due to direct physical or chemical damage, there are many other complex operative mechanisms, including antigen-antibody reactions, release of vasoactive substances (histamine, kinins, prostaglandins, etc.), disseminated intravascular coagulation, and immunologic reactions to drugs [5]. SIPE might be mediated through one or more of these mechanisms, since aspirin is a potent inhibitor of prostaglandin synthesis. It is conceivable that SIPE may result from an imbalance of vasoconstrictive and vasodilatory prostaglandin production.
The commonly employed forms of treatment for salicylate intoxication, i.e., forced alkaline diuresis and hemodialysis, are both potentially hazardous for the lungs. Forced alkaline diuresis with crystalloid solutions has the potential to increase lung microvascular pressure and the transfer of fluid across the injured pulmonary vascular bed, and to decrease colloid oncotic pressure, a factor which may be important in the pathogenesis of non-cardiogenic pulmonary edema. It may worsen pulmonary edema if the patient is already in non-cardiogenic pulmonary edema due to direct lung injury by aspirin, as in our patient.
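The rationale for urine alkalinization is ion trapping of salicylate, a weak acid with a pKa of roughly 3.0; that pKa is a textbook pharmacology approximation, not a value taken from this report. A minimal Henderson-Hasselbalch sketch shows how raising urine pH shrinks the reabsorbable, un-ionized fraction by roughly tenfold per pH unit:

```python
# Illustrative Henderson-Hasselbalch calculation for a weak acid.
# pKa ~3.0 for salicylic acid is an assumed textbook value.
PKA_SALICYLATE = 3.0

def fraction_unionized(ph: float, pka: float = PKA_SALICYLATE) -> float:
    # [A-]/[HA] = 10^(pH - pKa), so the un-ionized (reabsorbable)
    # fraction is 1 / (1 + 10^(pH - pKa)).
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (6.0, 7.0, 8.0):
    print(f"urine pH {ph}: un-ionized fraction ~ {fraction_unionized(ph):.1e}")
# pH 6 -> ~1e-3; pH 8 -> ~1e-5: alkalinization traps salicylate in urine
```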
No clear guidelines exist on when to use HD in salicylate intoxication or SIPE. However, several references suggest the use of HD specifically in salicylate-intoxicated patients with evidence of organ damage, such as pulmonary edema, CNS disturbances, and renal impairment [6,7], as was the case in our patient, who benefited from emergent hemodialysis. Many references have recommended the use of HD in patients who have a very high salicylate concentration (>100 mg/dL) regardless of the symptoms present [8]. However, patients on chronic salicylate therapy may benefit from HD implemented at lower serum concentrations, especially in the presence of clinical decline. Other benefits HD may offer include improvement of acid-base and electrolyte abnormalities. Given the pharmacokinetic properties of salicylate after intoxication (i.e., an increased unbound fraction in serum), it seems appropriate to consider HD as a valuable therapeutic modality in salicylate-induced pulmonary edema until additional morbidity and mortality data are available. At the present time, clinical judgment, not salicylate concentrations, should be the major guiding force in the decision of when to use HD. | 2019-01-23T16:49:48.196Z | 2012-02-12T00:00:00.000 | {
"year": 2012,
"sha1": "d76ebd2ecc8964fd70f1d0ea9b188f95c1967031",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/non-cardiogenic-pulmonary-edema-in-salicylate-posioning-2165-7920.1000111.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ce9cc3e34417d598415f06aeeb7957bd1f62ddb7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256980467 | pes2o/s2orc | v3-fos-license | Environmental DNA Assay for the Detection of the American Bullfrog (Lithobates catesbeianus) in the Early Stages of the Invasion in the Ebre Delta
Simple Summary

The introduction of alien species is one of the major causes of biodiversity loss. The American bullfrog (Lithobates catesbeianus) is considered to be one of the most harmful invasive species and broadly threatens native amphibians. Detecting an invasive species in the first stage of its arrival is critical to controlling its colonization and avoiding its establishment. This early detection requires tools with high sensitivity, and methods based on the analysis of free environmental DNA (eDNA) are promising. The present article develops an eDNA assay to monitor the early process of invasion of the American bullfrog in the Ebre Delta (Spain), in a scenario where the presence of bullfrog specimens is very low and scarcely detected by traditional methods. In 2018, bullfrog tadpoles were found for the first time in the Ebre Delta. Two years later, despite the species not being well established, our eDNA assay detected the presence of bullfrogs in several locations. This methodology proved to yield higher sensitivity with lower sampling effort than traditional methods. Based on our experience, we also provide solutions to the challenges associated with the use of eDNA. Developing a rapid and cost-effective protocol for use in the early stages of an invasion (as occurred with the American bullfrog in the Iberian Peninsula) is essential to facilitate the detection, control, and eradication of an invasive species early in the invasion process.

Abstract

The American bullfrog (Lithobates catesbeianus) is considered to be one of the most harmful invasive species. In the Iberian Peninsula, this species had been cited only occasionally until 2018, when L. catesbeianus appeared in the Ebre Delta and, for the first time, started breeding in a territory of the Peninsula. Using environmental DNA (eDNA) analysis and visual surveys, the American bullfrog invasion in the Ebre Delta was monitored across two consecutive years (2019-2020). No specimens were observed in 2019, and the eDNA survey also failed to detect this species in the Delta. In 2020, two individuals were captured and, under the most conservative criteria to constrain the number of positive detections, eDNA analyses detected the presence of the American bullfrog in at least five locations. The eDNA assay yielded higher sensitivity with lower sampling effort than traditional methods. Although the American bullfrog does not yet appear to be well established in the Ebre Delta, only a few bullfrog individuals could be enough for its establishment in suitable habitats. In this context, eDNA assays are essential tools to facilitate the detection, control, and eradication of this species in the first stage of the invasion process.
Introduction
The introduction of alien invasive species is considered to be one of the world's major causes of biodiversity loss [1,2]. Freshwater ecosystems are especially susceptible to such introductions, and amphibians are probably the most fragile species in these ecosystems.
The American Bullfrog
The American bullfrog (Lithobates catesbeianus) is native to eastern North America and is listed among the world's 100 worst invasive alien species in the Global Invasive Species Database [8]. It is also the most widely established alien amphibian species; it has been introduced in 59 regions throughout the world during the past two centuries [9]. American bullfrogs are large animals, reaching up to 900 g in weight and 20 cm in length. They may out-compete native species, acting as predators [10], or transmit exotic diseases such as Batrachochytrium dendrobatidis (Bd [11]) [12,13], and they have been responsible for the depletion and extinction of many amphibian populations [14,15]. Both Spanish and European laws (Spanish Real Decreto 630/2013; EU Regulation 1143/2014) forbid the possession, transport, and trade of bullfrog specimens.
In Europe, the American bullfrog has been introduced in several regions and has naturalized populations in France, Germany, Italy, Belgium, and Greece [16]. In the first three of these countries, the depletion of native amphibian species has been documented [17]. In the Iberian Peninsula, some sporadic bullfrog records were reported at the end of the twentieth century, mostly related to escapes from closed farms. At the beginning of this century, some specimens were occasionally captured in the Collserola Mountains (Barcelona) and in the Canary Islands, the latter suspected to have been introduced through trade [18,19]. However, all of these records reported non-successful introductions, and the study by Ficetola et al. [16] considered this species to be eradicated in the Peninsula. More recently, in 2009, bullfrog tadpoles were detected in Barcelona among aquarium trade stock from Italy [20] and in the Ebre Delta, where a single specimen was recorded in 2012 [21]. Until 2018, American bullfrog records in the Iberian Peninsula had been limited to these occasional sightings. Unfortunately, this situation changed in June of that year, when several tadpoles were reported in a lagoon of the Ebre Delta (Cubeta 3, herein "ground zero", Figure 1) and the reproduction of the bullfrog in the Iberian Peninsula was confirmed for the first time.
Environmental DNA (eDNA) for Monitoring the Process of the Invasion
The process of biological invasion in freshwater ecosystems takes place in several stages (introduction, establishment, and spreading to the new habitat), which starts with the transport of the specimens from native or invaded areas to a new location [22]. The early detection of the species in the first stage of introduction is critical to control its colonization and to avoid its establishment and the subsequent expansion [23,24]. This detection needs to be reliable even when densities are very low, so that it can significantly increase the chances of eradication and reduce the economic and ecological impacts [25,26]. The effective management of early-stage invaders requires tools with high sensitivity, which are not always available for aquatic ecosystems, where traditional surveillance methods are not reliable indicators of occurrence [27]. In recent years, non-invasive methods based on the analysis of free environmental DNA (eDNA) have been developed [28]. This analysis is especially useful in aquatic ecosystems, both in freshwater and marine environments [29][30][31]. One of the big advantages of eDNA is the detection of extant populations with a very small number of individuals [32], as it allows for the early detection of biological invasions [24]. In amphibians, eDNA allows for the detection of larval stages of species with taxonomical identification problems, and DNA-barcoding protocols have been used to monitor populations [30,33,34].
The design of the eDNA barcoding assay focuses on targeting specific DNA regions that can be amplified with a conventional polymerase chain reaction (PCR) and thus detect the organism. For the American bullfrog, previous eDNA analyses of water samples have demonstrated the usefulness of this technique [29,35,36]. The study by Ficetola et al. [29] proved the high sensitivity of the method, with positive detections in natural lagoons of 1,000-10,000 m² where just one or two adult specimens had been observed during visual surveys. Dejean et al. [35] confirmed the validity of this method for species detection at very low densities, reporting up to five times more positive sites than diurnal and nocturnal surveys. However, all of these previous studies were conducted in regions (south-west France) where the species had been introduced, where there were stable populations, and where its identification by eDNA could be contrasted with traditional (visual) methods.
Our study developed a rapid and cost-effective protocol based on the use of eDNA barcoding to monitor the early stages of the American bullfrog invasion in the Ebre Delta. It deals with a completely different scenario from other similar studies, as American bullfrog specimens have rarely been observed in the studied region. The analyses were launched in 2019 and continue up to the present. This study aimed to compare traditional and molecular methods of detection, to discuss the first eDNA-positive results (2020 survey), and to explain the potential errors and challenges of the eDNA assay.
Study Area and Bullfrog Detection
In June 2018, about 1,000 tadpoles of the American bullfrog were found in three lagoons (DNA1, DNA2, and Cubeta 3 locations) in the north part of the Delta (northern hemidelta, Figure 1, Table 1), making this the first reported reproduction event of this species in a natural ecosystem of the Iberian Peninsula (Table 1). Since then, the Delta Natural Park has implemented a rigorous plan to monitor and eradicate the species, including tadpole coop traps, adult terrestrial traps, terrestrial transects, and acoustic surveys. Surveys were more intensive in the area where tadpoles were found and its surroundings, but also took place all over the northern hemidelta. Coop traps (fyke nets) were placed during the entire reproductive period in areas where tadpoles and adults had been found, specifically where the environmental and physicochemical conditions were most suitable for the bullfrog (eight traps in the ground zero area and six in the DNA10 area). Traps were visited every three days. Adult terrestrial traps were built specifically to capture adult bullfrogs; they were made of metal mesh, with bait inside and a one-way entry door. Three adult traps were placed in the ground zero area. Terrestrial transects and acoustic surveys consisted of 17 previously designed routes along the northern hemidelta, repeated during the months of American bullfrog activity. Acoustic surveys were made at several points along the routes, listening for 15 min at each point between 22:00 and 24:00, when the calls of male bullfrogs to attract females are usually heard (Table 1, and see details at https://mediambient.gencat.cat/web/.content/home/ambits_dactuacio/patrimoni_natural/especies_exotiques_medinatural/llista_sp_catalogades/amfibis/granota-toro/Informe-granota-toro.-PNDE.-2020.pdf, accessed on 20 December 2022). About 1,000 larvae were found in 2018 and at least three adult specimens were heard, all within ground zero. No specimens were found anywhere else. Fortunately, these three lagoons were artificial, and their water connection with the rest of the Delta could be easily blocked. To contain the terrestrial expansion of the species, a six km metallic fence was built during July 2018 to isolate the ground zero area. After summer, some juvenile post-metamorphic individuals were captured (76), most within the fence but some just outside (Table 1). At this point, the three lagoons were highly salinized, with a salt concentration (>30 g of salt/L) lethal for all amphibians in the larval phase. Afterwards, no more specimens were seen or heard in the Delta until June 2020, when two deceased specimens were found: one at the DNA10 site (Figure 1), located seven km from ground zero, and another in ground zero (Cubeta 3) a few days after the last eDNA survey.
eDNA Sampling and Extraction
In June 2019, the first eDNA survey was performed at eight locations: seven close to ground zero, where the species had been seen the year before (Figure 1), and an external location (DNA7) in the south of the Delta (southern hemidelta). In 2020, two eDNA surveys were performed, in June and July, covering the whole breeding season of the American bullfrog. During the June survey, nine locations were sampled: eight already included in the 2019 survey (DNA3-9 and Cubeta 3), plus one more location close to the site where a new specimen was found in June 2020 (DNA10). In July, sampling was extended to 23 locations distributed throughout the northern hemidelta. Sampling was designed according to the results of previous surveys, at locations close to the ground zero area and to DNA10, and considering all locations connected to these areas via water channels (Figure 1). Environmental conditions did not differ between sites; in all cases, ponds held still water with moderate turbidity (Supplementary Figure S1). Temperature and salinity ranges were 21.8-24.3 °C and 0.9-2.41 g/L, respectively, in June 2019, and 23.2-26.2 °C and 0.8-1.3 g/L, respectively, in July 2020.
For each eDNA survey, samples from all locations were collected on the same day, with in situ water filtration to avoid contamination [37]. All non-disposable sampling equipment was cleaned with a 10% dilution of bleach after use. A filter funnel (250 mL, 47 mm; Thermo Fisher Scientific, Melbourne, Australia) with a peristaltic pump and cellulose nitrate filters of 0.45 µm pore size were used. A total volume of 500 mL of water was filtered at each location, through two separate filters (250 mL each) to avoid clogging. Each filter was subsequently folded and preserved in a 1.5 mL tube with 1 mL of ATL lysis buffer (Qiagen, Hilden, Germany). Negative control samples (nuclease-free water) were filtered after every five locations. In the lab, the extraction process continued with digestion, adding 100 µL of proteinase K to each filter in ATL buffer, followed by overnight incubation with shaking at 120 rpm and 37 °C. DNA was subsequently purified using the DNeasy Blood & Tissue Kit (Qiagen), increasing the AL lysis buffer and ethanol volumes up to 500 µL. The protocol was modified so that all of the filtered volume for a single location was transferred to the same spin column. After the final elution, eDNA extractions were diluted 1:10, 1:100, and 1:1000 with nuclease-free water. Negative controls were processed alongside the eDNA samples to monitor putative contamination during the sampling and extraction processes. DNA extraction and the subsequent amplifications were performed in separate rooms. A positive control was obtained by extracting DNA from American bullfrog muscle tissue using the same DNeasy Blood & Tissue Kit (Qiagen).
eDNA Amplification
PCRs were first performed with primers cytbF1 and cytbR1 [29], which amplify a fragment of 79 base pairs (bp) of the cytochrome b mitochondrial gene (cytb) (Table 2). This set of primers was tested against the native amphibians living in the Ebre Delta, Bufo bufo (Linnaeus, 1758), Pelophylax perezi (López-Seoane, 1885), and Lissotriton sp. (Bell, 1838), and against another alien species, the painted frog, Discoglossus pictus, and it yielded positive amplifications in Discoglossus pictus. This species is native to Mediterranean Africa and has been introduced in France and north-east Catalonia [38]; therefore, its presence cannot be excluded in the Delta. Two additional American bullfrog primers (F2 and R2) were therefore designed to be used in combination with the original ones. For this, the software Primer3 was used on multiple alignments of American bullfrog sequences in Geneious software, version 5.6 [39]. This new set of primers amplified a fragment of 200 bp of cytb that included the region amplified by cytbF1 and cytbR1, and failed to amplify in all native amphibians as well as in the painted frog. Basic local alignment search tool (BLAST) searches also showed that these primers do not match with high scores to any other sequences stored in GenBank. Amplifications of all primer combinations with these new primers (cytbF1 + cytbR2, cytbF2 + cytbR1) were then checked in tissue (muscle) and positive eDNA samples of the American bullfrog. The primer set cytbF1 + cytbR2 (150 bp) was selected as the optimal size combination for eDNA amplification, and cytbF2 + cytbR1 was kept as an alternative set.
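As a rough illustration of this kind of in-silico specificity screen, the sketch below scans a primer against both strands of candidate templates by exact matching. The primer and template sequences are placeholders invented for the example, not the study's actual oligos or Table 2 sequences, and real screens (e.g., BLAST or Primer3's own checks) tolerate mismatches:

```python
# Toy in-silico cross-reactivity check; all sequences are hypothetical.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def binds(primer: str, template: str) -> bool:
    # Exact-match scan of the primer against both strands of the template;
    # a first-pass filter only, since real annealing tolerates mismatches.
    return primer in template or revcomp(primer) in template

primer_f = "ACCTGACTATTAGGCCTC"                            # placeholder primer
templates = {
    "L. catesbeianus cytb": "TTACCTGACTATTAGGCCTCTGC",     # placeholder
    "P. perezi cytb":       "TTACCAGACTTTTAGGACTATGC",     # placeholder
}

for name, tpl in templates.items():
    print(name, "->", "amplifiable" if binds(primer_f, tpl) else "no match")
```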
The presence of native amphibians (Bufo bufo, Epidalea calamita (Laurenti, 1768), Lissotriton helveticus (Razoumovsky, 1789), Pelobates cultripes (Cuvier, 1829), and Pelophylax perezi) is recorded all along the Delta [40], and hence in all sampled locations. Therefore, primers 16SA-L and 16SB-H, described by Vences et al. [41] for amphibian DNA barcoding, were used as positive controls to check for PCR inhibition in eDNA samples. Beforehand, positive amplification with these primers was checked in tissue samples of the most abundant amphibians in the Delta (Bufo bufo and Pelophylax perezi). These primers amplify a fragment of 594 bp of the mitochondrial 16S rRNA gene, and PCR conditions were the same as those used for the American-bullfrog-specific PCRs (Table 2).
Table 2. Primers used in this study for universal amphibian and American bullfrog DNA amplification.

For all primer combinations, PCRs had a total volume of 30 µL, containing 2 µL of eDNA extraction, 3 µL of 10× buffer (BIOLINE), 0.15 µL of Taq DNA polymerase (BIOLINE, 5 U/µL), 0.6 µL of forward primer (10 µM), 0.6 µL of reverse primer (10 µM), 3 µL of dNTP mix (2 mM), and 0.9 µL of MgCl2 (50 mM). For all reactions, thermal cycling conditions consisted of an initial denaturation step at 94 °C for 3 min, followed by 10 touch-down cycles of 94 °C for 30 s, 65-55 °C (−1 °C per cycle) for 1.5 min, and 72 °C for 1.5 min, then 30 cycles of 94 °C for 30 s, 55 °C for 1.5 min, and 72 °C for 1.5 min, plus a final step of 72 °C for 5 min.
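For readers who want to script this profile (e.g., to document a run or program a cycler export), the following minimal Python sketch reproduces the touch-down schedule described above. The temperatures and cycle counts come from the text; the function name and output format are ours.

```python
# Sketch of the touch-down cycling profile described above: 94 °C denaturation,
# 65 -> 55 °C touch-down annealing at -1 °C per cycle, then 30 cycles at 55 °C.

def touchdown_schedule(td_start=65.0, td_cycles=10, plateau_tm=55.0, plateau_cycles=30):
    """Return the annealing temperature (in °C) for each PCR cycle."""
    touchdown = [td_start - i for i in range(td_cycles)]   # 65, 64, ..., 56 °C
    plateau = [plateau_tm] * plateau_cycles                # 30 cycles at 55 °C
    return touchdown + plateau

schedule = touchdown_schedule()
print(len(schedule), "cycles; anneal temps:", schedule[:12], "...")
```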
All PCR reactions were set up in a PCR UV chamber. In general, for each sampled location, two or three replicates of the 1:10 and the 1:100 dilutions of the eDNA extraction were amplified with the specific (cytbF1 + cytbR2) and universal primers (16SA-L + 16SB-H). Replicates of the undiluted eDNA extraction (1:1) were also amplified for the 2020 survey samples. Additionally, for some locations of the 2020 survey, 1:1000 dilutions were also used. This design yielded at least six replicates per sample and per primer set. In all cases, negative controls of the extraction (filtered and processed nuclease-free water) and PCR negative controls were included, as well as positive controls from a DNA extraction of American bullfrog tissue.
All PCR products were visualized on a 2% agarose electrophoresis gel stained with GelRed™ in an Axygen gel system. Electrophoresis was run for 45 min to obtain a clear distinction between the small fragment amplified by the specific bullfrog primers and primer dimers (Supplementary Figure S2).
Design of a Pipeline to Validate Results against the Possibility of False Positives and Negatives
Based on the extremely low density of American bullfrog specimens in the studied region, a low detection probability of our eDNA PCR assay was assumed. Consequently, two or three initial replicates of three diluted concentrations were set up, summing up to eight replicates per sample. Following this basal design, a pipeline (Figure 2) was established to produce and validate the results considering the amplifications of a bullfrog-specific PCR (specific PCR) and an amphibian universal PCR (universal PCR). If there was positive amplification in the universal PCR in at least two replicates but none of the eight replicates were amplified by the specific PCR, the sample was considered negative. Alternatively, if the universal PCR failed, or when any of the eight replicates were amplified by the American bullfrog PCR, a second round of PCRs was performed to confirm the results (Figure 2).
Figure 2. Pipeline designed to produce and validate results considering the amplifications of a bullfrog-specific PCR (specific PCR) and an amphibian universal PCR (universal PCR). Although not found in this study, if both universal and specific PCRs resulted in negative results, they could not be validated.
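The first-pass decision logic of this pipeline can be summarized in a few lines of code. The sketch below is illustrative only: the function and thresholds paraphrase the rules stated above (at least two positive universal replicates to exclude inhibition; any specific amplification, or a failed universal PCR, triggers a second round).

```python
def validate_sample(universal_pos, specific_pos):
    """First-pass decision from the Figure 2 pipeline (sketch).

    universal_pos: number of positive universal-PCR replicates;
    specific_pos: number of positive bullfrog-specific replicates (out of 8)."""
    if universal_pos >= 2 and specific_pos == 0:
        return "negative"               # inhibition excluded, no bullfrog DNA
    return "second round of PCRs"       # universal failed, or specific amplified

print(validate_sample(universal_pos=3, specific_pos=0))  # -> negative
print(validate_sample(universal_pos=0, specific_pos=0))  # -> second round of PCRs
```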
2019 Survey
Despite tripling the sampling effort from 2018, traditional methods (trapping and visual and acoustic tracking) failed to detect the American bullfrog inside and outside of the ground zero area.
Universal amphibian primers (16SA-L + 16SB-H) were amplified in at least two out of six replicates in all sampled locations, discarding the presence of PCR inhibitors that could lead to false negative results in the specific amplifications (cytbF1 + cytbR2). No amplification of bullfrog DNA was observed in any water sample at any dilution in these 2019 samples.
2020 Surveys
One deceased specimen was found in the DNA10 location in June 2020, and sampling via traditional methods was intensified close to this location, in ground zero and surrounding areas. This increased effort detected only one other deceased individual in Cubeta 3 (ground zero) four days after the eDNA survey, and another individual was also heard in Cubeta 3, in an acoustic survey at the end of July.
June eDNA Survey
Universal amphibian primers (16SA-L + 16SB-H) were not amplified in the 1:1 eDNA extractions, but amplification was positive in at least two out of the six replicates of the 1:10 and 1:100 dilutions in six locations (all locations except DNA4, DNA5, and DNA9). For these three sites, a second round of PCRs was performed, excluding the 1:1 eDNA extractions and including three replicates of the 1:1000 dilution. This second round of amplification was positive in two to five out of six replicates per sample.
Initially, the American-bullfrog-specific PCRs amplified in only one replicate, for the 1:100 dilution of the DNA9 sample (Table 3). Then, a second round of PCRs was performed for all of the locations, including three replicates of the 1:1000 dilutions. Amplification was then successful for three replicates in DNA5 and four in DNA9 (Figure 1). The overall amplification success in these samples for the June survey was thus 3/16 in DNA5 and 5/16 in DNA9 (Table 3).
July eDNA Survey
Samples from the second survey (July 2020) had two to three PCR replicates of the 1:10, 1:100, and 1:1000 dilutions, with a total of eight replicates per sample (Table 3). Universal amphibian amplification was successful, with two to five positive replicates per sample, in all samples except DNA22, DNA23, and DNA26. A second round of universal PCRs performed for these three locations yielded positive amplifications in four to five replicates per sample.
American bullfrog eDNA was successfully amplified in eight (out of the twenty-three) locations, with one to five positive replicates per sample. A second round of PCRs confirmed bullfrog eDNA in four of these eight positive locations, plus in two more locations (DNA5 and DNA9). However, cross-contamination was detected in a negative control of the first round of the cytbF1 + cytbR2 PCR. These results were excluded, and the alternative primer set (cytbF2 + cytbR1) was used to confirm all positive results. The cytbF2 + cytbR1 combination amplified American bullfrog eDNA in five out of the twenty-three samples, all of which were also amplified in the first or second round of PCR with the primer set cytbF1 + cytbR2 (Table 3).
In summary, the eDNA survey indicated the presence of the American bullfrog in 10 (at least one positive PCR) out of 23 locations. However, in some of the July 2020 samples (DNA1, 3, 9, and 19), bullfrog eDNA was only detected in one replicate, despite DNA9 being positive (5/16) in the June survey; bullfrog detection was confirmed in the DNA1, 3, and 19 samples with the primer set cytbF1 + cytbR2. In five of the ten positive samples (DNA1, 3, 4, 19, and Cubeta 3), amplifications of American bullfrog eDNA were positive with both tested primer sets (Table 3).

Table 3. Number of positive replicates for the specific amplification of American bullfrog eDNA with respect to the total number of replicates, in samples from the 2020 surveys. NEG: no amplification. The number of replicates for each tested dilution (1:10, 1:100, 1:1000) is indicated in parentheses. F1 + R2 means that PCRs were performed with the cytbF1 and cytbR2 primer set. F2 + R1 means that PCRs were performed with the alternative set of primers cytbF2 + cytbR1. All specific amplifications were validated by positive amplification with the amphibian universal primers in the first or second round (samples that were amplified in the second round of universal PCRs are indicated with an asterisk (*)). Note: In the June survey, a second round of the specific PCR was also performed in DNA10, because it was in this location that the adult individual was found. In the July survey, we included DNA9 in the second round of the specific PCR because it was positive in the June survey.
Challenges of eDNA to Track Invasive Species
Because traditional surveys detected the species only occasionally, in just one or two locations in the Delta, it is clear from our results that eDNA provided higher sensitivity with a lower sampling effort. eDNA-based methods to monitor invasive species in aquatic ecosystems have been designed and applied successfully to detect these species even at very low densities [35,42,43]. In our study, this methodology allowed us to infer a more complete picture of the extent of the invasion process in its first stage, as has been reported in previous studies [29,35,44]. Moreover, the modest logistical requirements of eDNA sampling and the persistence of eDNA in the environment beyond the presence of the species are further arguments strengthening the use of eDNA methods. However, similarly to other monitoring methods, eDNA methodology also has several critical points along the whole process, from sampling to the interpretation of the data [45].
Environmental DNA Capture
A first challenge of eDNA assays is water filtration with cellulose nitrate filters. Filters with a small pore size are strongly recommended for eDNA sampling, as they optimize eDNA capture at low concentrations [37,46]. However, in turbid water bodies with a lot of organic matter or suspended sediment, filters clog quickly and the filtering rate is so slow that it becomes impossible to filter an optimal volume (at least 500 mL), which is especially required when eDNA is scarce. Several alternatives have been suggested: increasing the pore size, pre-filtering samples, or reducing the water volume of samples. All of them lead to lower yields of target DNA [37,47]. Alternatively, Hunter et al. [47] increased the filtered volume and obtained a higher DNA yield by combining several filters in a single phenol-chloroform-isoamyl DNA extraction. We adapted this solution to the DNeasy Blood & Tissue Kit (Qiagen, Hilden, Germany), which is recommended for eDNA [48]. The total desired volume was filtered using as many filters as necessary (two in our case), which were then preserved and processed separately until the eDNA was transferred to the spin columns of the kit. Thus, the digestion volume of the two filters belonging to the same location was collected in a single DNeasy mini spin column. This modification of the DNA extraction protocol allowed us to recover the eDNA from a total volume of 500 mL, avoiding problems of filter clogging.
PCR and False Positive/Negative Results
The design of primers may be related to false positive and false negative results of the eDNA protocols. In this sense, the length of the fragment might be a critical issue. On the one hand, if the fragment is very short, the risk of amplification artefacts and sporadic contamination (both causing false positive results) is higher. On the other hand, very long DNA sequences are prone to false negatives as long templates do not persist in the environment [49]. Therefore, most published papers with water sampling have diagnostic PCR fragment sizes shorter than 150 bp [35,50,51].
Although we took precautions to reduce the risk of contamination (DNA extraction in a separate room, PCR setup in a UV chamber, and amplification of negative and positive controls), cross-contamination detected in some negative controls made us exclude some 'positive' results and use an alternative primer set for the specific PCR.
False positive results could also be due to the persistence of eDNA when the species had already disappeared from the water body. Dejean et al. [35] proved that bullfrog eDNA persisted in freshwater ecosystems for a maximum of two weeks after animal removal.
Nevertheless, the main problem to face was avoiding false negatives caused by the presence of PCR inhibitors. This is particularly concerning when sampling turbid waters such as those from the wetlands of the Ebre Delta. Several protocols have been proposed to improve eDNA yield, such as adding chemical compounds or performing mechanical processes to remove inhibitors during DNA extraction [47,52]. An alternative solution is the dilution of eDNA extractions to reduce inhibitors. This simple method avoids the economic cost of removing potential PCR inhibitors. However, this approach can be problematic when DNA concentrations are low, because diluting the extractions also reduces the DNA concentration and hence the sensitivity of the PCR assays [52,53]. The negative effect of inhibitors can also be assessed using a second PCR with universal primers. In our case, a set of unspecific amphibian primers (16SA-L and 16SB-H) was used to amplify all eDNA extractions. As other species of amphibians (mainly Pelophylax perezi) were expected in all of the sampled locations, negative results were indicative of inhibition. Our positive results from the universal PCR show that dilution avoided inhibitor effects in all of the samples. Previous results from McKee et al. [52] show that a 10-fold dilution is enough to reduce qPCR inhibition effectively. In most of our samples, a 1:100 dilution was necessary to avoid the effects of inhibitory compounds. As discussed just below, this 1:100 dilution did not compromise the overall sensitivity of the PCR assay because several replicates were simultaneously amplified (at least two for each dilution). Therefore, the possibility of false negatives due to the random and unequal distribution of the very few DNA molecules in the dilutions was avoided.
Replicates and Threshold of Positivity
The last critical point when using eDNA is choosing a suitable number of replicates and the threshold of positive tests needed to consider the presence of the species certain. The detection of alien invasive species relying solely on DNA-based methods has been controversial, especially when such detection can result in costly management implications [45]. In this context, performing an optimal number of replicates to avoid missed detections and setting the minimum number of positive replicates to avoid false positives are both strictly necessary. To assess these parameters, previous studies have calculated the detection probabilities of eDNA analyses. However, this is only possible when eDNA results can be compared with the outcomes of traditional detection methods or with experiments under controlled conditions [29,35]. In our case, since the eDNA studies started, visual and acoustic surveys detected only a couple of specimens in a very localized area. Therefore, the detection probability of our PCR assay could only be compared qualitatively to traditional methods and could not be tested empirically.
In general, the detection probability is not high when the density of the species is low. Ficetola et al. [54] recommend at least eight PCR replicates to avoid false negatives when the detection probability is lower than 0.5. Goldberg et al. [53] conducted controlled experiments with different densities of the invasive New Zealand mudsnail (Potamopyrgus antipodarum) and used three replicates for each density treatment to reach the detection of even one individual in 1.5 L of water. In these scenarios with several replicates per sample, it is important to consider the possibility of cross- or sporadic contamination. Thus, to avoid false positives, Taberlet et al. [55] recorded an allele only if it was observed in at least two out of ten replicates when analyzing samples with little DNA. More recently, Ficetola et al. [54] suggested the same strategy in eDNA metabarcoding studies, remarking that a sufficient number of replicates is necessary to avoid false negatives with low detection probabilities.
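The rationale behind these replicate numbers follows from treating replicates as independent Bernoulli trials: the cumulative detection probability over n replicates is 1 − (1 − p)^n for a per-replicate detection probability p. The Python sketch below makes this explicit; p = 0.3 is an assumed value for illustration, not an estimate from our data.

```python
import math

def prob_detect(p, n):
    """Cumulative detection probability for n independent replicates."""
    return 1.0 - (1.0 - p) ** n

def replicates_needed(p, target=0.95):
    """Smallest n reaching the target cumulative detection probability."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

print(round(prob_detect(0.3, 8), 3))   # 0.942: eight replicates at p = 0.3
print(replicates_needed(0.3))          # 9 replicates for a 0.95 target
```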
According to the pipeline described in the methods to validate results, and considering all of the replicates together, 14-16 specific PCRs per location were performed in the 2020 surveys. Of these amplifications, negative results in most of the 1:10 dilutions suggested the presence of PCR inhibitors at this dilution. Accordingly, replicates of the 1:10 dilutions were abandoned and the analysis was performed with 12 replicates instead. Following the recommendations from previous studies [54], a positive identification was made when the specific PCR amplified at least two of the twelve replicates. Under these criteria, there were two positive locations in the June 2020 survey and seven in the July survey.
Moreover, Sepulveda et al. [56] suggested using different PCRs targeting different genomic locations. This would increase the reliability of positive detections. In our case, an alternative primer set of the specific PCR (cytbF2 + cytbR1) was used to confirm positive amplifications of American bullfrog eDNA in samples of the most recent survey (July 2020). This approach confirmed the detection of the American bullfrog in five locations (Table 3).
Early Invasion of the American Bullfrog in the Ebre Delta Revealed by eDNA
According to the arguments discussed so far, several restrictions should be, and have been, applied in the interpretation of the American bullfrog eDNA assay undertaken in the Ebre Delta. Even under the most conservative scenario, the presence of this species in at least five locations can be confirmed in the last survey (July 2020, Figure 1). The detection of the species in several locations through eDNA analyses contrasts with the very local detection of only two individuals through visual or acoustic surveys. This suggests an early first stage of the invasive process [24], and it shows once again that eDNA assays improved detection sensitivity with respect to the traditional methods, with a much lower sampling effort [35].
Interestingly, the highest number of positive replicates was found in the ground zero area, where the first bullfrog tadpoles were observed in June 2018 (Table 1 and Figure 1). However, the 2019 eDNA sampling was negative, and the presence of this species in the Delta was not reported again until June 2020, in a place seven kilometers from ground zero. In 2020, a single deceased specimen was found in the DNA10 location, and the eDNA analyses again confirmed the incipient introduction of this species. Curiously, the eDNA survey of June 2020 failed to detect the species in the DNA10 location, but it was detected in DNA9, where waters from the DNA10 region are collected. Within the ground zero area, a single deceased specimen was found and another individual was heard in July 2020, after the last eDNA survey (Table 1). These records and the eDNA results suggest two alternative hypotheses regarding the American bullfrog invasion in the Ebre Delta. First, it is possible that the eradication plan carried out in 2018 in the ground zero area was sufficient to eliminate tadpoles, but some post-metamorphic terrestrial individuals survived and escaped from this area before the construction of the metallic fence was completed (end of July 2018). Then, if a few individuals survived but did not become established, the bullfrog density in 2019 might not have been high enough to be detected even by eDNA analyses (either because the concentration of eDNA was too low or because the sampling was not extensive enough). Detections in 2020 should then be attributed to these surviving individuals. The concentration of positive detections in the ground zero area in the 2020 survey could suggest that, two years after the first introduction, the invasive American bullfrog persisted at the first site of detection. Alternatively, it is possible that at least some individuals came back to the original site of introduction to reproduce. The fact that 29 post-metamorphic juveniles (>200 g) were captured in autumn 2018 outside of the fence surrounding the ground zero area supports this hypothesis. The second invasion focus that seemed to appear in 2020, close to the place where a dead specimen was found, could have originated from individuals that spread out from the ground zero area.
Alternatively, it is possible that the 2018 eradication plan was successful and that the American bullfrog was eradicated. In that case, the two specimens found and the positive eDNA results in 2020 would correspond to a new invasion process, or possibly to multiple introductions at the two invasion foci. In other countries, repeated intentional releases of the American bullfrog have been documented [16,57], and the same pattern could be taking place in the Ebre Delta. European legislation prohibits introductions of the American bullfrog, and its commercial farming is completely forbidden in the Iberian Peninsula (Royal Decree 630/2013, 2 August). However, this legislation could change if the species became established. For instance, the Autonomous Government of Catalonia (SRM/1/2019, 17 May 2019) has changed its law and recently allowed the commercialization of the blue crab Callinectes sapidus (Rathbun, 1896), another invasive species in the Ebre Delta [58]. Therefore, a hypothetical premeditated release of bullfrogs could be related to the gastronomic and economic potential of American bullfrog commercialization and the possibility of a law change if the species is successfully introduced.
The failed attempts at American bullfrog establishment in the Delta should not lower our guard. Blackburn et al. [59] link the success of the three stages of the invasion process (introduction, establishment, and spread) to several critical aspects. Specifically, the number of invasive specimens plays an important role in the first stage of the invasion, because a larger number of transported individuals increases the probability of success. The number of arriving individuals at the second stage is also important, as a small number of individuals leads to reduced genetic diversity that can compromise the process of adaptation to the new habitat. For the American bullfrog, Ficetola et al. [60] suggested that an extremely low number of founder specimens can be enough for a successful invasion process. For instance, it is estimated that the Italian invasive population descended from only two females and one male introduced in 1930. Success at the second stage also requires that the new habitat has conditions similar to those of the native one [61]. Ficetola et al. [62] suggest that certain environmental factors (mainly related to the climate) are critical in determining the probability of establishment of the American bullfrog. The projection of the environmental suitability for the bullfrog made by these authors (see Figure 3 in [62]) indicates the south and the west of the Iberian Peninsula as being less suitable for bullfrog establishment. This could explain why, although some individuals have been reported in these regions, they have never become invasive [18,19,62]. However, the situation is quite different in the northeast (including the Ebre Delta region), where the environmental suitability for bullfrog establishment reaches up to 50/100, according to Ficetola et al. [62].
The monitoring of the American bullfrog in the Ebre Delta through classical and molecular tools is expected to continue in the upcoming years, at least for some years after both methodologies have ceased to detect the species. We have started extending sampling to previously unsurveyed areas, and the preliminary results were negative, but more sampling effort is still necessary.
Conclusions
Our eDNA analyses allowed us to delimit the extent of the invasion of the American bullfrog in the Delta, yielding a higher sensitivity with a lower sampling effort than traditional methods. In this context, eDNA assays are essential tools to facilitate the detection, control, and eradication of the species in the first stage of the invasion process in the Ebre Delta. Even at a low population density, the American bullfrog may represent a high level of risk for the conservation of biodiversity in the Ebre Delta ecosystem, a fragile ecosystem already endangered by climate change and the establishment of other invasive species [58,63,64]. In such a situation, Darling and Mahon [45] stated that, despite controversial arguments, DNA-based methods might be the only tool to promote management actions before unacceptable invasion risks are assumed.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani13040683/s1, Figure S1: Picture of one of the sampling locations in the Ebre Delta; Figure S2: Example of an agarose gel of an American bullfrog PCR with positive locations (DNA1, 3 and 4). Negative control of extraction (-), positive control (+), molecular weight (PM) and negative control of PCR (-) are included. Note: bands with the highest mobility correspond to primer dimers and appear in every lane except the positive control. Data Availability Statement: All raw data will be freely available upon request to the authors.
"year": 2023,
"sha1": "c0ea391de8ed4b15d1d6c3f208241f7b10fc4d81",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "466e8f7a81ef7f6b0f6522d388e070ec454a92f1",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
Hydroxyethyl Starch–Bovine Hemoglobin Conjugate as an Effective Oxygen Carrier with the Ability to Expand Plasma
Hemorrhagic shock leads to intravasal volume deficiency, tissue hypoxia, and cellular anaerobic metabolism. Hemoglobin (Hb) can deliver oxygen to hypoxic tissues but is unable to expand plasma. Hydroxyethyl starch (HES) can compensate for the intravasal volume deficiency but cannot deliver oxygen. Thus, bovine Hb (bHb) was conjugated with HES (130 kDa and 200 kDa) to develop an oxygen carrier with the ability to expand plasma. Conjugation with HES increased the hydrodynamic volume, colloidal osmotic pressure, and viscosity of bHb. It slightly perturbed the quaternary structure and heme environment of bHb. The partial oxygen pressures at 50% saturation (P50) of the two conjugates (bHb-HES130 and bHb-HES200) were 15.1 and 13.9 mmHg, respectively. The two conjugates showed no apparent side effects on the morphology, rigidity, hemolysis, and platelet aggregation of red blood cells of Wistar rats. Thus, bHb-HES130 and bHb-HES200 were expected to function as effective oxygen carriers with the ability to expand plasma.
INTRODUCTION
Hemorrhagic shock can lead to tissue hypoxia, a shift of cellular respiration to anaerobic metabolism, and intravasal volume deficiency, which alter normal physiological function. 1,2 Blood transfusion is an effective method to alleviate hemorrhagic shock owing to its ability to deliver oxygen, expand the blood volume, and improve the microcirculation. 3,4 However, blood transfusion suffers from several disadvantages, such as the need for cross-matching, pathogen-borne diseases (e.g., AIDS and hepatitis C), allergic reactions, and shortages of blood supply. 5 Plasma expanders have been used in emergency transfusion for serious blood and fluid loss, circulation stabilization, and blood dilution. 6,7 Crystalloid solutions (e.g., normal saline) and colloid solutions (e.g., dextran, gelatin, albumin, and hydroxyethyl starch (HES)) have been clinically used as plasma expanders. 8,9 Compared with crystalloid solutions, colloid solutions effectively maintain the functional capillary density and expand the blood volume and osmotic pressure, thereby improving the microcirculation of tissues in hemorrhagic shock. 10,11 HES is a highly branched, semi-synthetic amylopectin bearing ether-linked hydroxyethyl groups. 11,12 As a colloid solution, HES is an effective plasma expander with wide application in the clinical therapy of intravasal volume deficiency. 13,14 However, clinical therapy with HES can result in allergic reactions, bleeding defects, and platelet damage. 15 The clinical defects of HES depend on its Mw and degree of substitution. For example, HES450 (450 kDa) and HES700 (700 kDa) lead to bleeding complications and pruritus. 16 HES70 (70 kDa) shows a low volume expansion ability owing to its rapid renal elimination, whereas HES130 (130 kDa) and HES200 (200 kDa) induce severe tissue hypoxia. 17 Moreover, hypoxic shock is not fully alleviated by transfusion of plasma expanders (e.g., HES) because they cannot provide oxygenation for the hypoxic tissues.
Hemoglobin (Hb), a tetrameric protein, can deliver and release oxygen in a cooperative manner. 18 Hb in red blood cells (RBCs) acts as an oxygen carrier to alleviate the tissue hypoxia brought about by hemorrhagic shock. 19 However, cell-free Hb suffers from renal toxicity and a short plasma retention time in vivo due to its relatively small molecular size and tetrameric dissociation. 20,21 Hb-based oxygen carriers (HBOCs) overcome the disadvantages of Hb through their large molecular size and can deliver oxygen to hypoxic tissues. 22,23 Several HBOCs have been developed for surgery or emergency transfusion, including polymerized Hb, PEGylated Hb, and dextran−Hb conjugates (dex-Hb). 24 Polymerized Hb was prepared by cross-linking tetrameric Hb with glutaraldehyde. PEGylated Hb and dex-Hb were prepared by conjugation of Hb with polyethylene glycol (PEG) and dextran, respectively. 25 To treat hemorrhagic shock, which induces tissue hypoxia and intravasal volume deficiency, a solution with the ability to both deliver oxygen and expand the blood volume is highly desired. However, some HBOCs (e.g., polymerized Hb) cannot expand the blood volume because of their low colloid osmotic pressure (COP). 26,27 PEGylated Hb and dex-Hb can act as both oxygen carriers and plasma expanders owing to their ability to deliver oxygen and to expand the blood volume through their high COP. 28,29 However, the high COP of PEGylated Hb (>100 mmHg at 40 mg/mL) might lead to intracellular fluid loss and hypertonic dehydration. 29 Moreover, PEGylation of proteins can elicit anti-PEG immunity. 30,31 In addition, dex-Hb may lead to allergies and renal failure. 32 Conjugation of HES with Hb is an effective method to solve the problems of both HES and Hb. 33,34 It is expected that such a conjugate can deliver and release oxygen to treat hypoxic tissues owing to the presence of Hb, and can also compensate for the intravasal volume deficiency owing to the presence of HES. Bovine Hb (bHb) shows 85% homology to human adult Hb (HbA) and has no quantity constraints given its ample supply and controllable quality. 35 HES130 and HES200 both have a moderate Mw and fewer side effects.
In the present study, two bHb-HES conjugates (bHb-HES130 and bHb-HES200) were prepared by conjugation of bovine Hb (bHb) with HES130 and HES200, respectively. The structure, heme environment, and oxygen delivery properties of the conjugates were measured to evaluate their effectiveness as oxygen carriers. The COP and viscosity of the conjugates were measured to evaluate their effectiveness as plasma expanders. The effects of the conjugates on the morphology, hemolysis, and platelet aggregation of red blood cells of Wistar rats were determined.
2.3. Purification of bHb.
Fresh bovine blood was obtained from a local slaughterhouse. The blood was centrifuged at 10,000g for 5 min at 4°C to remove the serum. The pellet was resuspended in PBS buffer (pH 7.4) at the original volume. This process was repeated three times to remove as much serum as possible. The remaining red blood cells were then lysed overnight at 4°C in an equal volume of distilled water. The solution was then centrifuged at 10,000g for 60 min at 4°C to remove cell debris. The solution was dialyzed against 50 mM Tris−HCl buffer (pH 8.5) and loaded onto a Q Sepharose High Performance column (2.6 cm × 20 cm, GE Healthcare, USA). 36 The column was equilibrated with five column volumes (CVs) of 50 mM Tris−HCl buffer (pH 8.5) and eluted by a pH gradient (pH 8.5−6.5) in 50 mM Tris−HCl buffer. The peak corresponding to bHb was fractionated.
2.4. Preparation and Purification of the Conjugates. HES130 (50 mg/mL) and HES200 (50 mg/mL) were oxidized by 20 mM sodium meta-periodate in 20 mM sodium acetate buffer (pH 5.6). The mixtures were incubated at room temperature for 30 min in the dark, and the reaction was terminated by the addition of excess ethylene glycol, followed by extensive dialysis against PBS buffer (pH 7.4). bHb was incubated with the oxidized HES130 and NaCNBH3 at a molar ratio of 4:3:300 at 4°C overnight. Glycine was added at a glycine−Hb molar ratio of 20:1 to terminate the reaction and obtain the bHb-HES130 conjugate (bHb-HES130). The bHb-HES200 conjugate (bHb-HES200) was prepared essentially in the same way as bHb-HES130, except that HES200 was used.
A Superdex 200 column (2.6 cm × 60 cm, GE Healthcare, USA) was used to purify the conjugates based on size exclusion chromatography (SEC). The column was equilibrated and eluted by PBS buffer (pH 7.4) at a flow rate of 2.0 mL/min. The effluent was monitored at 280 nm. Due to the difference in the size of the conjugates, Hb, and HES, the three components in the reaction mixtures were well separated by the column. The peaks corresponding to the two conjugates were fractionated.
2.5. Quantitative Assay. The concentrations of oxy-, deoxy-, and methemoglobin were calculated from their absorbance at three wavelengths (630, 576, and 560 nm). 37 The total bHb concentration was obtained from the summation of the concentrations of the three components. The HES contents of the conjugates were determined by the phenol-sulfuric acid method. 38 The bHb/HES molar ratios of the conjugates were thus calculated by comparison of the bHb and HES contents.
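The multicomponent quantification described here amounts to solving a small linear system (Beer−Lambert law) at the three wavelengths. The Python sketch below illustrates the calculation; the extinction-coefficient matrix and absorbance values are placeholders, since the actual coefficients come from ref 37 and are not reproduced in the text.

```python
import numpy as np

# Rows: absorbance at 560, 576, 630 nm; columns: oxyHb, deoxyHb, metHb.
# PLACEHOLDER millimolar extinction coefficients -- the real values are in ref 37.
E = np.array([[ 8.5, 11.0, 4.0],
              [15.0,  9.0, 4.5],
              [ 0.1,  1.0, 3.9]])

A = np.array([0.42, 0.61, 0.05])       # hypothetical absorbances, 1 cm path
c = np.linalg.solve(E, A)              # species concentrations (mM)
print(dict(zip(["oxyHb", "deoxyHb", "metHb"], np.round(c, 4))))
print("total bHb (mM):", round(c.sum(), 4))
```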
2.6. Size Exclusion Chromatography Analysis. bHb-HES130 and bHb-HES200 were analyzed by an analytical Superose 6 column (1 cm × 30 cm, GE Healthcare, USA) at room temperature. The column was extensively equilibrated and eluted by PBS buffer (pH 7.4) at a constant flow rate of 0.5 mL/min. The effluent was detected at 280 nm.
2.7. Thiol Reactivity. The thiol reactivities of bHb-HES130 and bHb-HES200 were estimated by measuring the conversion of 4-PDS to 4-thiopyridone at 324 nm as a function of incubation time. 39 The thiol groups of Cys-93(β) in bHb-HES130 and bHb-HES200 were calculated by evaluation of the 4-thiopyridone content.
2.8. Circular Dichroism Spectroscopy. A J-810 spectropolarimeter (Jasco, Japan) was used to record the circular dichroism (CD) spectra of bHb-HES130 and bHb-HES200 at 25°C. For the near-UV spectra (480−260 nm), bHb-HES130 and bHb-HES200 were both at a protein concentration of 2.0 mg/mL in 20 mM sodium phosphate buffer (pH 7.4). The spectra were obtained with an average of three repeated scans using a cuvette with a 1 mm path length. The molar ellipticity (θ) was expressed in degree square-centimeter per decimole on a heme basis.
2.9. Dynamic Light Scattering. The molecular sizes of bHb-HES130 and bHb-HES200 were determined by dynamic light scattering based on a Zetasizer Nano ZS (Malvern Panalytical, UK). bHb-HES130 and bHb-HES200 were both at a protein concentration of 1.0 mg/mL in 20 mM sodium phosphate buffer (pH 7.4).
2.10. Extrinsic Fluorescence Measurement. bHb-HES130 and bHb-HES200 were mixed with 10-fold molar ANS at a final protein concentration of 0.1 mg/mL in 20 mM sodium phosphate buffer (pH 7.4). The resultant samples were determined by extrinsic fluorescence spectroscopy using an F-4500 fluorescence spectropolarimeter (Hitachi, Japan). The emission spectra were excited at 350 nm and recorded from 400 to 650 nm. The excitation and emission slit widths were 10 and 20 nm, respectively.
2.11. Oxygen Affinity. Oxygen equilibrium curves of bHb-HES130 and bHb-HES200 were recorded with a Hemox analyzer (TCS Scientific, USA) at 37°C, as described elsewhere. 40 bHb, bHb-HES130, and bHb-HES200 were all at a protein concentration of 1.5 mg/mL in the Hemox buffer. The P 50 values were obtained directly from the curves. The Hill coefficient (n) was calculated from the Hill plot (log Y/(1 − Y) vs log P) where Y was the fractional saturation of Hb with oxygen and P was the oxygen pressure in millimeters of mercury (mmHg).
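The Hill analysis described above can be written compactly: the Hill plot log Y/(1 − Y) versus log P is linear with slope n and crosses zero at log P50. A minimal Python sketch is given below; it uses a synthetic saturation curve, not our measurements.

```python
import numpy as np

def hill_fit(pO2, Y):
    """Fit log(Y/(1-Y)) vs log(pO2); returns (Hill coefficient n, P50)."""
    x = np.log10(pO2)
    y = np.log10(Y / (1.0 - Y))
    n, b = np.polyfit(x, y, 1)         # slope n, intercept b = -n*log10(P50)
    return n, 10 ** (-b / n)

# Synthetic saturation curve with n = 2.8 and P50 = 30 mmHg (illustration only)
p = np.linspace(5.0, 120.0, 40)
Y = p**2.8 / (30.0**2.8 + p**2.8)
n, p50 = hill_fit(p, Y)
print(round(n, 2), round(p50, 1))      # -> 2.8 30.0
```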
2.12. Bohr Effect. The oxygen equilibrium curves of bHb-HES130 and bHb-HES200 were measured at a pH range of 7.2−7.6. The P 50 and Hill coefficient of bHb-HES130 and bHb-HES200 were obtained to evaluate their Bohr effects.
2.14. FT-IR Spectroscopy. bHb-HES130 and bHb-HES200 were dialyzed against water followed by lyophilization. The lyophilized samples were determined by a Nicolet iS50 FT-IR spectrometer (Thermo Fisher, USA) using KBr discs. The FT-IR spectra were recorded from 4000 to 400 cm −1 , and the interferograms were presented as transmittance.
2.15. Colloidal Osmotic Pressure Measurements.
The colloidal osmotic pressures (COPs) of bHb-HES130 and bHb-HES200 at different protein concentrations (5−25 mg/mL) were measured by a Wescor 4420 Colloidal Osmometer at room temperature. The samples were dissolved in PBS buffer (pH 7.4). Each sample was measured three times. The instrument was calibrated by Osmocoll reference standards (Wescor).
2.16. Animals. Healthy male Wistar rats (220−260 g; Vital River, Beijing, China) were kept with ad libitum access to food and water. The rats were anesthetized by intraperitoneal injection of 50 mg/kg pentobarbital sodium (Chinese Medicine Group Chemical Agent, Beijing, China) and placed in the supine position on a warming pad (TMS-202, Softron Biotechnology, Beijing, China) at 37 ± 0.1°C. Heparin (400 U/kg; Chinese Medicine Group Chemical Agent, Beijing, China) was administered via the carotid artery to inhibit coagulation. All experimental procedures were approved by the Laboratory Animal Center of the Academy of Military Medical Sciences (IACUC-DWZX-2022-631, Beijing, China). The research protocol adhered to the institutional guidelines for the care and use of laboratory animals. The following studies were not performed in live Wistar rats; rather, Wistar rat blood was used in vitro.
2.17. Viscosity. bHb, bHb-HES130, and bHb-HES200 were each mixed with whole blood from the Wistar rats at a ratio of 1:5 (v/v) at room temperature. The supernatant was obtained by centrifugation of the mixtures at 2000 rpm for 10 min. The mixtures (500 μL) and the corresponding supernatants (500 μL) were used for the viscosity measurement. bHb, bHb-HES130, and bHb-HES200 were all in the range of 5−15 mg bHb/mL (500 μL) in 20 mM sodium phosphate buffer (pH 7.4). The viscosity was measured at a shear rate range of 50−200 s−1 at 37°C using a rheometer (Brookfield Engineering, USA). Each sample was measured three times.
2.18. Index of Rigidity. The effect of bHb-HES130 and bHb-HES200 on the deformability of red blood cells can be reflected by the index of rigidity (IR). 41 IR was calculated by the following formula: IR = [(ηh − ηp)/ηp] × (1/Hct), where ηh is the viscosity of the whole blood and ηp is the viscosity of the mixture of the sample and whole blood at a ratio of 1:5 (v/v). The viscosity was measured at a shear rate of 200 s−1. The mean value of Hct in whole blood was 41.6%. The Hct values of the other groups were calculated according to the volume mixing ratio.
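As a worked illustration of this formula, the following Python sketch computes IR from hypothetical viscosity readings; the numerical inputs are invented for demonstration and do not correspond to measured values.

```python
def index_of_rigidity(eta_h, eta_p, hct):
    """IR = (eta_h - eta_p) / eta_p * 1/Hct, with Hct as a fraction."""
    return (eta_h - eta_p) / eta_p / hct

# Hypothetical viscosities (mPa*s) at a 200 s^-1 shear rate, Hct = 41.6%
print(round(index_of_rigidity(eta_h=6.1, eta_p=2.9, hct=0.416), 2))  # -> 2.65
```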
2.19. Platelet Aggregation. Two female Wistar rats (∼300 g) were anesthetized by intraperitoneal injection of 2.5% pentobarbital sodium solution. Blood was collected and placed in a tube containing 3.2% trisodium citrate. All experimental procedures were approved by the Laboratory Animal Center of the Academy of Military Medical Sciences (IACUC-DWZX-2022-631, Beijing, China). The platelet-rich plasma (PRP) was obtained by centrifuging one aliquot of the whole blood at 100g for 10 min. The platelet-poor plasma (PPP) was acquired by centrifugation of one aliquot at 2000g for 5 min.
bHb, bHb-HES130, and bHb-HES200 (10 μg bHb/μL, 15 μL) were mixed with PRP (210 μL) or PPP (235 μL) in a cuvette. The mixtures were incubated with constant shaking at 100 rpm for 15 min at room temperature. All the samples were placed in a thermostatic well of a platelet aggregometer (Helena AggRAM, USA) and incubated at 37°C for 15 min. Distilled water was used to calibrate the instrument. The mixtures containing PPP were measured directly. The mixtures containing PRP were measured after the addition of 50 μM adenosine diphosphate (ADP, 25 μL). The spectra of the aggregation percentage were recorded by HemoRam software (Version 1.3). The maximal aggregation percentage, which reflects the aggregation rate of platelets, was obtained directly from the spectra. 42

2.20. Hemolysis Rate. The red blood cells (RBCs) from the Wistar rats were washed three times using normal saline solution and centrifuged at 2000g for 10 min. bHb-HES130 and bHb-HES200 (30 μL) were mixed with RBCs (300 μL) at 30% hematocrit (Hct). The mixtures were incubated for 1 h at 37°C to obtain the cell suspensions. Each suspension (150 μL) was added to normal saline (NS) solution (400 μL). The total Hb concentration (ctHb) was measured by a BC-500 veterinary whole blood analyzer (Mindray, China). The mixtures were centrifuged at 2000g for 10 min to obtain the supernatant. Each supernatant was mixed with a chromogenic reagent and incubated for 20 min at 37°C using a free hemoglobin assay kit. Distilled water was used as the control, and the absorbance at 510 nm was determined. Each measurement was repeated three times. The hemolysis rate was calculated by the following formula: Hemolysis rate = Free Hb concentration × (1 − Hct)/ctHb.
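The hemolysis calculation is likewise a one-line formula; the Python sketch below applies it to hypothetical inputs (the concentrations shown are invented for illustration).

```python
def hemolysis_rate(free_hb, ct_hb, hct):
    """Hemolysis rate = free Hb * (1 - Hct) / ctHb, with Hct as a fraction."""
    return free_hb * (1.0 - hct) / ct_hb

# Hypothetical inputs: 0.8 g/L free Hb, 95 g/L total Hb, 30% hematocrit
print(f"{hemolysis_rate(0.8, 95.0, 0.30):.2%}")  # -> 0.59%
```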
2.21. Blood Cell Morphology.
The morphology of RBCs in the presence of bHb-HES130 and bHb-HES200 was examined using an inverted optical microscope (RVL-100-G, ECHO, USA). bHb, bHb-HES130, and bHb-HES200 (0.3 mg bHb) were incubated with RBCs at 0.6% Hct for 30 min. The specimens were prepared by dropping the mixtures (20 μL) onto a slide followed by covering with a cover glass. The specimens were observed under the inverted optical microscope, and photographs were taken at 40× magnification.
3.1. SDS-PAGE Analysis.
HES130 and HES200 were conjugated with bHb to generate bHb-HES130 and bHb-HES200, respectively. As shown in Figure 1a, bHb (Lane 2) displayed a single band corresponding to one globin subunit of bHb (16 kDa). This was due to the dissociation of the tetrameric bHb (64 kDa) into its globin subunits under the electrophoresis conditions. bHb-HES130 (Lane 3) and bHb-HES200 (Lane 4) both exhibited a band with much lower mobility than bHb, corresponding to a molecular weight of over 200 kDa. The oxidized HES contains multiple aldehyde groups, and bHb contains multiple amino groups. The multiple aldehyde groups of one HES molecule may react with the amino groups of several Hb subunits. Thus, the four subunits of Hb may be intramolecularly cross-linked by the conjugated HES, which can prevent the dissociation of the bHb subunits.
3.2. Dynamic Light Scattering Analysis.
The molecular radii of bHb-HES130 and bHb-HES200 were determined by dynamic light scattering. As shown in Figure 1b, the molecular radius of bHb-HES130 (8.66 nm) was larger than that of bHb (2.81 nm, P < 0.05) and smaller than that of bHb-HES200 (9.25 nm, P < 0.05). This revealed that the molecular radius of bHb could be significantly enhanced by conjugation with HES.
3.3. Quantitative Assay. The Hb and HES contents of the two conjugates were measured. The Hb/HES molar ratios of bHb-HES130 and bHb-HES200 were calculated to be 3.3:1 and 3.5:1, respectively. Thus, one HES molecule was conjugated with 3−4 bHb molecules in one entity. This suggested that HES could be conjugated with multiple bHb molecules and the bHb amount was comparable in the two conjugates.
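The molar-ratio calculation behind these numbers simply converts the measured mass contents to moles using the nominal molecular weights (64 kDa for tetrameric bHb). The sketch below illustrates this; the mass values are hypothetical and chosen only to reproduce a ~3.3:1 ratio.

```python
def hb_hes_molar_ratio(hb_mass, hes_mass, hes_mw, hb_mw=64_000):
    """Hb/HES molar ratio from mass contents (same mass units for both)."""
    return (hb_mass / hb_mw) / (hes_mass / hes_mw)

# Hypothetical contents chosen to reproduce the reported ~3.3:1 ratio
print(round(hb_hes_molar_ratio(hb_mass=1.62, hes_mass=1.0, hes_mw=130_000), 2))
```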
3.4. Size Exclusion Chromatography Analysis. bHb-HES130 and bHb-HES200 were both analyzed by an analytical Superose 6 column (1 cm × 30 cm). As shown in Figure 1c, bHb was eluted as a single peak at 17.5 mL. In contrast, bHb-HES130 was eluted as a wide and asymmetric peak at 15.7−16.3 mL that was left-shifted compared to bHb, due to the wide molecular-weight distribution of the conjugated HES130. Moreover, bHb-HES200 was eluted as a wide peak at 13.7−14.3 mL, which was left-shifted compared to bHb-HES130. This was due to the conjugation of HES200 with its larger molecular size.
3.5. Thiol Reactivity.
The thiol reactivity of Cys-93(β) in the oxy state of Hb was an indicator of a structural change at the α1β2 interface of Hb. 43 Thiol reactivities of Cys-93(β) in the conjugates were measured by titration with 4-PDS. As shown in Figure 1d, the thiol reactivity of bHb-HES130 was slightly lower than that of bHb and higher than that of bHb-HES200. Thus, conjugation with HES resulted in slight structural perturbation at the α1β2 interface of bHb. Moreover, the thiol groups of the three samples were close to 2.0, indicating that the two thiol groups were essentially maintained in the conjugates.
3.6. Circular Dichroism Spectroscopy. The structures of the conjugates were investigated by CD spectroscopy. The L band (∼260 nm) is sensitive to the interaction between the heme and the surrounding globin and is influenced by ligand interactions. As shown in Figure 2a, the L band intensities of the conjugates were higher than that of bHb, while the intensity of bHb-HES130 was slightly lower than that of bHb-HES200. This indicated that conjugation with HES altered the interaction of oxygen with the heme of bHb. 44 The molar ellipticity at approximately 285 nm is indicative of the transition from the R (relaxed) state to the T (tense) state and is sensitive to the quaternary structure of Hb at the α1β2 interface. 45 As shown in Figure 2a, the ellipticity at approximately 285 nm of bHb-HES130 was slightly lower than that of bHb and slightly higher than that of bHb-HES200. This indicated that conjugation with HES could slightly perturb the structural transition of bHb from the R state to the T state and alter the oxygen delivery and unloading of bHb.
The Soret band of Hb reflected the interactions of the heme prosthetic group with the surrounding aromatic residues and modifications in the spatial orientation of these amino acids with respect to the heme, affecting porphyrin transitions and π−π* transitions in the surrounding aromatic residues. 46 As shown in Figure 2a, the conjugates both showed higher ellipticity than bHb in the Soret band region along with no shift in the maximal wavelength of the Soret band. In addition, the ellipticity of bHb-HES130 was slightly lower than that of bHb-HES200. This suggested that conjugation with HES could slightly perturb the heme environment of bHb.
3.7. Extrinsic Fluorescence Analysis.
The hydrophobicity of the conjugates was determined by extrinsic fluorescence spectroscopy using ANS as the probe. 47 As shown in Figure 2b, the spectrum of bHb-HES130 was almost superimposed on that of bHb-HES200. The fluorescence intensity of bHb was lower than those of the conjugates. This suggested that conjugation with HES altered the hydrophobicity of bHb.
Moreover, bHb and the two conjugates all showed a maximum emission wavelength of 473 nm.
3.8. UV−Vis Spectroscopy. UV−vis spectroscopy was used to analyze the conjugates. As shown in Figure 2c, bHb showed three characteristic peaks at 410, 540, and 576 nm. The spectra of the conjugates were almost superimposed on that of bHb. The three characteristic peaks indicated that the conjugates were fully oxygenated. In addition, no peak was observed at 630 nm in the spectra, which would have reflected the presence of methemoglobin. Moreover, the methemoglobin contents of the two conjugates were calculated to be zero from the absorbance at 560, 576, and 630 nm. 37

3.9. FT-IR Spectroscopy. FT-IR spectroscopy was used to characterize the conjugates. As shown in Figure 2d, the FT-IR spectrum of bHb showed characteristic peaks at 3300 cm−1 (N−H stretching), 1650 cm−1 (C=O stretching), and 1540 cm−1 (N−H wagging). The spectra of HES130 and HES200 showed characteristic peaks at 3300 cm−1 (O−H stretching), 2925 cm−1 (−CH2 asymmetric stretching), and 2851 cm−1 (−CH2 symmetric stretching). In particular, the peaks at 2925 and 2851 cm−1 were ascribed to the hydroxyethyl moieties of HES. The spectra of the conjugates showed characteristic peaks at 3300, 2925, 2851, 1650, and 1540 cm−1, indicating that the two spectra contained the signals of both HES and bHb. Moreover, the intensities of the conjugates at 1540 cm−1 were stronger than that of bHb due to the formation of secondary amine groups between HES and bHb. The intensities of the conjugates at 1650 cm−1 were significantly stronger than that of bHb, indicating that the conjugates still maintained the classical α-helix of bHb.
3.10. Oxygen Affinity Measurement.
The P 50 values of bHb-HES130 (15.1 mmHg) and bHb-HES200 (13.9 mmHg) were both lower than that of bHb (30.2 mmHg) under physiological conditions (pH 7.4). Thus, HES conjugation could decrease the P 50 values of bHb as a function of the HES size. However, the P 50 values of the conjugates indicated that they still exhibited certain ability for oxygen delivery and unloading. On the other hand, the Hill coefficients (n) of bHb-HES130 (1.74) and bHb-HES200 (1.67) were lower than that of bHb (2.79), indicating that the subunit cooperativity of bHb was decreased upon conjugation with HES. Although the P 50 and Hill coefficients decreased, the two conjugates could still provide oxygen supply to the hypoxic tissues.
3.11. Bohr Effect. The Bohr effect of bHb can be reflected by P50 values under different pH conditions. The P50 values of bHb at pH 7.2 and 7.6 were 30.2 and 23.2 mmHg, respectively, i.e., the P50 increased as the pH decreased. In comparison, the P50 values of bHb-HES130 at pH 7.2 and 7.6 were 19.5 and 12.8 mmHg, respectively, and the P50 values of bHb-HES200 at pH 7.2 and 7.6 were 15.1 and 11.6 mmHg, respectively. Thus, the P50 sensitivity of bHb-HES130 and bHb-HES200 to low pH was retained. This indicated that the protonation effect on the oxygen affinity of bHb was not altered by HES conjugation; that is, the Bohr effect of bHb was preserved upon conjugation with HES.
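One common way to express this pH sensitivity is the Bohr coefficient, Δlog10(P50)/ΔpH. The Python sketch below computes it from the P50 values reported above; note that this derived coefficient is our illustration and was not part of the original analysis.

```python
import math

def bohr_coefficient(p50_low, p50_high, pH_low=7.2, pH_high=7.6):
    """Bohr coefficient = dlog10(P50)/dpH (negative for a normal Bohr effect)."""
    return (math.log10(p50_high) - math.log10(p50_low)) / (pH_high - pH_low)

for name, lo, hi in [("bHb", 30.2, 23.2),
                     ("bHb-HES130", 19.5, 12.8),
                     ("bHb-HES200", 15.1, 11.6)]:
    print(name, round(bohr_coefficient(lo, hi), 2))
# -> bHb -0.29, bHb-HES130 -0.46, bHb-HES200 -0.29
```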
3.12. Colloidal Osmotic Pressure. The COP is important to maintain the water balance between the inside and outside of blood vessels and reflects the ability of a plasma expander to expand the blood volume. As shown in Figure 3a, the COP values of the conjugates increased as a function of the bHb concentration. Moreover, the COP of bHb-HES130 was higher than that of bHb and lower than that of bHb-HES200. In particular, the COP values of bHb-HES130 and bHb-HES200 at 25 mg/mL were 16.5 and 26.5 mmHg, respectively. Thus, the conjugates were expected to expand the blood volume and improve the blood circulation to hypoxic tissues.
3.13. Viscosity. As shown in Figure 3b, the viscosity of bHb-HES130 was higher than that of bHb and lower than that of bHb-HES200 at 5−15 mg/mL. Thus, conjugation with HES could significantly increase the viscosity of bHb. The viscosity of bHb slightly and linearly increased as the protein concentration increased. In contrast, the viscosities of the conjugates both exhibited a non-linear dependence on the protein concentration.
As shown in Table 1, the blood viscosities of bHb, bHb-HES130, and bHb-HES200 gradually decreased as the shear rate increased. In contrast, the plasma viscosities of the samples slightly decreased as the shear rate increased. Compared with the control, dilution with NS and bHb slightly decreased the blood viscosity and essentially maintained the plasma viscosity. In contrast, dilution with the conjugates significantly decreased the blood viscosity and slightly increased the plasma viscosity.
3.14. Index of Rigidity. The deformability of RBCs is conducive to their passage through capillaries and improves the microcirculation. The IR is an indicator used to evaluate the deformability of RBCs, and a high IR indicates a poor deformation capacity of RBCs. The IR value of RBCs was 12.22 ± 0.15. In contrast, the IR values of RBCs incubated with NS, bHb, bHb-HES130, and bHb-HES200 were 8.71 ± 0.10, 8.74 ± 0.10, 5.74 ± 0.07, and 5.19 ± 0.06, respectively. Thus, the conjugates showed low IR values.
3.15. Hemolysis Rate. As a parameter to characterize the hemocompatibility, the hemolysis rates of the conjugates were measured. As shown in Table 2, the hemolysis rates of HES130, HES200, bHb-HES130, and bHb-HES200 were all
slightly lower than that of bHb. This indicated that conjugation of HES could slightly decrease the hemolysis rate of bHb. Moreover, the hemolysis rates of the conjugate groups were close to that of the NS group.
3.16. Platelet Aggregation.
ADP is a major platelet activator and promotes platelet aggregation. As shown in Table 2, the platelet aggregation rates of the bHb (80.4%), HES130 (84.7%), and HES200 (80.6%) groups were close to that of NS (80.7%). In contrast, the platelet aggregation rates of the bHb-HES130 (92.8%) and bHb-HES200 (91.7%) groups were higher than that of NS (80.7%). This indicated that the conjugates could promote ADP-induced platelet aggregation. Thus, conjugation with HES could maintain the normal hemostatic mechanism of the blood.
3.17. Morphology of RBCs. RBCs exhibit a double concave disc shape in normal vessels. The appearance of RBCs may be altered to spinous and spherical erythrocytes by external factors. As shown in Figure 4, there were essentially no differences among images a−f. The erythrocytes incubated with NS, HES, and the two conjugates were all intact, and their morphological integrity was maintained. However, some RBCs were attached to one another and showed a rouleaux-like structure at the top right of Figure 4e. The RBC adhesion in the bHb-HES130 group may be due to the fact that the sample was not fully mixed before addition to the slide. In our pre-experiment, the adhesion of RBCs in the presence of bHb-HES was observed with extended incubation times (0, 30, and 60 min). Increasing the bHb-HES volume did not alter this situation.
DISCUSSION
In the present study, HES and bHb were covalently conjugated to develop an effective oxygen carrier and plasma expander.
The oxygen affinity, Bohr effect, and structure of the conjugates (bHb-HES130 and bHb-HES200) were measured to evaluate their effectiveness as an oxygen carrier. The COP and viscosity of the conjugates were investigated to evaluate their ability to compensate for the intravasal volume deficiency. The physiological effects of the conjugates on RBCs were also investigated.
HES130 and HES200 induce severe hypoxia in the tissues because they cannot deliver and release oxygen. In contrast, Hb can alleviate hypoxia in the tissues through its oxygen-delivery ability. Thus, these two HES molecules were conjugated with Hb to achieve this objective. Previously, Sakai et al. 34 used HES with a Mw of 70 kDa to conjugate with Hb. HES was activated by cyanogen bromide followed by conjugation with Hb under harsh conditions (pH 10.8). In the present study, HES with a higher Mw (130 and 200 kDa) was oxidized by sodium periodate to obtain the aldehyde groups. The aldehyde groups of HES could react with the ε-amino groups of lysine residues and the α-amino groups of N-terminal valine residues of bHb under mild conditions (pH 7.4) to obtain the bHb-HES conjugates.
Some lysine residues (e.g., Lys-40(α)) play an important role in the cooperative oxygen binding of Hb. Conjugation at these sites could perturb the T state and the heme environment of bHb, thereby altering the oxygen delivery and unloading. In addition, the conjugated HES could create a large hydrated layer around bHb by binding bulky water molecules, which restricts the R state-to-T state transition of bHb. The oxy conformational state of bHb, with more water molecules, was thus favored over the deoxy state, with fewer water molecules. Thus, the low P50 values of bHb-HES130 and bHb-HES200 could either be a direct consequence of covalent conjugation with bHb, or of the conjugated HES itself, or a combination of the two. However, the P50 values of bHb-HES130 (15.1 mmHg) and bHb-HES200 (13.9 mmHg) were close to or in the range of 15−20 mmHg, which could achieve adequate oxygen delivery in vivo to the tissues and alleviate tissue hypoxia. 48 Typically, intravenous infusion of a large volume of protein solution may alter the solution properties of the blood. The COP is related to the colloidal volume-expanding efficacy and facilitates the blood flow recovery in resuscitation. 49 Previously, Hu et al. prepared a PEGylated Hb to act as an oxygen carrier and plasma expander using aldehyde chemistry. 50 The PEGylated Hb at 20 mg/mL showed a COP value of ∼30 mmHg, which was slightly higher than that of bHb-HES200 (∼22 mmHg at 20 mg/mL). Thus, conjugation of bHb with HES was expected to increase the plasma volume and the molecular volume and reduce the in vivo extravasation rates of bHb and HES.
Viscosity is an important property of putative plasma expanders. The shear stress that flowing colloid solution exerts on endothelial cells triggers flow-induced vasodilation. 34 Typically, transfusion with a plasma expander improves blood fluidity by drawing fluid from the interstitium, lowering whole-blood viscosity and shear stress. Interestingly, bHb-HES200 at 10 mg/mL showed a viscosity of ∼2.4 cP, much higher than that of the PEGylated Hb at 10 mg/mL (∼1.2 cP). 50 The conjugate was therefore expected to expand the blood volume and improve the microcirculation, and these improved hemorheological properties can help relieve tissue anoxia.
The physiological effects of bHb-HES130 and bHb-HES200 on RBCs were evaluated by measuring RBC morphology and rigidity, the hemolysis rate, and platelet aggregation. The morphology and deformability of RBCs were maintained upon addition of the conjugates, which is of physiological significance for volume expansion. bHb-HES130 and bHb-HES200 both showed good blood compatibility, as reflected by their low hemolysis rates, and both maintained the normal hemostatic mechanism of RBCs. Thus, neither conjugate displayed any apparent side effect on the physiological properties of RBCs.
Significant differences in platelet aggregation between the bHb-HES130/bHb-HES200 groups and the NS group were observed in our study. A previous study suggested that HES solutions have little impact on platelet aggregation, 51 whereas dextran sulfate triggers platelet aggregation via direct activation of PEAR1. 52 Future work should therefore focus on the mechanism by which the conjugates induce strong aggregation in connection with ADP.
bHb-HES130 and bHb-HES200 both displayed unaltered Bohr effects and only slight structural changes. The P50 of bHb-HES130 (15.1 mmHg) was slightly higher than that of bHb-HES200 (13.9 mmHg), whereas the COP and viscosity of bHb-HES200 were higher than those of bHb-HES130, indicating a greater ability of bHb-HES200 to compensate for the intravasal volume deficiency. Thus, bHb-HES130 was the more capable oxygen carrier, while bHb-HES200 was the more effective plasma expander.
In summary, the quaternary structure and heme environment of bHb were only slightly perturbed upon conjugation with HES. The two conjugates (bHb-HES130 and bHb-HES200) could effectively deliver and release oxygen without alteration of the Bohr effect, and their COP and viscosity indicated that they are effective plasma expanders. The conjugates showed no apparent adverse effects on red blood cell morphology, rigidity, or hemolysis. Thus, bHb-HES130 and bHb-HES200 are expected to function as potential oxygen carriers to alleviate tissue hypoxia and as effective plasma expanders to compensate for the intravasal volume deficiency. | 2023-03-18T15:08:17.552Z | 2023-03-15T00:00:00.000 | {
"year": 2023,
"sha1": "2bd3ad544fbe1e1a9cbcf26ea82881b573868544",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1021/acsomega.3c00275",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4cd3a1149903126227d96b5afd4affa765f548bb",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
233671810 | pes2o/s2orc | v3-fos-license | A Bio-Conjugated Fullerene as a Subcellular-Targeted and Multifaceted Phototheranostic Agent
Fullerenes are candidates for theranostic applications because of their high photodynamic activity and intrinsic multimodal imaging contrast. However, fullerenes suffer from low solubility in aqueous media, poor biocompatibility, cell toxicity, and a tendency to aggregate. C70@lysozyme is introduced herein as a novel bioconjugate that is harmless to a cellular environment, yet is also photoactive and has excellent optical and optoacoustic contrast for tracking cellular uptake and intracellular localization. The formation, water solubility, photoactivity, and unperturbed structure of C70@lysozyme are confirmed using UV-visible and 2D 1H,15N NMR spectroscopy. The excellent imaging contrast of C70@lysozyme in optoacoustic and third-harmonic generation microscopy is exploited to monitor its uptake in HeLa cells and lysosomal trafficking. Lastly, the photoactivity of C70@lysozyme and its ability to initiate cell death by means of singlet oxygen (1O2) production upon exposure to low levels of white-light irradiation is demonstrated. This study introduces C70@lysozyme and other fullerene-protein conjugates as potential candidates for theranostic applications.
Introduction
Phototheranostics, [1,2] based on the integration of light-induced imaging and therapeutic modalities, has attracted considerable attention in recent years as a highly promising approach to non-invasive cancer treatment via photodynamic therapy (PDT). [3,4] PDT is a clinically approved phototherapeutic procedure that uses light and specific molecules with photosensitizing properties [5] to induce selective cytotoxic activity towards malignant cells. However, the design of phototheranostic agents (PTAs) is hindered by several conflicting needs and unmet requirements. The desired stimuli-responsive agents, typically referred to as photosensitizers (PSs), should on the one hand generate high concentrations of reactive oxygen species (ROS) upon irradiation with light to initiate cell death in a targeted manner, while on the other hand displaying negligible toxicity in dark conditions. They must also have high solubility in biological media and the ability to target and accumulate in specific cells, as expected from 3rd-generation PSs. [6] Last, these agents should provide imaging contrast to allow monitoring of their bio-distribution or treatment efficacy, and possibly reveal mechanisms of action during research and development. [7]

A PTA may either be a single-entity species or a nanomaterial conjugated and/or loaded with different imaging and therapeutic agents. [3,4] The single-entity approach reduces the synthetic burden and thus the variability of phototheranostic performance. PTAs can be classified into inorganic and organic materials. [8] Among organic materials, porphyrinoid derivatives and precursors are the most commonly used PTAs, especially for clinical applications. [9] These molecules can produce significant amounts of ROS upon excitation because of their intense absorption in the visible range; their main limitations are photobleaching and aggregation, which lead to a steady decrease in their efficacy as both ROS producers and contrast agents. In contrast, inorganic PTAs typically display higher photostability than their organic counterparts. Widely used gold nanostructures afford versatile probes for both luminescent [10] and optoacoustic (OA, also termed photoacoustic) [11,12] imaging applications; nevertheless, they do not perform adequately as PSs. Gold and other noble-metal nanoparticles are often expensive, non-degradable, and require toxic surfactants for their syntheses, which must be avoided for clinical translation. [8,13] Quantum dots are also suitable probes for high-contrast in vitro and in vivo imaging thanks to their strong fluorescence emission upon visible or NIR light excitation. [14] However, their quantum yield of ROS production is low compared to standard PSs. [6,15] Metal-oxide semiconducting nanomaterials (e.g., TiO2 and ZnO) have been widely applied as UV-light-triggered PSs because of their large bandgap, good chemical stability, low cost, and non-toxicity. [16][17][18] Despite their outstanding performance as PSs, however, their application is limited by low solubility in physiological environments, high cytotoxicity, and the poor depth penetration of the UV light required for their excitation.
Graphitic carbon-based molecules and nanomaterials have shown promise for imaging and therapeutic applications. [19] Fullerenes in particular have unique electronic and physical properties that make them attractive candidates for theranostics. [20][21][22][23][24][25][26] Fullerenes are characterized by high photostability, [27][28][29][30] broad UV-visible light absorption, and efficient light-stimulated production of ROS by both type I and type II mechanisms, [27][28][29][30] as well as heat generation. [31][32][33] They are excellent electron acceptors, which also enables oxygen-independent photo-killing. [34] Beyond their PS properties, fullerenes also produce ROS upon ultrasound irradiation. [35,36] Besides their PDT functionalities, fullerenes feature excellent contrast for both OA and third-harmonic generation (THG) imaging techniques. [31,33,37] OA imaging relies on the detection of acoustic waves generated by non-radiative de-excitation after the absorption of light energy. Fullerenes are ideal OA imaging agents because they exhibit broad absorption across the UV-visible spectral range with negligible fluorescence quantum yields. THG imaging is sensitive to local differences in third-order nonlinear optical properties and changes in refractive index. Fullerenes afford excellent THG contrast by greatly altering the local refractive index, [38][39][40] owing to their delocalized π-electron conjugated systems and hollow structures.
However, the use of fullerenes in theranostics and other biological applications has in many cases been impeded by their hydrophobicity, which makes them prone to aggregation in aqueous media, reduces their biocompatibility, [41] leads to quenching of the triplet excited state, [42][43][44] and decreases the active surface area available for ROS production. One means of effectively dispersing fullerenes in aqueous media, while preventing modification of their molecular structure, is the use of supramolecular solubilizing agents. [45][46][47] Proteins and peptides can serve as biocompatible hosts for fullerenes, allowing their dispersion as single molecules in physiological environments and preventing aggregation. [48][49][50][51][52][53] In particular, lysozyme binds and disperses C60 in water, and the resulting C60@lysozyme complex remains photoactive, generating significant quantities of ROS upon visible-light irradiation while remaining biocompatible in dark conditions. [48] Irradiation of C60@lysozyme with visible light was shown to significantly reduce the viability of HeLa cells in vitro, making it a candidate for PDT. [54] However, the localization, uptake mechanism, and cell-targeting properties of this complex had not been investigated.
We hypothesized that the high intrinsic OA and THG contrast of fullerenes would enable tracking of the uptake and localization of fullerene-protein complexes in cancer cells, thus enabling studies of their theranostic applications. To test this hypothesis, we chose to synthesize and investigate a lysozyme complex comprising C70, as this fullerene has a broader absorption profile than C60 (required for OA imaging, Section S1, Supporting Information) and high photodynamic activity. [55][56][57][58] Herein, we present this new C70@lysozyme complex, along with a thorough study of its biocompatibility, cellular uptake, and photodynamic activity. Using 2D 1H,15N NMR we evaluate whether C70@lysozyme maintains the structure of the native protein, and thus its ability to cross the cell membrane and accumulate in subcellular organelles. [59,60] Such accumulation is necessary to enhance both PDT efficacy and imaging contrast. We exploit the multimodal imaging contrast of C70 to monitor the uptake of C70@lysozyme in cancer cells using co-registered and simultaneous OA and THG microscopy; in particular, we employ OA imaging to monitor the trafficking of C70@lysozyme into living cells. Furthermore, we employ fluorescent labels and two-photon excitation fluorescence (2PEF) microscopy, in parallel with OA and THG microscopy, to validate the localization of C70@lysozyme at the subcellular level, in particular its accumulation within lysosomes. Finally, we confirm the photoactivity of C70@lysozyme and its ability to initiate cell death upon irradiation with white light by means of intracellular singlet oxygen (1O2) production. This study not only introduces C70@lysozyme as a novel and selective PTA, but also showcases the powerful multimodal contrast of biocompatible fullerene-protein bioconjugates.
Results and Discussion
We prepared monodispersed C70 by host-guest interaction with lysozyme in the form of a C70@lysozyme bioconjugate, constituting a non-covalent supramolecular complex. The UV-visible absorption spectrum of C70@lysozyme in water (Figure 1A) revealed features belonging to both components of the bioconjugate, demonstrating that water solubility of C70 was attained. Compared with the photophysical data reported by Ke et al. for dispersions of C70 in water (Section S2, Figure S2.1, Supporting Information), [61] the absorbance found here is consistent with a concentration of solubilized C70 similar to that of lysozyme, supporting the formation of a stable stoichiometric 1:1 C70@lysozyme complex, [49] as illustrated below. In addition, the colloidal stability of C70@lysozyme was tested by UV-visible kinetic analysis (Section S2, Figure S2.2, Supporting Information). The results demonstrate that the complex is stable for at least three hours under static conditions, corresponding to the incubation time for the in vitro PDT assay.
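The stoichiometry inference above rests on the Beer-Lambert relation c = A/(ε·l). A minimal sketch follows; the absorbance and molar absorptivity values are hypothetical placeholders, not values taken from this work.

# Back-of-the-envelope Beer-Lambert estimate of solubilized C70 concentration.
# All numbers are illustrative placeholders, not data from this study.
def concentration_molar(absorbance, epsilon_per_m_cm, path_cm=1.0):
    """c = A / (epsilon * l), returned in mol/L."""
    return absorbance / (epsilon_per_m_cm * path_cm)

a_band = 0.20        # hypothetical absorbance at a C70 band
eps_c70 = 2.0e4      # hypothetical molar absorptivity, M^-1 cm^-1
c_c70 = concentration_molar(a_band, eps_c70)
print(f"~{c_c70 * 1e6:.0f} uM C70; comparing this with the lysozyme "
      f"concentration checks the ~1:1 stoichiometry")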
Chemical shift perturbation analysis of the 2D 1H,15N NMR spectra confirmed that, upon C70 binding, lysozyme retains its 3D fold (Sections S3 and S4, Supporting Information), similar to the observations for the binding of C60. [49,58] Compared to C60@lysozyme, [49] the chemical shift perturbations induced upon C70 binding involve more amino acid residues, indicating either that the bulkier C70 engages a larger portion of the binding site, or that protein-protein interactions occur that shield the exposed surface area of the fullerene from the polar environment.
Atomic force microscopy (AFM) of C70@lysozyme (Section S5, Figure S5, Supporting Information) showed that, although a monomolecular dispersion of the bioconjugates over the mica surface is evident, oligomeric structures are also present. This agrees with previous characterizations of lysozyme as a protein with a strong tendency to self-associate in aqueous solution, forming dimeric and trimeric structures, [62] in addition to effects induced by dehydration. However, AFM investigations performed on multiple random areas of the mica surface, at both high and low magnification, did not show any nanoaggregates. The combination of UV-visible, 2D NMR, and AFM analyses (Sections S1-S4, Supporting Information) demonstrates that the fullerenes are dispersed by lysozyme in aqueous media in a stoichiometric manner, as single molecules rather than as nanoparticles.
Upon visible-light illumination, C70 sensitizes the production of singlet oxygen. Singlet oxygen production can be measured either directly, through its radiative decay (phosphorescence) at 1270 nm, or indirectly using a singlet oxygen fluorescent probe (e.g., SOSG). [63,64] To evaluate the photosensitizing ability of C70@lysozyme quantitatively, the production of singlet oxygen (1O2) in water upon visible-light irradiation was measured here via the 1O2 phosphorescence at 1270 nm. The quantum yield of 1O2 generation by C70@lysozyme was determined by comparison with a standard PS (Rose Bengal (RB), ΦΔ = 0.76 in D2O solution). [65] The phosphorescence spectra of isoabsorbing solutions of C70@lysozyme and RB at the excitation wavelength (λexc = 514 nm) are shown in Figure S6, Section S6, Supporting Information. C70@lysozyme was found to have a ΦΔ of 0.60. More importantly, when the experiment was repeated in water, RB did not produce a detectable quantity of 1O2, while C70@lysozyme produced significant amounts (ΦΔ of 0.31, relative to the ΦΔ of RB in D2O). This effect might be due to the confinement of C70 in the lysozyme binding pocket, with the hydrophobic pocket of the protein enabling 1O2 generation by shielding the sensitizing C70 chromophore from quenching by water molecules. [66] The ability to produce ROS in water is crucial for the effective application of PDT in real physiological environments. In addition, C70@lysozyme generates 1O2 upon excitation throughout the visible range (Figure S6, Supporting Information), which broadens its potential for practical applications.
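The comparative method used here follows the standard relative expression ΦΔ(sample) = ΦΔ(ref) × (I_sample/I_ref) × (A_ref/A_sample), where I is the integrated 1270 nm emission and A the absorbance at λexc; for isoabsorbing solutions the absorbance term cancels. A minimal sketch, in which the integrated intensities are hypothetical inputs chosen to reproduce the reported 0.60:

# Relative determination of the singlet-oxygen quantum yield against Rose
# Bengal. For isoabsorbing solutions at the excitation wavelength, the
# absorbance correction cancels and Phi(sample) = Phi(ref) * I_sample / I_ref.
PHI_RB_D2O = 0.76   # reference yield of Rose Bengal in D2O (ref. 65)

def phi_delta(i_sample, i_ref, phi_ref=PHI_RB_D2O):
    """Quantum yield from integrated 1270 nm phosphorescence intensities."""
    return phi_ref * (i_sample / i_ref)

# Hypothetical integrated intensities:
print(f"Phi_Delta = {phi_delta(i_sample=79.0, i_ref=100.0):.2f}")  # -> 0.60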
Owing to the insolubility of C70 in water and water-miscible solvents, standard methodologies for determining binding energies, such as ITC or fluorescence titration analysis, [67] cannot be applied to C70 and lysozyme. Nevertheless, we determined the binding affinity between the two interacting systems using a computational protocol that was recently validated for calculating binding energies between proteins and fullerenes (Section S7, Supporting Information). The binding affinity (ΔG binding) between C70 and lysozyme is −19.9 kcal mol−1. In addition, we provide a detailed analysis of the thermodynamics of binding between C70 and lysozyme: the total binding energy was decomposed into its components, and the contribution of each amino acid to the binding was calculated (Section S7, Supporting Information).
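For orientation only, a computed ΔG binding can be converted into a nominal dissociation constant through ΔG = RT ln(Kd). The sketch below does this at an assumed 298 K; note that end-point computational binding energies of this kind generally overestimate affinity and are not directly comparable to experimental Kd values.

# Convert the computed binding free energy into a nominal dissociation
# constant, Kd = exp(dG / RT). Indicative only: computational end-point
# energies typically overestimate affinity.
import math

R_KCAL = 1.987e-3     # gas constant, kcal mol^-1 K^-1
T_K = 298.15          # assumed temperature

dG = -19.9            # kcal/mol, from the computational protocol (Section S7)
kd = math.exp(dG / (R_KCAL * T_K))
print(f"nominal Kd ~ {kd:.1e} M")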
We assessed the cytotoxicity and phototoxicity of C70@lysozyme at different concentrations upon photoexcitation with visible light (white-light LED) by testing its ability to inhibit growth and induce death in HeLa cells. Figure 2 shows that a small reduction in cell viability, or a potential inhibition of cell growth, was observed in darkness only at high concentrations of C70@lysozyme (>5 µM). In contrast, irradiation of HeLa cells with visible light at ultralow power (irradiance 2 mW cm−2) for 10 min in the presence of C70@lysozyme increased cell mortality in a dose-dependent manner.

We next tested the optical and OA characteristics of C70@lysozyme to investigate its potential as a contrast agent for phototheranostics. Using a custom-built hybrid microscopy system, [68][69][70][71][72][73] which combines optical-resolution OA, THG, and 2PEF microscopy, we clearly demonstrated that C70 generates a significant THG response (Figure S9A, Supporting Information) as well as an intense OA signal (Figure S9B, Section S9, Supporting Information) when dispersed on a microscope slide.
Having confirmed the strong OA and THG contrast generated by C70@lysozyme, we investigated its uptake and subcellular distribution in living cells by performing time-course studies on HeLa cells in vitro under physiological live-cell imaging conditions (i.e., 37 °C, 5% CO2, and 80% humidity) using a stage-top incubator. We imaged cultured cells in time steps of 15 min for up to 4 h of incubation with OA only, as parallel THG measurements would increase the risk of inducing cell death via PDT or photothermal ablation over long exposures. Figure 3 shows that no significant uptake is observed during the first hour, while sub-resolution point-like OA signals are detected after approximately 1.5 h. Particle-analysis methods showed that the uptake trajectories peaked in terms of both the number of signals and the total area coverage at 3 h (Figure 3D). Furthermore, the mean amplitude of the signals (≈200 a.u.) remained constant, indicating a high regularity of C70@lysozyme trafficking driven by intrinsic cellular behavior. Both the constant amplitude and the subcellular spatial extent of the signal suggest lysosomal accumulation of C70@lysozyme and conservation of the "biological identity" of the carrier protein. [59,60] Brightfield images were taken before and after the experiment to confirm the healthy state of the cells during the measurements (Section S10, Supporting Information).
Finally, we tested the lysosomal localization of C70@lysozyme by fluorescently labeling lysosomes (LysoTracker) for 2PEF readings (excitation wavelength 521.5 nm), performed on the same microscope used for OA and THG imaging. The majority of the C70@lysozyme co-localized with the lysosomes at 3 h post-washout (Figure 4). Both the THG and OA signals were resolved and revealed a co-localized pattern, which offers the possibility of detecting fullerene distribution without immunolabeling [74] and without attaching imaging tags to the cage, [75] which could perturb the properties of the fullerene and its true distribution. The high spatial correlation between OA and THG, quantified using the Pearson Correlation Coefficient (PCC), confirms the dual-modal contrast of C70@lysozyme in subcellular compartments. The non-vanishing, but relatively low, PCC to the 2PEF readings suggests that while there is abundant lysosomal localization of C70@lysozyme, some lysosomes are labeled but agent-empty, and some unspecific fluorescence arises from the cell body. Hence, it can be assumed that all C70@lysozyme is trafficked to lysosomes and all lysosomes are labeled, but not all lysosomes contain C70@lysozyme. Analogous imaging experiments on HeLa cells with C70@BSA showed no significant signal patterns in the THG and OA channels inside the cells, suggesting that C70@BSA undergoes neither endocytic uptake nor trafficking to the lysosomes (Section S11, Supporting Information). The local accumulation of C70@lysozyme inside cells is a significant advantage over C70@BSA, since such accumulation enhances both OA and THG imaging and the ability of C70 to generate ROS intracellularly.
Whereas the THG and OA signals of C70@lysozyme are only detectable within an intracellular compartment upon high local accumulation and sufficient size (>300 nm), due to sensitivity limitations of these modalities, the 2PEF signals can also arise from unbound fluorescent labels. Furthermore, whereas LysoTracker Deep Red is expected to label all lysosomes, we do not expect all lysosomes to contain sufficient C70@lysozyme for THG and OA detection.
Finally, to test the photoactivity of bioconjugated C70@lysozyme, we monitored the fluorescence of singlet oxygen sensor green (SOSG), whose fluorescence intensity increases in the presence of singlet oxygen (Figure 5). The measurement was performed at 20 min intervals (Section S12, Supporting Information), with either darkness or white-light illumination between measurements. A high-power LED module with a white-light output of 850 lm was used to excite the PS accumulated inside the cells.
The SOSG fluorescence intensity increased by a factor of >4.2 when incubated together with C70@lysozyme under white-light illumination, indicating significant production of singlet oxygen. Cells incubated with SOSG alone and illuminated with white light showed an increase of ≈2.7, due to the photoactivity of the sensor itself. Without white-light illumination, SOSG-incubated cells with and without C70@lysozyme increased their fluorescence by a factor of ≈1.5, due to the photoactivity induced by the laser used to excite the fluorescence of the SOSG sensor.

For the co-localization experiment (Figure 4), the THG and OA modalities revealed a co-localized pattern with a PCC of 0.597 ± 0.036; THG and 2PEF, a PCC of 0.393 ± 0.056; and OA and 2PEF, a PCC of 0.397 ± 0.028.
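The PCC values above are computed pixel-wise between two co-registered image channels. A minimal sketch, assuming the channels are available as equally sized intensity arrays (the array contents below are random placeholders):

# Pixel-wise Pearson correlation between two co-registered imaging channels,
# as used for the THG/OA/2PEF co-localization analysis.
import numpy as np

def pearson_cc(channel_a, channel_b):
    """PCC over all pixels of two equally shaped intensity images."""
    a = channel_a.ravel().astype(float)
    b = channel_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img_thg = rng.random((256, 256))                       # placeholder channel
img_oa = 0.6 * img_thg + 0.4 * rng.random((256, 256))  # partially correlated
print(f"PCC = {pearson_cc(img_thg, img_oa):.3f}")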
Conclusion
It is challenging to meet the many competing requirements for the development of a PTA, such as low toxicity, good contrast, targeted accumulation, and high ROS-production capability. Herein, we demonstrated that C70@lysozyme efficiently combines the optical and OA contrast and photosensitizing ability of C70 with the high solubility and monodispersity conferred by lysozyme, overcoming previous limitations on the use of fullerenes in nanomedicine. Taking advantage of the high OA and THG contrast, we showed that C70@lysozyme accumulates in the lysosomes of cancer cells, which enhances both imaging contrast and targeted cell-killing upon irradiation with visible light.
This work demonstrates the capabilities of fullerene@protein complexes as image-guided PDT agents. As such complexes are highly adaptable, future work could aim at their functionalization both with tumor-targeting tags, to improve cancer-cell selectivity and promote cellular uptake of the photosensitizing agent, [76] and with light-harvesting molecular antennae, [45,[77][78][79] to improve both therapeutic efficiency and treatment depth in PDT.
In the future, we foresee C70@lysozyme being employed as a PTA in vivo, enabling both intracellular generation of singlet oxygen for targeted PDT and monitoring with mesoscopic or macroscopic imaging technologies.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2021-05-05T00:09:56.765Z | 2021-03-09T00:00:00.000 | {
"year": 2021,
"sha1": "b53a162a118b3dc5d176f552ee01e0f3b885b283",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adfm.202101527",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "57d0827e5ab5a99f71681c9af259c687655d71d0",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
267137176 | pes2o/s2orc | v3-fos-license | Implementation of Know Your Customer Principle in Banking Practices at Bank BNI 46 Bima Branch
This article is a legal study analysing the application of the know-your-customer principle in banking practice. Such application is essential for the early identification of suspicious transactions and for minimising various risks, such as operational risk, legal risk, transaction-concentration risk, and reputational risk. The method used in this research is the empirical legal research method, with the BNI 46 Bima branch as the research locus. The novelty of this research lies in the tension with the principle of confidentiality, since the bank is permitted to know the customer's identity in relation to the profile and character of the customer's transactions. The results and discussion show that the BNI 46 Bima branch applies the Know Your Customer Principles as set forth in PBI Number 3/10/PBI/2001, later amended by Bank Indonesia Regulation Number 5/21/PBI/2003 concerning the Second Amendment to Bank Indonesia Regulation Number 3/10/PBI/2001 concerning the Application of Know Your Customer Principles. In implementing the know-your-customer principle, banks can flag transactions suspected of originating from the proceeds of crime, for example money smuggling, bribery, corruption, labour smuggling, and banking crimes. In addition, financial transactions that deviate from the profile, characteristics, or habitual transaction patterns of the customer concerned are also suspect. In conclusion, the know-your-customer principle is one of the important principles in the world of banking and finance. It has been implemented by Bank BNI 46 Bima Branch with reference to the policies and procedures applied by financial institutions to ensure the correct identity and characteristics of their customers before providing financial services to them. However, this principle runs counter to the tradition of secrecy between the bank and its customers, which is the main pillar for building a trusting relationship between them.
INTRODUCTION
The world of banking in Indonesia is growing very rapidly because of the various services and banking products offered. There are several types of banks in Indonesia, including commercial banks, Islamic banks, and development banks. A commercial bank is a type of bank engaged in traditional banking, i.e., offering loans and accepting deposits from customers. Commercial banks also offer services such as wire transfers, bill payments, and currency purchases.
Meanwhile, Islamic banks carry out their banking operations based on sharia principles, for example using mudharabah and musyarakah contracts in lending.
Banking services in Indonesia are also increasingly digitised through internet and mobile banking, which allow customers to bank online. This makes banking services more accessible and efficient. However, the Indonesian banking sector still faces several problems, such as rampant banking crime, including data theft and online fraud, as well as unfavourable credit quality and insufficiently stringent banking supervision.
Therefore, Bank Indonesia, as the banking regulator, continues to improve banking supervision and regulation to provide better protection to customers (Sumantri, 2014). The banking world recognises the principle of "Know Your Customer" (KYC), one of the most important principles in the world of banking and finance. This principle refers to the policies and procedures implemented by financial institutions to ensure the correct identity and characteristics of their customers before offering financial services to them. The principal objective of KYC is to protect financial institutions from the risks of fraud, money laundering and terrorist financing. By ensuring that each customer has gone through a proper identity-verification process, financial institutions can build a good image and increase public trust in their financial services. Financial institutions must therefore ensure that the KYC systems they implement comply with applicable regulations, are able to detect potential risks, and take the necessary precautions against those risks (Iryana et al., 2017). The know-your-customer principle allows banks to learn details about their customers. The bank is also empowered to assess whether a customer intends to enter into a legal relationship with the bank, is entitled to obtain information that gives it an overview of its customers, and can monitor customer transactions so that suspicious activity can be reported immediately (Ariana, 2016). The implementation of the anti-money laundering programme begins with the preparation of guidelines and standard practices for implementing Know Your Customer principles, which are a prerequisite for rural banks (BPRs) to support the programme. The guidelines stipulated by Bank Indonesia for the application of Know Your Customer Principles cover at least the principles for accepting and identifying prospective customers, practices for controlling customer accounts and transactions, and risk-management practices. The decree also states that each bank is required to establish a special work unit to implement the anti-money laundering and counter-terrorism-financing programmes, namely the work unit for the Application of Know Your Customer Principles (UKPN). This functional work unit must ensure correct, error-free and efficient internal controls, and ensure that all of its employees have received sufficient training, so that every employee has the same understanding of money laundering and terrorism financing (Fitriyani, 2021). Banks must give priority to the public money they collect from customers, so that it benefits both the bank and the customer, which in turn safeguards the security of public funds.
Careful stewardship of the public money held by banks requires banks to conduct their business with great care, for no one can deny that banking is fundamentally built on people's money. The relationship between the bank and the depositor is a contractual debtor-creditor relationship based on the prudential principle (Katili, 2013). From the description above, it can be understood that the application of the know-your-customer principle in the banking world is very important for maintaining banking stability. Along with developments in technology and information, and with increasingly complex banking products and processes, the risks faced by banks have also increased, and this increase must be compensated for by improving the quality of risk management. Regulations for implementing the know-your-customer principle have also been further developed based on international standards, under the newer terms "customer due diligence" and "enhanced due diligence". This shows how important the application of this principle is in banking for avoiding increasingly complex risks, which ultimately fosters healthy trust between customers and banks (Fitriyani, 2021). With the increasing complexity of bank products, functions, and information technology, it is feared that opportunities will grow for irresponsible parties to use bank products and services in support of their crimes. To minimise the use of banks as a vehicle for money laundering and terrorism financing, banks must play a bigger role than before by implementing optimal and effective APU and PPT (anti-money laundering and counter-terrorism-financing) programmes. The implementation of these programmes is important not only to prevent money laundering, but also to support a sound banking system that can protect banks from various potential risks, including legal risk, reputational risk, and operational risk (Fitriyani, 2021).
As we know, the laws and regulations governing Indonesian banking practice tend to be administrative in nature, with emphasis on procedural aspects. Indonesian banking practice is based on both public and private legal regulation, and several of the applicable rules take the form of Bank Indonesia regulations, laid down in the laws and regulations related to banking, for example those concerning banking principles (Rozali, 2011).
RESEARCH METHOD
Empirical legal research is a method that allows law to be understood in its actual operation and in how it functions in society; it is therefore also referred to as the sociological legal research method (Arrasyid, 2021). The approach used in this study combines a statutory approach (statute approach), a case approach, and an analytical approach; these approaches examine all laws and regulations related to the legal issues at hand (Nasution, 2019). The data used are primary data, namely legal materials, complemented by structured interviews with the bank under study; the researcher obtained research approval and submitted a set of questions for the bank to answer. The data were analysed qualitatively, through orderly, consistent, logical, non-overlapping, and effective descriptions, which facilitate the interpretation and analysis of information and data in order to answer the research problem.
RESULTS & DISCUSSION
In the United States, laws have existed since 1970 requiring banks to report suspicions about transactions made by their customers. That regulation, the Bank Secrecy Act of 1970, runs contrary to the tradition of secrecy in the bank's relationship with its customers, which is the main pillar for building relationships of trust between banks and their customers. The customer in this context is the party that uses banking services, meaning that both debtor customers and creditor customers of the bank concerned are covered (Ariana, 2016). Under the know-your-customer principle, banking institutions are given the authority to know the particulars of their customers: the bank may assess whether a customer genuinely intends to enter into a legal relationship with the bank, is entitled to obtain information giving it an overview of its customers, and can monitor customer transaction activity so that suspicious activity can be reported immediately. As is well known, the existence of a bank as a financial institution depends heavily on customer funds, because the bank's profit depends on how it manages those funds, receiving deposits and lending them out again as credit. The positive spread between the two is the bank's profit, which sustains the institution's existence.
It follows that public trust in banking institutions is essential for their survival. In a situation where banks badly need customer funds, funds from any source are liable to be accepted, because such funds are dominant in sustaining the institution's existence. The bank must therefore apply the know-your-customer principle very carefully: as described above, the principle assists banks in implementing the prudential principle, but its application must not complicate customers' activities, which would provoke customer complaints and dissatisfaction with the bank's services.
If that happens, it will be detrimental to the banking institution: customers will no longer trust the bank and will withdraw their funds, which threatens the health of the bank itself. Implementing the know-your-customer principle carefully, without creating the impression of making things difficult for the customer, is therefore essential, as are human resources within the institution who understand that the essence of applying this principle lies in the interest of the banking institution itself.
Dissemination of the know-your-customer principle to the customers themselves is also necessary, so that customers do not feel they are mere objects of the bank's interest; the impression that the customer is a party to be suspected in the use of banking services must be avoided. Semantically, 'dirty money' refers to money that is impure or no longer clean (Ariana, 2016).
The crime was committed by disguising the source of wealth resulting from acts prohibited by the state, hiding illicit money using various existing means, including using a bank as a place to store it.For example, the Bank is a party that is very conducive and has the potential to be involved in the crime of money laundering.The definition is that the crime of money laundering is according to the provisions in article 3 "Law of the Republic of Indonesia Number 15 of 2002 Concerning Money Laundering Crimes.(Ariana, 2016) This explanation It is understandable that the application of the know your customer principle is very important in the banking industry in order to maintain the stability of the soundness of the bank.Along with the development of technology and information, the more complex banking products and activities, the risks faced by banks will also increase.This increase in risk must be balanced with an increase in the quality of risk management.
Regulations for the application of the know your customer principle have also been refined based on international standards using the new terms customer due diligence and enhanced due diligence .This indicates how important it is to apply this principle in banking in order to avoid increasingly sophisticated risks which in the end is expected to create healthy bank and customer trust.
Along with the development of products, activities and bank information technology
The more complex it is feared that it can increase opportunities for parties who do not responsible for using bank products/services in assisting their crimes.For this reason, in order to minimize the use of banks as a means of money laundering and funding, a bigger role for banks is needed than before by implementing an optimal and effective APU and PPT program.
The implementation of the APU and PPT programs by banks is not only important for eradicating money laundering, but also for supporting the implementation of prudential banking which can protect banks from various risks that may arise, including legal risk, reputation risk, operational risk.(Fitriyani, 2021) In the banking world in Indonesia, since the issuance of the Pakto in 1988, the growth of banks in Indonesia has been very rapid.Unfortunately the bank's growth was not followed by good management and quality and performance.The government revoked the operating licenses of 16 (sixteen) national private banks in 1997 because they were deemed to be no longer viable.This was done with the aim of creating sound banking conditions in Indonesia.It is true that banking development after Pakto 1988 was very rapid but poorly controlled, giving rise to various problems in practice , and the principles of Prudent Banking were completely ignored.
Attention to the application of the prudential principle, with the aim of maintaining the health of banks, is needed today because of the national banking tragedy described above. The revival of the banking world is a manifestation of the development of regulatory instruments in the banking sector, as an instrument of government policy in the economic field in pursuit of economic growth (Katili, 2013). The prudential principle holds that financial institutions carrying out their functions and business activities must exercise the precautionary function by knowing their customers, in order to protect the public funds entrusted to them (Katili, 2013). The application of the prudential principle can be seen in the in-depth analysis of lending using the 5C principles (the five C principles), which comprise character, capital, capacity, condition of economy, and collateral (Monulandi et al., 2016). The application of the 5C principles is intended to protect the bank from losses caused by debtors who later default, producing non-performing loans. Non-performing loans are loans classified as substandard, doubtful, or bad, a situation that can disrupt the timely repayment of credit. The term non-performing loans has been used by Indonesian banking as a translation of 'problem loans', the term commonly used internationally; in English, credit is non-performing where its quality is classified as substandard, doubtful, or loss. Meanwhile, banking law contains rules on violations of banking principles, which constitute offences subject to criminal sanctions and may be called banking crimes (Guntara & Griadhi, 2019). Given that the application of Know Your Customer Principles is an important factor, securities companies also need to apply them more effectively. The strategic role of a securities company as an investment manager depends heavily on the extent to which the public trusts the company that will manage investors' funds. Inadequate application of Know Your Customer Principles can give room to suspicious transactions; such circumstances make it easier for money-laundering perpetrators to use legitimate economic channels to hide or disguise their activities and to speed the transfer of the proceeds of crime with the aim of avoiding investigation by law enforcement officials (Utami, 2013). Banking practice in the Republic of Indonesia, as in many other countries, is heavily regulated (banking is among the most heavily regulated industries). This is because banking has special characteristics. First, as one of the subsystems of the financial services industry, the banking industry is often regarded as the heart and driving force of a country's economy. In this regard, Lovett stated: "Banks and financial institutions collect money and deposits from all elements of society and invest these funds in loans, securities and various other productive assets". From this it can be said that, without a banking industry, it is difficult to imagine the accumulation of money from the public being channelled as credit to the various sectors of industry.
The second characteristic is that the banking industry relies heavily on the fiduciary trust of the public who have money to save; for the banking industry, public trust is everything. Banking law is what regulates banking practice and serves as its legal reference (Rozali, 2011). In conventional banks one speaks of bank interest, which can be understood as remuneration provided by banks, on conventional principles, to customers who buy or sell their products. Interest can also be understood as a price: a price the bank pays to depositors (who hold deposits), and a price that borrowers (customers who obtain loans) must pay to the bank. Bank interest, as distinct from usury, and the profit-sharing arrangements recognised under Islamic law (shari'ah banking), have become important parts of the economic systems of the Arab nations as well as of other (non-Muslim) countries; indeed, interest has been considered essential to the successful operation of the existing economic system. Islam, however, considers interest an evil that spreads misery in life (Dan, n.d.).

In essence, banking law carries the following meaning: the whole body of legal principles governing banking, covering aspects of banking operations, supervision, and the relationships between banks, their customers, and other related institutions. From this definition, the scope of banking law includes the following aspects: 1. the principles of banking law; 2. banking governance as a financial institution; 3. the legal relationship between banks and individual and corporate customers; 4. the legal relationship between the bank and other related institutions, for example the Government, BI, OJK, other banks, and other financial institutions; and 5. banking supervision and the sanctions imposed for violations of banking regulations. The legal aspects regulated in banking law are as follows: 1. banking legal principles, as the values underlying banking; and 2. the rules or norms contained in the banking laws and regulations, which govern: a. bank operational activities; b. the position, duties and authority of commissioners, directors and the ranks within the banking structure; c. banking risk analysis and management; d. assessment of the bank's soundness; e. internal and external banking supervision; f. criminal acts within the scope of banking; and g. dispute resolution (Yusmad, 2018).

By applying this know-your-customer principle, banks can report suspicious transactions. According to Bank Indonesia Regulation Number 5/21/PBI/2003 concerning the Second Amendment to Bank Indonesia Regulation Number 3/10/PBI/2001 concerning the Application of Know Your Customer Principles, suspicious financial transactions are: 1. financial transactions that deviate from the profile, characteristics or habitual transaction pattern of the customer concerned; 2. transactions by customers reasonably suspected of being carried out with the aim of avoiding the reporting of those transactions that the bank must perform under Law Number 15 of 2002 concerning the Crime of Money Laundering, as amended by Law Number 25 of 2003; or 3. financial transactions carried out or cancelled using assets suspected of originating from the proceeds of crime. A minimal rule-based sketch of such screening is given below.
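The sketch below makes the regulatory criteria concrete by flagging transactions that deviate from a customer's historical profile or that appear structured to stay just under a reporting threshold. It is purely illustrative: the threshold, multiplier and field names are hypothetical and are not drawn from the PBI regulations or from the bank's internal guidelines.

# Illustrative rule-based screening for suspicious transactions (KYC).
# All numbers and field names are hypothetical; real screening follows the
# bank's internal guidelines and the applicable PBI regulations.
REPORT_THRESHOLD = 500_000_000   # hypothetical cash-reporting threshold (IDR)
PROFILE_MULTIPLIER = 5           # "deviates from profile" if > 5x usual amount

def flag_transaction(amount, customer_avg_amount):
    """Return the reasons a transaction may warrant enhanced due diligence."""
    reasons = []
    if amount > PROFILE_MULTIPLIER * customer_avg_amount:
        reasons.append("deviates from the customer's transaction profile")
    if 0.9 * REPORT_THRESHOLD <= amount < REPORT_THRESHOLD:
        reasons.append("possible structuring below the reporting threshold")
    return reasons

print(flag_transaction(amount=495_000_000, customer_avg_amount=20_000_000))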
CLOSING

1. Conclusion. The know-your-customer principle is one of the important principles in the world of banking and finance. It has been implemented by Bank BNI 46 Bima Branch with reference to the policies and procedures applied by financial institutions to ensure the correct identity and characteristics of their customers before providing financial services to them. a. The application of this principle is not limited to information about customers' personal data, but also covers the transactions customers carry out, and suspicious transactions must be reported; this aims to prevent crime and to protect financial institutions from the risks of fraud, money laundering, the financing of terrorism and other banking crimes, and, by ensuring that each customer has gone through the proper identity-verification process, it allows financial institutions to build a good image and increase the level of public trust in their financial services. b. The know-your-customer principle must be applied very carefully: it can help the bank, but on the other hand it must not make it difficult for customers to act, which could lead to customer complaints and dissatisfaction with banking services. c. Bank BNI 46 Bima Branch needs to improve its security system for protecting customer data, because leakage of customer data through misuse of the know-your-customer process creates the potential for crimes against customers.
Bank BNI 46 Bima Branch needs to apply the know-your-customer principle strictly and wisely, with reference to Bank Indonesia Regulation No. 5/21/PBI/2003 concerning the Second Amendment to Bank Indonesia Regulation No. 3/10/PBI/2001 concerning the Application of Know Your Customer Principles and to Regulation of the Minister of Finance Number 143/PMK.010/2009 concerning the Application of Know Your Customer Principles, because this directly concerns customer data, which banks are obliged to protect. | 2024-01-24T17:34:09.779Z | 2023-12-31T00:00:00.000 | {
"year": 2023,
"sha1": "5fa679fb707cb02991dcad42e8bfc1328dc6cbc3",
"oa_license": "CCBY",
"oa_url": "https://journals2.ums.ac.id/index.php/laj/article/download/2135/975",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "35122866a000f8481be7ebb42d0b848bb94bd270",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
255843377 | pes2o/s2orc | v3-fos-license | Do changes in income and social networks influence self-rated oral health trajectories among civil servants in Brazil? Evidence from the longitudinal Pró-Saúde study
Social factors are important determinants of health. However, evidence from longitudinal studies on the possible role of changes in socioeconomic circumstances in adults' oral health is scarce. This study aimed to test whether changes in income and changes in social networks of family members and friends were associated with trajectories of self-rated oral health (SROH) among adults over a 13-year period. A prospective cohort study (Pró-Saúde Study) was conducted involving non-faculty civil servants at university campuses in Rio de Janeiro, Brazil. Individual data were collected through self-completed questionnaires in four waves (1999, 2001, 2007 and 2012). SROH trajectories between 2001 and 2012 were "Good-stable SROH", "Changed SROH", and "Poor-stable SROH". Per capita family income and social networks of family members and friends obtained in 1999 and 2012 were grouped into "High stable", "Increase", "Decrease", and "Low stable". Ordinal logistic regression using complete data from 2118 participants was used to estimate odds ratios (ORs) and 95% CIs for the associations of changes in income and changes in social networks with SROH trajectories, adjusted for age, sex, skin colour and marital status. Participants in the low income-stable and small social networks-stable groups showed 2.44 (95% CI 1.68–3.55) and 1.98 (95% CI 1.38–2.85) times higher odds of the worst SROH trajectory than those in the respective high-stable groups. Those in the decreasing income group and decreasing social networks group were 78% (95% CI 1.25–2.54) and 58% (95% CI 1.07–2.34) more likely to have the worst SROH trajectory than those in the high income-stable and high social networks-stable groups. Adults reporting low income and small social networks of family members and friends over the 13 years, and those whose income and social networks decreased during the study period, were at higher risk of having worsened their self-rated oral health.
Background
Research on the social determinants of oral health acknowledges that individuals who persistently experience social disadvantage or economic obstacles have worse oral health than those from more advantaged socioeconomic groups. Oral health disparities refer to the social patterning of health resulting from the uneven distribution of diseases across different social strata in a population [1]. A voluminous literature demonstrates a consistent stepwise relationship between socioeconomic status and the severity of oral conditions, suggesting that oral health disparities are socially patterned [2,3]. However, evidence on oral health inequalities comes predominantly from cross-sectional studies, and evidence from cohort studies has begun to accumulate only more recently [2].
The social mobility hypothesis combines the sensitive-periods and accumulation hypotheses, depending on whether individuals remain in, or move between, different categories of the socioeconomic strata during the life course [4]. Longitudinal analyses have shown that early-life circumstances and social mobility can negatively influence oral health during adulthood [5][6][7][8][9]. Children who grew up in high socioeconomic status families, assessed through parental occupational status or family income, had a lower likelihood of having an unsound tooth (a filled tooth with dental caries, or a decayed or missing tooth), dental caries, periodontal disease, and tooth loss in adulthood than those who experienced a decline in their social and economic circumstances over time (downward social mobility) [5][6][7][8][9]. Individuals who remained in less advantageous social groups from birth to adulthood also had poorer oral health than those who were persistently in the higher social groups [6][7][8][9]. Few studies have investigated the role of social mobility on oral health during adulthood [8][9][10]. Adults who experienced downward social mobility showed higher odds of worse self-rated oral health and were less likely to retain a functional dentition than those who remained on a high income [8][9][10].
Oral health outcomes have been associated with social networks in different age groups [11][12][13][14]. 'Social networks' is a broad term referring to the social ties originating from the structural social arrangements that shape the resources available to individuals, influencing their behavioural and emotional responses [15]. The concept of social networks adopted in this study refers to the 'web' of social relationships with whom the individual maintains close social bonds and mutual trust [16]. This definition acknowledges the importance of the intimate contacts surrounding the individual in the determination of health status, including the number of social ties with friends and relatives [16]. Social networks, assessed as involvement with different social groups, were associated with the self-reported number of remaining teeth in Japanese elderly [11,12]. Better oral health-related quality of life and better dental status were also predicted by larger support-network size and greater social support among adolescents and post-partum women [13,14].
Preliminary analyses of a prospective cohort study involving civil servants in Rio de Janeiro, Brazil, showed that lower social position and weak social ties at baseline were associated with tooth loss and worse self-rated oral health after 13 years of follow-up [17]. Evidence on the association between social mobility during adulthood and oral health is scarce [8][9][10], and previous studies on this topic did not consider changes in oral health measures as outcomes [8][9][10]. As far as the authors are aware, the possible relationship between changes in social networks and adults' oral health trajectories has not been examined prospectively [11][12][13][14].
The aim of the present study was to investigate the influence of changes in income and changes in the number of family members and friends in social networks on self-rated oral health trajectories in adults over a 13-year follow-up period. We hypothesised that adults experiencing downward social mobility and a decrease in the number of family members and friends in their social networks over the study period would be more likely to report a worsening of self-rated oral health than those with stable high income and a stable large number of family members and friends in their social networks. It was also conjectured that a worsening of self-rated oral health would be associated with stable low income and a stable small number of family members and friends in the social networks over the 13-year period.
Methods
The Pro-Saude study is a prospective longitudinal study conducted at several university campuses in the State of Rio de Janeiro, Brazil, involving non-faculty civil servants. All permanent technical and administrative staff members were invited to participate in the study. The exclusion criteria were being on non-medical leave of absence or having been relocated to another institution.
Trained personnel collected data at participants' workplaces using self-administered multidimensional questionnaires [18]. Data collection was carried out in 1999, 2001, 2007, and 2012, representing waves 1, 2, 3, and 4, respectively. Wave 1 (1999) included 4030 participants (response rate = 90.4%), and waves 2 and 4 included 3574 (response rate = 80.2%) and 3058 participants (response rate = 68.6%), respectively, spanning a 13-year interval. Data from wave 3 were not relevant for the present study since neither social networks nor self-rated oral health were evaluated in that wave. Participants with incomplete data were excluded from the analysis, resulting in a final analytic sample of 2118 adults (69.3% of wave 4).
Self-rated oral health
Self-rated oral health (SROH) was assessed by the question "In general, how would you rate your oral health status?", with the following response options: very good, good, fair, poor/bad, very poor/very bad [19,20]. The outcome was a three-category variable representing SROH trajectories, developed by combining the SROH measures collected in waves 2 and 4 as follows: "Good-stable SROH": very good/good/fair at waves 2 and 4; "Changed SROH": poor/very poor at wave 2 and very good/good/fair at wave 4, or very good/good/fair at wave 2 and poor/very poor at wave 4; "Poor-stable SROH": poor/very poor at waves 2 and 4.
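As an illustration, this coding rule can be written out directly. The sketch below is illustrative only; the function name and label strings are assumptions, not part of the study instruments.

```python
# Illustrative sketch of the trajectory coding; labels are hypothetical.
GOOD = {"very good", "good", "fair"}
POOR = {"poor/bad", "very poor/very bad"}

def sroh_trajectory(wave2: str, wave4: str) -> str:
    """Combine wave 2 and wave 4 SROH responses into one of three trajectories."""
    if wave2 in GOOD and wave4 in GOOD:
        return "Good-stable SROH"
    if wave2 in POOR and wave4 in POOR:
        return "Poor-stable SROH"
    # any crossing between the good and poor ranges counts as a change
    return "Changed SROH"

assert sroh_trajectory("fair", "good") == "Good-stable SROH"
assert sroh_trajectory("good", "poor/bad") == "Changed SROH"
```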
Income changes
Changes in per capita monthly income and changes in social networks of family members and friends were the main exposures. Per capita monthly income was assessed according to the total earnings of the residents in the household and categorized as < 3 Brazilian minimum wages (BMW), 3-6 BMW, or > 6 BMW. One BMW was US$57.17 in 1999 (wave 1) and US$303.42 in 2012 (wave 4). Per capita monthly income was then dichotomised as low income (≤ 3 BMW) and high income (> 3 BMW), as this seems a reasonable cut-off between lower and upper social classes in Brazil. The two upper income categories were considered comparable according to previous research, since social inequalities in health follow a 'bottom inequity' pattern in Brazil [7,10]. These two categories of per capita monthly income were used to generate the four groups of income change: "high income-stable": high income at waves 1 and 4; "increase income": low income at wave 1 and high income at wave 4; "decrease income": high income at wave 1 and low income at wave 4; "low income-stable": low income at waves 1 and 4.
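The dichotomise-and-combine logic can be sketched as follows; all names are hypothetical, and the same pattern with the 3-member threshold of the next subsection yields the social network groups.

```python
def income_change_group(income_w1_bmw: float, income_w4_bmw: float) -> str:
    """Classify income mobility between waves 1 and 4.

    Income is dichotomised at 3 Brazilian minimum wages (BMW):
    low income <= 3 BMW, high income > 3 BMW.
    """
    high_w1 = income_w1_bmw > 3.0
    high_w4 = income_w4_bmw > 3.0
    if high_w1 and high_w4:
        return "high income-stable"
    if high_w4:          # low at wave 1, high at wave 4
        return "increase income"
    if high_w1:          # high at wave 1, low at wave 4
        return "decrease income"
    return "low income-stable"

assert income_change_group(5.0, 2.0) == "decrease income"
```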
Social networks changes
The numbers of family members and friends in social networks were measured in waves 1 and 4 using the same questions utilised in the Whitehall study [21]: "How many family members/friends do you feel comfortable with and can talk about almost everything?" [22] Participants were classified into four groups: "Large social networks stable": ≥ 3 family members and friends at waves 1 and 4; "Increased social networks size": ≤ 2 at wave 1 and ≥ 3 at wave 4; "Decreased social networks size": ≥ 3 at wave 1 and ≤ 2 at wave 4; "Small social networks stable": ≤ 2 family members and friends at waves 1 and 4. Large and small social networks thus represent greater and smaller numbers of social relationships between the participant and their family members and friends, respectively.
Covariates
Demographic and socioeconomic characteristics assessed at wave 1 (1999) were analysed as potential confounders of the influence of income changes and social network changes on SROH trajectories, according to a previously published theoretical model [17]. The covariates included age, sex (male; female), self-reported skin colour (white; brown/pardo; black; other), marital status (single; married; divorced; widowed), and educational attainment (≤ 10 years; 11-15 years; ≥ 16 years).
Pilot study
A pilot study involving 1120 temporary civil servants at the same university campuses who were not eligible to participate in the main study was conducted to assess the temporal reliability of the instruments. Kappa coefficients and intra-class correlation (ICC) coefficients were used to assess reliability for the test-retest categorical responses to the SROH question and for the number of social networks of family members and friends, respectively. SROH showed very good test-retest reliability (Kappa = 0.80; 95% CI = 0.69-0.89) [20]. ICC coefficients for social networks of family members and friends were 0.70 (95% CI = 0.62-0.77) and 0.77 (95% CI = 0.70-0.82), indicating moderate and good reliability, respectively [23].
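For reference, the test-retest agreement statistic can be computed as below; the responses are invented, and scikit-learn's unweighted kappa is used as a stand-in (a weighted variant may be closer to what was actually computed for this ordinal scale).

```python
from sklearn.metrics import cohen_kappa_score

# Invented test-retest responses standing in for the pilot data.
test = ["good", "fair", "poor/bad", "very good", "good", "fair"]
retest = ["good", "good", "poor/bad", "very good", "fair", "fair"]

# Unweighted kappa by default; for an ordinal scale a weighted variant
# (weights="linear" or weights="quadratic") may be preferable.
kappa = cohen_kappa_score(test, retest)
print(f"test-retest kappa = {kappa:.2f}")
```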
Statistical analysis
All variables were compared between participants with missing data (N = 940) and those with complete data (N = 2118) using the t-test and Pearson's chi-square test for continuous and categorical variables, respectively. The distributions of demographic and socioeconomic characteristics, income groups, and social network groups were presented according to SROH trajectory groups using means (SD) and proportions for continuous and categorical variables, respectively.
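A minimal sketch of these two comparisons, using scipy with synthetic data (all numbers invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic age samples for the complete-data and missing-data groups.
age_complete = rng.normal(39, 9, size=2118)
age_missing = rng.normal(42, 9, size=940)
t_stat, p_age = stats.ttest_ind(age_complete, age_missing)

# Synthetic 2x2 table (rows: complete/missing data; columns: male/female).
table = np.array([[902, 1216],
                  [420, 520]])
chi2, p_sex, dof, expected = stats.chi2_contingency(table)

print(f"age: t = {t_stat:.2f}, p = {p_age:.3f}; sex: chi2 = {chi2:.2f}, p = {p_sex:.3f}")
```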
Ordinal logistic regression was carried out to assess the influence of changes in income and changes in the number of family members and friends in social networks on self-rated oral health trajectories over the study period. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated for the independent variables using the logit function. "Good-stable SROH" was the reference category for the outcome variable. The reference categories for change in income and change in social networks of family members and friends were "high income-stable" and "large social networks stable", respectively. Substantial correlations, assessed with Spearman's coefficients, were observed between income in 1999 and educational attainment (ρ = 0.570), income in 2012 and educational attainment (ρ = 0.543), number of social networks in 1999 and educational attainment (ρ = 0.164), and number of social networks in 2012 and educational attainment (ρ = 0.183). Four statistical models were tested. Model 1 assessed the crude association of income groups and social network groups with SROH trajectories. In Model 2, the income and social network variables were adjusted for each other. Model 3 added the demographic variables (age, sex, and self-reported skin colour). The socioeconomic variable marital status was added in Model 4. Educational attainment was not included in the regression models due to collinearity with the exposures. All analyses were carried out using IBM SPSS Statistics 25.0 (SPSS, Chicago, IL, USA).
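Since the analyses were run in SPSS, no analysis code accompanies the paper; the sketch below shows an equivalent ordered logit in Python's statsmodels, with all file and column names hypothetical.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical analytic file; all column names are assumptions.
df = pd.read_csv("prosaude_analytic_sample.csv")

# Order the outcome so that higher categories mean worse trajectories.
df["sroh_traj"] = pd.Categorical(
    df["sroh_traj"],
    categories=["Good-stable SROH", "Changed SROH", "Poor-stable SROH"],
    ordered=True,
)

# Model 4 covariate set; educational attainment is omitted (collinearity).
exog = pd.get_dummies(
    df[["income_group", "network_group", "sex", "skin_colour", "marital_status"]],
    drop_first=True,
).assign(age=df["age"]).astype(float)

fit = OrderedModel(df["sroh_traj"], exog, distr="logit").fit(
    method="bfgs", disp=False
)
print(fit.summary())  # exponentiate the coefficients to obtain odds ratios
```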
Ethical aspects
The present research was approved by the Research Ethics Committee of the Institute of Social Medicine, State University of Rio de Janeiro (CAAE 0041.0.259.000-11). Informed consent was obtained before data collection.
Results
Demographic and socioeconomic characteristics, income groups, number of family members and friends in the social networks, and self-rated oral health of participants excluded due to missing data (N = 940) and of those with complete data (the analysed sample; N = 2118) are presented in Table 1. Participants with complete data were younger, had greater educational attainment, and had higher income than those with missing data. The number of family members and friends in the social networks and self-rated oral health did not differ between participants with and without complete data.
The mean age of the sample at wave 1 data collection was 39.1 years, ranging from 22 to 67 years. Participants were predominantly female (57.4%). Of the sample, 41.3% had at least 16 years of education and 56.2% had a per capita family income of up to six Brazilian minimum wages. Stable high income was the most frequent income pattern (35.6%), and most participants had a stable large number of family members and friends in their social networks (62%) during the study period. The majority of the sample (85.6%) was in the good-stable SROH group (Table 1). Table 2 presents the distribution of demographic and socioeconomic characteristics, income groups, and number of family members and friends in the social networks across the SROH trajectory groups. Younger participants, females, those with greater educational attainment, and those in the high income-stable and large social networks-stable groups were more prevalent in the good-stable SROH group than their counterparts.
Ordinal logistic regression models estimated the associations of income groups and of the number of family members and friends in the social networks with the SROH trajectory groups (Table 3). In the crude analysis (Model 1), all categories of the income and social network groups predicted worse SROH over the 13-year period. Both exposures remained associated with worse SROH after mutual adjustment (Model 2) and adjustment for demographics (Model 3). In the final model (Model 4), adults in the increase income, decrease income, and low income-stable groups had 2.65 (95% CI 1.17-4.38), 1.78 (95% CI 1.25-2.54), and 2.44 (95% CI 1.68-3.55) times higher odds of worse SROH than those in the high income-stable group. In addition, adults with decreased social networks and stable small social networks of family members and friends were 58% (OR = 1.58, 95% CI 1.07-2.34) and 98% (OR = 1.98, 95% CI 1.38-2.85) more likely to report worse SROH than those in the stable large social networks group.
Discussion
The present longitudinal study confirmed the hypothesis that downward social mobility and experiencing low income during adulthood increase the risk of worsening oral health over time. Furthermore, the hypothesis that a decrease in the social networks of family members and friends and a persistently small number of social network members during adulthood predict worsening oral health was confirmed. Thus, enduring low income and small social networks seem to negatively influence SROH over time in adults.
Our findings support the social mobility hypothesis, one of the life course models in dental research, which suggests that health inequalities are influenced by different social trajectories [4]. Overall, the present results are in accordance with previous longitudinal analyses of the impact of downward social mobility on oral health inequalities in adults [5][6][7][8]. However, there is no consensus on the influence of upward mobility and stable low social position on oral health during adulthood [8][9][10]. A recent systematic review on the influence of social mobility on tooth loss concluded that individuals in the upwardly mobile, downwardly mobile, and persistently low socioeconomic groups were more likely to have tooth loss than those with persistently high social status [3]. The existing dental literature has also shown a relationship between income decrease over a three-year follow-up and poor SROH, as well as an association of downward and stable low social mobility with the number of teeth during adulthood [8,10]. It is important to emphasize that oral health outcomes were assessed only at the end of the follow-up period in these previous studies [8,10]. The use of different oral health outcomes, different follow-up periods, and distinct measures of socioeconomic position might also explain the discrepancies.
This study brings original evidence on the long-term importance of the number of social networks for adults' oral health trajectories, since participants experiencing a decrease in social networks and persistently small social networks over the 13-year study period reported worse SROH trajectories. Our findings may suggest that the effects of persistently small social networks gradually accumulate over time and impact self-rated oral health. This mechanism considers the amount and duration of exposures and proposes that 'wear-and-tear' adds up over time to affect health [21]. Previous cross-sectional studies showed a relationship between social networks and subjective oral health outcomes [11][12][13][14][24], but others failed to report such an association [25]. Similar to our findings, the number of close ties was associated with poor self-rated oral health among English adults aged 50 years or older [23]. Moreover, social network measures, including frequency of meeting friends and participation in sports and hobby clubs, were associated with the self-reported number of teeth among elderly people [11,12]. Social networks of friends were also inversely associated with poor self-rated health in pregnant and post-partum women [14]. However, a study involving older American adults did not find an association between social networks and self-rated oral health [25]. Methodological differences, including study design, measurements of social networks and subjective oral health, and differences in the demographic characteristics of participants, might explain the discrepancies between the studies' findings.

Table 1. Demographic and socioeconomic characteristics, income groups, number of family members and friends in the social networks, and self-rated oral health of participants excluded due to missing data and of those with complete data.
The present study has some limitations that should be considered. Using a single question to assess SROH may have resulted in an unspecific outcome measure. The adoption of multi-item questionnaires to assess SROH is recommended in future research, since they are considered more sensitive measures. Social networks were assessed according to the participants' perception of the number of family members and friends with whom they had close social ties. Thus, the quality of the social networks was not considered in this study. In addition, nearly 30% of the participants in wave 4 were excluded from the analysis due to missing data. Although SROH and the number of social networks did not differ between participants with missing data and the final analytic sample, the latter included a greater proportion of adults in the reference category of the income groups (the 'high income-stable' group). Thus, selection bias might have underestimated some of the reported associations between income groups and SROH. This seems to be the first longitudinal study on social mobility and oral health involving a cohort of employed adults from one Brazilian university. Although this resulted in a high retention rate of nearly 70% after 13 years, our findings should not be generalised to other populations. Most participants (85.6%) reported good-stable SROH, which may also impose some restrictions on extrapolating our findings. A possible reason for the high proportion of participants reporting stable SROH is the moderate-to-high socioeconomic status of the studied sample, since nearly 73% and 76% of the participants reported, respectively, a per capita monthly income ≥ 3 Brazilian minimum wages (US$171.51/month) and 11 or more years of education at baseline [17]. A previous study using data from a nationally representative sample of adults living in Brazil showed that poor self-reported oral health measures were strongly associated with lower income and lower schooling [26].
The strengths of the present study were the assessment of income changes and social network changes over more than a decade of follow-up during adulthood and the evaluation of SROH trajectories in the same cohort. In addition, the regression models were fitted according to a theoretical model encompassing the study hypotheses [17]. Despite the above-mentioned critiques of the SROH measure used in this study, SROH is considered a comprehensive, reliable, and valid measure in epidemiologic research that correlates with clinical dental measures [19]. In addition, self-perceived oral health measures capture individual perceptions of subjective components related to health, including quality of life, well-being, and the oral impacts on physical and social functions [27].
The potential mechanisms by which income and the number of social networks may influence adults' perceived oral health over time include behavioural and psychosocial explanations [28]. Individuals from lower socioeconomic groups and those with small social networks tend to engage in health-damaging oral health behaviours. For instance, social position and social ties indirectly predicted adults' SROH via psychological distress and smoking when data from the present cohort were analysed [17]. Frequency of dental visits also mediated the link between social position and SROH in a previous study [17]. Yet, the relationship between social ties and SROH was not mediated by dental visits. A recent study revealed that young and middle-aged male adults with more close social ties were more likely to use preventive dental care [29]. The authors emphasized that social relationships may lead to higher compliance with health norms. It is interesting to note some important variations in per capita family income and in the number of social networks over the 13-year period, despite the fact that this was an adult population at the same workplace throughout the study. Around 30.1% and 11.8% of the participants experienced a decline in income and in the number of social networks, respectively. Another relevant aspect is that nearly 25% of the participants were in the low income-stable group, and 12.1% of the sample reported a stable small number of social networks at baseline and at the 13-year follow-up. The influence of income changes and social network changes on SROH draws attention and should prompt initiatives to tackle income-related oral health inequalities and to enhance adults' social ties, given their potential negative impact on physical and mental health [30]. Different types of social network interventions have been proposed, including enhancing existing network linkages, developing new social network linkages, or enhancing networks through community capacity building and problem solving [31].
Future studies should consider longer follow-up periods to evaluate the effects of social mobility and social network changes during working life, as well as during retirement, on people's oral health. Although self-rated oral health is considered a valid and comprehensive measure of oral health status [19], future studies should combine subjective and clinical dental measures as outcomes.
Conclusion
To the best of our knowledge, this is the first study to longitudinally evaluate the influence of income changes and changes in the number of social networks on SROH trajectories among adult workers. The present findings highlight the long-term influence of persistently low income and downward social mobility on SROH trajectories during adulthood. Moreover, adults with a decrease in social networks of family members and friends, and those with a small number of social networks over 13 years, were at higher risk of reporting worse SROH during the study period. Strategies to tackle income-related health inequalities and to enhance the number of social networks, by strengthening existing network linkages and/or developing new ones, should be considered to improve adults' oral health. | 2023-01-16T14:53:00.548Z | 2022-04-29T00:00:00.000 | {
"year": 2022,
"sha1": "5abca2c39b80d519be5de357bf7ca17e49220b0e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12903-022-02191-5",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "5abca2c39b80d519be5de357bf7ca17e49220b0e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
212575006 | pes2o/s2orc | v3-fos-license | Gut Microbiome and Human Health
The human gut microbiota is an exceptionally complex microbial community with a significant impact on human physiology. Comparative analyses of individual human gut microbiotas have revealed numerous mechanisms by which the microbiota adapts to the intestinal environment. Infections of the alimentary canal are a significant health problem for both adults and children worldwide. Alterations in the normal human gut microflora can lead to the development of intestinal disorders. Pathogenic bacteria alter intestinal ecology and colonization resistance. A healthy gastrointestinal microbiota forms a barrier against invasive organisms. Normal intestinal microbes and some probiotic bacteria can enhance the host's defense mechanisms against pathogens. They can also improve intestinal immunity by adhering to the intestinal mucosa and stimulating local immune responses. Maintaining a balanced intestinal ecology improves the ability to preserve intestinal integrity. The microbiota of cancer patients differs from that of healthy individuals; moreover, the chemotherapy that cancer patients receive affects the microbiota and may cause further illness.
Introduction
A large diversity of microorganisms resides within the mammalian gastrointestinal tract, their number being around ten times greater than the total number of mammalian somatic and germ cells [1]; these sizable populations result from the tract's many different niches with distinct physicochemical conditions [2].
The microbiota inhabiting the GI tract constitutes a complex ecosystem and plays an important role in maintaining host physiological homeostasis [3]. A large body of research has investigated the gut flora composition in humans [4] and revealed its relationship to diseases [5]. However, most of the samples utilized in these studies were faecal [6] or were obtained primarily through hospital-based endoscopic biopsies [7]. Major functions of the gut microflora include metabolic activities that result in salvage of energy and absorbable nutrients, important trophic effects on intestinal epithelia and on immune structure and function, and protection of the colonized host against invasion by alien microbes.
Gut flora may also be an important factor in certain pathological disorders, including multisystem organ failure, carcinoma, and inflammatory bowel diseases. Even so, bacteria are also beneficial in promoting human health. Probiotics and prebiotics are known to play a role in the prevention or treatment of some diseases [5]. Due to limitations in human research, murine models such as the rat have become crucial in studies of the gut microbiota designed to obtain mechanistic insights into its different anatomical regions. As a long-standing model in biomedical research, rats have recently been utilized in numerous studies exploring the correlations between the intestinal microbiota and various types of diseases, along with comparative characterization of the normal rat microbiota landscape [8][9][10]. The rat's large intestine houses a more complex micro-ecosystem, which is supported by earlier research indicating that the number of bacterial species in rat feces was 2-3 times higher than that of human feces at the same sequencing effort [11].
Lactobacillus, a typical commensal organism, is usually highly abundant in the laboratory rodent gut; it predominates in the stomach and upper part of the small intestine and is associated with keratinized cells of the nonglandular portion of the stomach, where it controls the population levels of other bacterial species [12]. In the lower part of the small intestine, another lactate- The saccharolytic bacteria residing in the outer mucus layer may digest mucin glycans [13] and help make polypeptides more accessible to proteolytic bacteria, enhancing syntrophic interactions. Staphylococci isolated from a conventional rat specifically colonized the keratinized cells of the nonsecreting epithelium of the stomach when the rats were free from lactobacilli. This colonization was not observed after inoculation of lactobacilli into the rats [14].
Chemotherapy and microbiota
The gut microbiota of cancer patients is altered, as confirmed by the study of [15]. The intestinal bacteria Clostridium leptum and C. coccoides are frequently cited as potential aetiological factors in colorectal cancer initiation and progression, and both are significantly altered in colorectal cancer and polypectomized subjects compared with controls [15]. Additionally, during chemotherapy the gut microbiota that protects the body from infection is affected, as confirmed by the study of [16], which analyzed the impact of thirty-six cycles of chemotherapy treatment.
Conclusion
The gut microbiota consists of many trillions of bacterial cells, exceeding the number of somatic and germ cells in the human body by a factor of ten. These bacteria serve as a barrier against pathogens and are crucial for the development of the host immune system. Further interesting data that may be obtained through this study strategy concern the impact of chemotherapy on the human gut microbiota. Eventually, it will become possible to identify microbiota that can serve as biomarkers of health, of disease states, and of treatment. Finally, identifying the gut microbiota of individual patients will be important in order to guide the treatment of disease. | 2020-03-07T16:00:59.063Z | 2019-01-03T00:00:00.000 | {
"year": 2019,
"sha1": "a417a845b9245c1f28cfcfa49adcfc91d4873a5c",
"oa_license": "CCBY",
"oa_url": "https://lupinepublishers.com/complementary-alternative-medicine-journal/pdf/OAJCAM.MS.ID.000116.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8c667347b51ad07154b7221c3be930c8935f20e2",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
1156717 | pes2o/s2orc | v3-fos-license | Standardized Loads Acting in Knee Implants
The loads acting in knee joints must be known for improving joint replacement, surgical procedures, physiotherapy, biomechanical computer simulations, and to advise patients with osteoarthritis or fractures about what activities to avoid. Such data would also allow verification of test standards for knee implants. This work analyzes data from 8 subjects with instrumented knee implants, which allowed measuring the contact forces and moments acting in the joint. The implants were powered inductively and the loads transmitted at radio frequency. The time courses of forces and moments during walking, stair climbing, and 6 more activities were averaged for subjects with I) average body weight and average load levels and II) high body weight and high load levels. During all investigated activities except jogging, the high force levels reached 3,372–4,218N. During slow jogging, they were up to 5,165N. The peak torque around the implant stem during walking was 10.5 Nm, which was higher than during all other activities including jogging. The transverse forces and the moments varied greatly between the subjects, especially during non-cyclic activities. The high load levels measured were mostly above those defined in the wear test ISO 14243. The loads defined in the ISO test standard should be adapted to the levels reported here. The new data will allow realistic investigations and improvements of joint replacement, surgical procedures for tendon repair, treatment of fractures, and others. Computer models of the load conditions in the lower extremities will become more realistic if the new data is used as a gold standard. However, due to the extreme individual variations of some load components, even the reported average load profiles can most likely not explain every failure of an implant or a surgical procedure.
Introduction
Why are standard loads needed?
Knowledge of contact forces and moments acting in the tibiofemoral joint is needed for testing wear, fatigue, or strength of implants, for analyses of strain distribution and remodeling at the fixation area, and for other purposes. Reliable data can also serve as a 'gold standard' for the verification of analytical musculoskeletal models. Realistic finite element models of natural knee joints including the surrounding soft tissues permit the calculation of the mechanical situation in structures such as cartilage, ligaments, or menisci, for example in cases of injuries, or permit the investigation of the biomechanical consequences of surgical interventions.
Loading of the knee joint primarily depends on the physical activity. It is also determined by body weight (BW), but individually differs greatly, even between subjects with the same BW [1]. This raises the question of which loads are appropriate to use for mechanical tests or analyses. For wear and fatigue those activities are most decisive which cause very high loads and additionally act most frequently. For static strength and fixation stability, even rarely acting extreme loads may additionally be important.
One could determine the load-time patterns during the most strenuous and frequent activities of daily living (ADL) as they act on average in subjects with an average body weight. These activities are walking and climbing stairs [2]. However, the median loads will then be higher in 50% of subjects and 50% of loading cycles, and this would not be adequate for use in strength or wear tests. A more justified approach would be to take data from subjects with a high BW and joint loads which are, relative to the BW, higher than in most other subjects. However, this may cause other problems because such high loads could lead to failures of small implants.
Calculation of knee contact loads
Contact loads in the knee joint can either be calculated or measured. To calculate the joint forces, kinematic data as well as ground reaction forces serve as input for inverse dynamic musculoskeletal models. However, substantial variations in the calculated forces exist. In most studies, contact forces of 200-400%BW (percent of the body weight) were calculated for level walking [3][4][5][6][7][8], but forces of 450%BW [9] and even up to 670%BW [10] have also been reported. Potential sources of error for such models are non-validated optimization criteria, insufficient modeling of muscles, and antagonistic muscle activities, amongst others.
Measurement of knee contact loads
Instrumented implants allow access to the joint contact forces in vivo. In previous studies, forces were measured in a distal femur replacement and transformed to the knee joint [11][12][13]. Peak axial forces of 220-250%BW were reported for level walking and 280%BW for descending stairs.
To measure the tibio-femoral contact force directly, instrumented knee implants were also developed by others. An initial design measured the axial force and the center of pressure [14], and a second design enabled the measurement of all six force and moment components [15]. Load data was reported for 1-3 subjects. During walking, forces between 180 and 280%BW were measured [16]. With respect to daily activities, the highest forces, approximately 350%BW, occurred during stair ascending and descending [17]. During all investigated activities, the shear forces were substantially lower than the axial forces [18]. Peak anterior shear forces of 30%BW were observed during walking.
The instrumented knee implant, developed by us, measures the tibio-femoral contact forces and moments in vivo [19]. The electronics in the tibial component are powered inductively and transmit the six load components telemetrically at radio frequency with a measuring error below 2%. During the measurements, the patient's activities are video-taped and recorded together with the loads. Additionally, gait data can also be captured. Synchronous load and video data from many activities can be accessed from the free public database www.OrthoLoad.com, including selected data from this study.
The instrumented implant is based on the INNEX knee (Zimmer GmbH, Winterthur, Switzerland), has an ultracongruent tibial insert, and requires sacrificing the cruciate ligaments. It therefore also transfers load components which are taken up by the ligaments in cruciate ligament retaining implants or in the natural knee. If such implants or the native joint are to be tested or analyzed, they have to be modeled by finite elements and compared to models of the instrumented implant, applying the same loads. This would allow separating the fractions of loads transferred by the soft tissues and by the tibial-femoral contact areas.
Wear test standard ISO 14243
The test standard ISO 14243-1 [20] defines loads for testing wear in knee implants. The axial force, a/p force, and rotation torque can be compared to the load components F_z, F_y, and M_z now measured in vivo. ISO only describes the loads during walking. They were obtained 25 to 43 years ago from analytical musculoskeletal models and gait data [3,9] and were edited for the test purpose in 2000 [21]. Because mathematical modeling has advanced considerably since then, it can be expected that the new in vivo data deviate from the ISO loads. This expectation is supported by a comparison of the axial ISO force with the resultant forces during walking, obtained analytically as well as measured in our patients [22]. During the first 60% of the stance phase, both loads differed markedly.
Goals of this study
The goal of this study was to standardize forces and moments acting in knee implants, based on in vivo data. These loads should be suitable as a realistic basis for experimental or analytical studies on wear, fatigue, strength, fixation stability, bone remodeling, or soft tissue loading around the implant. Different classes of loads should be defined as: average loads, high loads, and extreme loads of single force or moment components. Furthermore, the loads defined in the wear test standard ISO 14243 should be compared to the measured values. Based on previous measurements, we hypothesized that the ISO loads are much lower than the measured loads.
Ethics Statement
The study was approved by the Charité Ethics committee (EA4/069/06) and registered at the 'German Clinical Trials Register' (DRKS00000606). All patients gave written informed consent prior to participating in this study.
Coordinate system and measured loads
The coordinate system used is fixed relative to a right-sided implant. Its origin is located in the middle of the tibial plateau at the height of the lowest part of the polyethylene insert [1]. The positive force components F_x and F_y act in lateral and anterior directions, respectively. The axial force component is reported here as -F_z (with a negative sign) and always acts distally in the direction of the implant shaft. Positive moments M_x, M_y, and M_z turn clockwise around their axes during flexion, abduction, and outer rotation of the tibia, respectively. Positive values of M_x/M_y can be caused not only by frictional torque but also by a posterior/lateral shift of the axial force -F_z. The resultant force F_res and the resultant moment M_res are calculated from their respective components.
If load components have to be transformed from the implant-based system, used here, to a tibia-based system, the slope of the implants must be respected (Table 1). Relative to the long axis of the tibia, the implants are rotated backwards (positively) around the x-axis by the listed slope angles.
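As a worked illustration, the sketch below rotates a load vector from the implant-based into a tibia-based system. The slope and load values are hypothetical, and the sign of the rotation must be checked against the convention of Table 1.

```python
import numpy as np

def rotate_about_x(vec: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate a force or moment vector about the x-axis by angle_deg."""
    a = np.radians(angle_deg)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(a), -np.sin(a)],
                    [0.0, np.sin(a), np.cos(a)]])
    return rot @ vec

# Implant-based load [F_x, F_y, F_z] in N (values purely illustrative).
f_implant = np.array([50.0, -300.0, -3372.0])

# The implant is rotated backwards (positively) about x by the slope angle,
# so rotating by -slope maps implant coordinates onto tibia coordinates.
slope_deg = 7.0  # hypothetical posterior slope
f_tibia = rotate_about_x(f_implant, -slope_deg)

f_res = np.linalg.norm(f_implant)  # resultant force from its components
```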
In the following sections, the terms "peak" force, "peak" component, etc. denote absolute or relative minima or maxima and can be positive or negative. The term "load" indicates either a force, a moment, or a combination of force and moment.
Measurements
8 subjects with instrumented knee implants participated in this study (Table 1). All subjects obtained the implant due to gonarthrosis and had regained good walking abilities at the time the measurements were taken. Measurements during 7 ADL were performed at 2 postoperative dates (Table 2). The step height of the staircase was 20 cm and the seat height was 45 cm (50 cm for subject K6L). The subjects walked at a self-selected speed of approximately 4 km/h. Data from jogging at 6 km/h on a treadmill were also collected in the 3 subjects willing to perform this exercise. The jogging data does not allow statistical evaluations, but can serve as a basis for judging the severity of the loads during the ADL. Kinematic data was synchronously recorded by 12 cameras (Vicon, Oxford, UK) on the first postoperative date only (Table 1). More trials from the second postoperative date were added to broaden the data basis when searching for the trials with the absolute highest extreme values of F_res (PEAK100, see below) or of single components (EXTREME100).
For evaluation of the loads during walking, single steps were separated, which started and ended with foot contact. Stair climbing cycles were separated at the force minima during the swing phase. Cycles from all other activities were separated with additional time intervals at the beginning and end of the exercise. Evaluation of the data is described in the following sections as performed on the forces; analogous procedures were applied when analyzing the moments.
Average and high body weight
An average and a high BW were defined, based on data from large studies conducted on the American [23] and German [24] populations. The given BWs of subjects between 60 and 69 years of age were averaged between the females and males of both studies. The average BW was 74.7 kg and 2.3% of the population had a BW above 101.5 kg. For our study, we defined an average BW of 75 kg and a high BW of 100 kg.
The 3 force and 3 moment components were measured in %BW and %BWm (percent of body weight times meter), respectively. These loads were multiplied by 7.36 (9.81 * 75/100) to convert them to N and Nm, respectively, for subjects with an average BW, and by 9.81 for those with a high BW. If average/high loads in subjects with a BW of X kg instead of 75/100 kg need to be known, the data given in N or Nm must be multiplied by X/75 or X/100, respectively.
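The conversion is simple enough to state as a one-line rule; a minimal sketch, with the paper's two factors recovered as a check:

```python
def pct_bw_to_newton(load_pct_bw: float, body_weight_kg: float) -> float:
    """Convert a load in %BW to newtons: (%BW / 100) * m * g."""
    return load_pct_bw / 100.0 * body_weight_kg * 9.81

print(pct_bw_to_newton(1.0, 75.0))   # ~7.36 N per %BW (average BW)
print(pct_bw_to_newton(1.0, 100.0))  # 9.81 N per %BW (high BW)
```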
Basic averaging method
The basic averaging procedure combined n loading cycles (Table 2). Averaging started with the resultant force F_res using the following 'time warping' procedure [25] (the software can be downloaded from www.OrthoLoad.com). First, all n cycle durations were standardized to '100% cycle' and an average cycle time T_c was determined. Then, the time scales of all of the cycles were deformed non-uniformly in such a way that the squared differences between all of the n time-deformed functions of F_res, summed over the whole cycle time, became a minimum. The obtained deformation of the time scale of each single cycle is called its 'warping path'. The arithmetic mean pattern of F_res was finally calculated from the deformed patterns of all of the cycles and named the 'average' pattern. This method minimizes the sum of the squared differences of F_res between the cycles evenly over the whole cycle time and preserves the typical characteristics of the analyzed patterns, such as their extreme values. If, for example, a relative force maximum occurs in only 50% of the n cycles, but at strongly varying times, half of its average height will be present at an average time in the final curve.
Determination of the warping paths by analysis of F_res was chosen because the characteristics of all 3 force components, such as relative extrema, and their locations within the loading cycles are inherent in the force-time pattern of F_res.
The warping path of each cycle, obtained by the described analysis of F_res, was then applied to the corresponding 6 load components so that they maintained their synchronization. From the time-deformed components of the n cycles, their arithmetic mean patterns were calculated. This averaging process was performed on load data which had been normalized to each subject's individual body weight.
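A minimal sketch of the duration-normalisation and averaging steps is given below. It uses uniform resampling only; the non-uniform warping-path optimisation described above is not reproduced here (the actual software can be downloaded from www.OrthoLoad.com).

```python
import numpy as np

def resample_cycle(signal: np.ndarray, n: int = 101) -> np.ndarray:
    """Linearly resample one loading cycle onto a 0-100 %-cycle grid."""
    t_old = np.linspace(0.0, 100.0, num=len(signal))
    t_new = np.linspace(0.0, 100.0, num=n)
    return np.interp(t_new, t_old, signal)

def average_cycles(cycles: list) -> np.ndarray:
    """Arithmetic mean of duration-normalised cycles (e.g. of F_res).

    Uniform time normalisation only; the paper's additional non-uniform
    warping, which minimises the summed squared differences between the
    cycles, is intentionally not reproduced here.
    """
    return np.mean([resample_cycle(np.asarray(c)) for c in cycles], axis=0)
```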
Average loads 'AVER75' for subjects with average body weight
The resultant forces F_res from several loading cycles of each subject were first averaged intra-individually (curves S1 to S3 in Figure 1A). The cycles obtained from the 8 subjects were then averaged inter-individually in %BW (curve Sa with the peak value P1 in Figure 1A) and the obtained loads were finally re-calculated for a BW of 75 kg by multiplication with 7.36 (9.81*75/100; curve with the peak value P4 in Figure 1B). This procedure delivered the force pattern AVER75, which represents the average force in subjects with a BW of 75 kg. Identical procedures were applied to all force and moment components.
High loads HIGH100 for subjects with high body weight
The AVER75 pattern (curve with the peak value P4 in Figure 1B) was multiplied by 1.33 * F_H. The factor 1.33 increased the BW to the high value of 100 kg. The additional factor F_H was the quotient between the highest intra-individual average found in any of the subjects (P2 in Figure 1A) and the inter-individual average of all of the subjects (P1 in Figure 1A). The obtained HIGH100 loads can act in subjects with a BW of 100 kg (e.g. in 1 out of 8 subjects in our study). All factors were applied in the same way to all load components in the AVER75 data. Because the HIGH100 loads acted on average in 1 out of 8 investigated subjects, such high loads are common in reality. Therefore, the presentation and discussion of the loads is focused on the HIGH100 loads. The AVER75 pattern can be obtained from the HIGH100 pattern by multiplication with the factor C_aver. A low C_aver value indicates a high variation in F_res between the investigated subjects. A C_aver value of 50%, for example, would indicate that, for the same activity, the peak value of F_res in one of the investigated subjects was twice as high as the average of all investigated subjects.
Peak loads 'PEAK100' for subjects with high body weight
In the AVER75 patterns of F_res, obtained from all the investigated subjects and all the loading cycles, the single trial was identified (T3 in Figure 1A) which had the absolute highest peak value P3. The load components from this trial were multiplied by 1.33 * F_P (Figure 1B). F_P was the quotient between the highest peak value of any trial (P3 in Figure 1A) and the inter-individual average of all subjects (P1 in Figure 1A). The obtained pattern was named 'PEAK100' and represents the absolute highest force F_res that could act during occasional trials in subjects with a BW of 100 kg. A high factor C_peak between the HIGH100 and the PEAK100 loads indicates that the variation of the HIGH100 loads from trial to trial is large.
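Both the HIGH100 and the PEAK100 scaling reduce to a single multiplication of the AVER75 curves; a minimal sketch with hypothetical factor values:

```python
import numpy as np

def scale_loads(aver75: np.ndarray, factor: float) -> np.ndarray:
    """Scale AVER75 curves (BW 75 kg) to a BW of 100 kg and multiply by
    F_H (giving HIGH100) or F_P (giving PEAK100)."""
    return aver75 * (100.0 / 75.0) * factor

aver75_fres = np.array([500.0, 2200.0, 1800.0])  # illustrative samples in N
f_h = 1.4                                        # hypothetical ratio P2 / P1
high100_fres = scale_loads(aver75_fres, f_h)
```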
Extreme load components 'EXTREME100' for subjects with high body weight
The procedures described above, used to define the standardized average, high, and peak loads, solely depend on the analysis of the resultant force F_res and its peak values. Therefore, all load components in the AVER75/PEAK100 data differ only by the factors C_aver/C_peak from the same components in the HIGH100 data. This means that the load directions during the whole loading cycle are the same for each of the 3 load levels. When testing wear or strength of implants, the load directions in addition to the load magnitudes influence the results. A smaller force can be more detrimental than a higher force when it acts in a different direction, for example.
The peak values of some components vary intra-individually much more than F_res. This indicates that the resultant force and/or moment acts in directions which can deviate greatly from the directions determined by the average components. Such effects cannot be detected when only analyzing the average force and moment components. Therefore, selected relative minima/maxima in the time courses of the 6 load components were specified and their lowest/highest values were determined from the data of all subjects and all single trials. Included in this analysis were the data generated from both measurement sessions (Table 2), to increase the number of evaluated trials. The obtained values were named the 'EXTREME100' load components. Extreme values of single components may be suited for analyzing the mechanical causes of atypical implant failures due to loosening, excessive wear, breakage, or other factors.
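A sketch of the selection rule for one peak, with illustrative values; treating absent peaks as excluded (passed as None below) is an assumption mirroring the exclusion described in the Results.

```python
import numpy as np

def extreme_peak(peak_values: list, maximum: bool = True) -> float:
    """Return the highest relative maximum (or lowest relative minimum) of
    one selected peak across all trials; trials in which the peak is absent
    are passed as None and excluded."""
    values = np.array([v for v in peak_values if v is not None])
    return float(values.max() if maximum else values.min())

# Illustrative values of one F_x peak across trials (N); None = peak absent.
print(extreme_peak([45.0, 61.0, None, 88.0, 292.0]))  # -> 292.0
```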
Knee flexion angle
The 3D kinematics of each subject's lower limbs were measured using reflective markers attached to the skin and tracked at 120 Hz using a 12-camera motion capture system (Vicon, Oxford, UK). The marker set consisted of 46 markers placed on the subjects' legs and pelvis [26]. The method used for determining the skeletal kinematics has been described in detail previously [27].
The same warping paths, obtained when averaging the resultant force F_res from single cycles or subjects, were applied to the synchronously measured knee flexion angle. The obtained flexion-time patterns are valid for all standardized loads (AVER75, HIGH100, and PEAK100).
Results
All values of the load components and their resultants, stated in the following sections, refer to the HIGH100 loads. The HIGH100 data, collected during the different activities, are charted in the diagrams of Figures 2 to 5 with the left scales. Additional right scales allow reading the AVER75 data from the same diagrams. The C_aver and C_peak values, required for calculation of the AVER75 and PEAK100 loads from the HIGH100 loads, are listed in Table 2 and indicated in Figures 2 to 5. Table 2 also lists the average cycle times T_c from the data collected at the second postoperative date.

Figure 1. Schematic illustration with fictive data from 3 subjects. Top (A): S1 to S3 = intra-individual averages in %BW; curve with P1 = inter-individual average of S1 to S3; curve with P2 = highest intra-individual average of any of the subjects; F_H = multiplication factor between P2 and P1 for calculation of HIGH100 from AVER75 values; T1 to T3 = 3 single trials with the highest peak values; curve with P3 = trial with the highest peak value ever measured; F_P = multiplication factor between P3 and P1 for calculation of PEAK100 from AVER75 values. Bottom (B): curve Sa (in %BW) from the top diagram; curve with P4 = AVER75 = average load in N for BW = 75 kg; curve with P5 = HIGH100 = high load in N for BW = 100 kg; curve with P6 = PEAK100 = peak load in N for BW = 100 kg; F_H and F_P = factors for calculation of HIGH100 and PEAK100 values from AVER75 values; C_aver and C_peak = multiplication factors for calculation of AVER75 and PEAK100 values from HIGH100 values. doi:10.1371/journal.pone.0086035.g001
Resultant force F_res and axial force component -F_z (upper diagrams in Figures 2 to 5)
Because the negative axial component -F_z always nearly equals F_res, the data and findings for F_res can approximately be transferred to -F_z. When comparing the highest forces from all investigated activities except jogging, it becomes obvious that their peak values are very close together, encompassing a range of 3,372-4,218N (Figure 6).
During walking and ascending or descending stairs, F_res always had two maxima during each loading cycle. During walking, the second peak, which occurred at the instant of contralateral heel strike (3,372N), was larger than the first peak, at the instant of contralateral toe off (2,848N). During ascending or descending stairs, both peaks were higher than those during walking; their magnitudes were all similar, between 3,718 and 4,218N. During the one-legged stance, F_res reached a height similar to that of the second peak during walking.
The peaks of F_res during exercises with 2-leg support did not deviate much from the peaks that occurred when only one leg temporarily supported the whole BW. Rising from a chair with a maximum knee flexion angle (KF) of 94° or sitting down (94° KF) caused nearly the same peak values (3,792 and 3,697N, respectively). During the knee bend exercise, the peak was lower (3,407N) than that during rising from a chair, although the knee was flexed slightly more (98° KF).
During jogging, only one force maximum was observed. The peak force of 5,165N was 53% higher than the maximum force which acted during walking.
When the AVER75 forces F_res were expressed in %BW, we obtained 226/267%BW for the 1st/2nd peak during walking, 311/305%BW (1st/2nd peak) when ascending stairs, and 280%BW (maximum) when rising from a chair. The forces F_z had nearly the same values.
Transverse forces F_x and F_y (upper diagrams in Figures 2 to 5)
The medial-lateral force F_x was small during all investigated activities. Except for jogging, the forces in the medial direction (F_x < 0) were always smaller than 100N. Force values higher than 100N in the lateral direction (F_x > 0) were only observed when ascending stairs (167N) or jogging (246N).
The peak values of the anterior-posterior force F_y were always larger than those of F_x. During walking, ascending and descending stairs, as well as during the one-legged stance, the peak values of F_y nearly always acted in the posterior direction (F_y < 0). With a range of -255N to -326N, the peak values had similar magnitudes for all 4 activities. The highest force recorded in the posterior direction was -699N and occurred during jogging.
The forces recorded in the anterior direction (F_y > 0) were generally much smaller than those acting in the posterior direction. Forces between 102N and 137N were recorded during walking and during ascending or descending stairs. The highest values, up to 189N, were measured during jogging. Although the flexion angles during knee bends and when sitting down or standing up were higher than during the other activities (Figure 6), the positive forces F_y stayed very low and did not exceed 94N. Alternating directions of F_y within the same loading cycle and values above 100N were only found during walking, climbing stairs, and jogging.

Table 2 note: C_aver = factor used to convert all HIGH100 load components to AVER75 components; C_peak = factor used to convert all HIGH100 load components to PEAK100 components; T_c = average cycle time. Data averaged for 8 subjects and all trials; jogging data from only 3 subjects.

Figure 2 note: Because -F_z is nearly identical to F_res, the curve of -F_z is mostly invisible. doi:10.1371/journal.pone.0086035.g002
Torsional moment M_z (lower diagrams in Figures 2 to 5)
High M_z values, due to an outwards rotation of the tibia (M_z > 0), were only found during walking at the instant of contralateral toe off. Throughout the entire loading cycle of all of the other activities, M_z was close to zero or negative, even during jogging. The tibia then rotates, or tries to rotate, inwards. During all activities except the one-legged stance, the peak values of M_z were between -7.0 and -10.5 Nm. The largest negative torque was measured during walking at the instant of contralateral heel strike, and it was even higher than the torque measured during jogging. Walking was the only activity during which a moment M_z of non-negligible magnitude acted in alternating directions.
Transverse moments M_x and M_y (lower diagrams in Figures 2 to 5)
Although the knee movement alternates between flexion and extension during all activities except standing, the moment M_x in the sagittal plane was always positive or close to zero. Small negative values were recorded shortly before heel strike during jogging only. Positive values of M_x during extension phases cannot be caused by friction, but are the result of a posterior shift of -F_z. This shift causes a moment that counteracts and exceeds the friction moment. The positive patterns of M_x in the extension phases therefore indicate that such a posterior shift of the axial force occurs during all activities. Except for descending stairs, the peak values of M_x lay between 17 and 27 Nm. If friction around the x-axis is neglected, this corresponds to backwards shifts of -F_z by about 5 to 10 mm. If friction is realistically taken into account, the shift would be even larger. While descending stairs, the highest peak values (34 Nm) were measured.
While ascending or descending stairs and during the one-legged stance, the abduction moment M_y was negative throughout the whole loading cycle or at least most parts of it. This negative moment indicates an adduction of the tibia or a medial shift of -F_z. The magnitudes of M_y were close to -40 Nm, corresponding to a shift of -F_z of approximately 10 mm if friction is neglected. Small, positive values of M_y were found during the extension phases of walking and jogging, but the highest magnitudes of M_y were then also negative, with values of -38 and -47 Nm, respectively. Alternating directions of M_y were measured during knee bends and when standing up or sitting down. When standing up, M_y was 2.7 times higher than when sitting down.

AVER75 and PEAK100 loads (Table 2)
The multiplication factors C_aver or C_peak have to be applied to the HIGH100 loads to obtain the AVER75 or PEAK100 data. The AVER75 loads are much smaller than the HIGH100 loads. Depending on the activity, the AVER75 load values are only 53-60% of the HIGH100 loads. This indicates that the loads vary strongly inter-individually. The values of C_peak were between 1.02 and 1.09, i.e., the PEAK100 loads are no more than 9% higher than the HIGH100 loads.
Inter-individual variations of load patterns (Figure 7)
Only examples of the variation of the load components between the investigated subjects can be given here. Data from all activities and subjects is accessible from www.OrthoLoad.com (menu Test Loads).
The time courses of F_z (and therefore also of F_res) from the different subjects were relatively uniform for all activities, but there were large differences observed in the magnitudes. This difference in magnitudes can also be seen indirectly from the low values of C_aver (Table 2). For the cyclic activities of walking and jogging, the patterns of all of the components except F_x were relatively uniform. For all other activities, the time courses of F_x, F_y, and, to a lesser extent, the components M_x and M_y were extremely different between the subjects. The most pronounced inter-individual variations were found during the non-cyclic activities: standing, knee bends, and ascending and descending stairs.
Extreme load components EXTREME100 (Table 3)
Selected peak values of all load components were analyzed with respect to their extreme magnitudes, using data from all trials, all subjects, and from the two postoperative measurement sessions. The selected extrema are indicated and numbered in Figures 2 to 5. Because of the described inter-individual variations in the load patterns, the ranges of the selected peak values were sometimes difficult to determine (Figure 7). The average of a certain peak value ("A" in Figure 7) can be positive or negative. But in some subjects, the same peak value had an opposite sign ("S" in Figure 7), or did not even exist in others ("N" in Figure 7). These cases were excluded in the determination of the extreme peak values. The highest values of the relative maxima and the lowest values of the relative minima ("L" in Figure 7) are listed in Table 3.
The inter-individual variations of single load components can be estimated by comparing their ranges with the peak values indicated on the component curves in Figures 2 to 5. Three examples are given here: A) peak "2" of F_x during walking (Figure 2) had an average value of 45N, but an EXTREME100 value of 292N was measured in subject K1L (Table 3); B) peak "3" of M_y during walking (Figure 2) had an average value of 7.3 Nm, but an EXTREME100 value of 27.3 Nm in subject K8L; C) peak "2" of F_y during ascending stairs (Figure 3) had an average. Deviations of a factor of 5 or more were frequently observed between the average and individual peak values, especially in the transverse force and moment components.
Comparison of measured loads with standardized loads
In Figure 6, the 3 load components defined by the ISO standard 14243 for wear tests are compared with the same components in the measured HIGH100 data from all activities.
Comparison of ISO loads with data from walking
The ISO loads were defined to simulate walking. However, nearly all extrema in the time courses of the ISO components were smaller than the HIGH100 values. The 1st small maximum in the ISO course of the axial force -F z was absent in the measured data. The 2nd ISO maximum was only 9% smaller than the measured maximum, but the 3rd maximum was 39% smaller. For the anterior force F y , the first ISO peak was likewise absent in vivo, the 2nd ISO peak was 158% smaller, but the 3rd ISO peak was 43% larger than measured in vivo. The largest differences between the ISO standard and the values measured in this study were found for the torsional moment M z : the 1st ISO peak value was 287% smaller and the 2nd ISO peak was 82% smaller than in vivo.
Comparison of ISO loads with data from other activities
A direct comparison between the mechanical effect of the ISO standard loads and the measured in vivo HIGH100 loads is not possible because the peaks of the ISO components act at flexion angles that are different than the flexion angles measured during the activities investigated in this study (Figure 6, bottom). The in vivo maxima of -F z were determined to be much higher than the ISO maxima during all investigated activities. For the 1-legged stance, knee bend, standing up and sitting down activities, the measured -F z maxima were 31-46% larger than the ISO maxima. During ascending or descending stairs, the measured peaks were 60-65% greater than the ISO peaks, and during jogging, the measured maxima were 97% greater than the ISO maxima.
Except for walking, only during jogging did the measured torsional moment M z have a higher maximum (+100%) than the ISO standard. The absolute values of the minima of M z were higher in the measured in vivo values compared with the ISO values during standing up (+30%), sitting down (+28%), knee bends (+17%) and jogging (+53%).
Limitations of the study
Even though the joint loads were collected from the largest group of subjects with instrumented knee implants currently available, the data would be different if more subjects were included in the study. In particular, the HIGH100 and PEAK100 loads would certainly increase. Deviating load levels can also be expected to occur in younger or very old subjects. Although the literature shows that in 2002 only 2.3% of people had a BW higher than 100 kg, this percentage may grow in the future. If that is the case, the loads reported here may even be exceeded.
Comparison with previous data
The only in vivo knee loads from other authors which can be compared with our data were measured with two different instrumented tibial trays [14,15]. In studies with 1-3 subjects, axial forces of 180-280%BW were measured during walking, 250-260%BW during chair rise, 250-300%BW when ascending stairs, and approximately 350%BW when descending stairs [16-18,28,29]. Peak anterior shear forces of 30%BW during walking, 26%BW during stair climbing, 17%BW during chair rise, and 15%BW during squatting were previously reported for one subject [18]. The peak AVER75 values which we determined for F res and F y are in the same range as the values determined in these previous studies. However, the large individual variation of F z which we found (www.OrthoLoad.com, menu Test Loads) could not have been determined in these publications, so no further comparisons could be made.
Our current data deviate only slightly from our own previous measurements, which were taken in only 5 of the subjects at an earlier postoperative time [1]. The previous average resultant forces differed from the current AVER75 results by -3% (walking), +8% (going up stairs), -0.5% (going down stairs), -11% (standing up), -8% (sitting down), and -3% (knee bends). To prove whether the total force had indeed increased with postoperative time during most activities, an analysis of the same sub-group would be required.
Adaptation of reported loads to test conditions
In joint simulators, cyclic loads are applied which must start and end at the same values and should have the same slope at both ends. Due to the time-warping procedure used to average single load cycles, these requirements are not perfectly met in this study. Therefore, curve-fitting procedures must be applied to connect the last and first parts of the loading cycles reported here. Because their start and end values do not deviate much, transitions lasting 2 or 3% of the cycle duration may be appropriate. The loads during standing up and sitting down may be combined to achieve cyclic loads.
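One simple way to meet this cyclic-load requirement is to cross-fade the tail of a measured cycle into its starting value over the suggested 2-3% of the cycle duration. The sketch below is illustrative only; a plain cross-fade matches the start and end values exactly and their slopes only approximately.

```python
import numpy as np

def close_cycle(load, blend_frac=0.03):
    """Blend the last `blend_frac` of a sampled load cycle towards its first
    sample so a simulator can repeat the cycle without a jump."""
    load = np.asarray(load, dtype=float)
    k = max(2, int(blend_frac * len(load)))
    w = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, k)))  # smooth 0 -> 1
    closed = load.copy()
    closed[-k:] = (1.0 - w) * load[-k:] + w * load[0]     # cross-fade
    return closed

cycle = 100.0 * np.sin(np.linspace(0.0, 2.1 * np.pi, 200))  # toy open cycle
closed = close_cycle(cycle)
print(closed[0], closed[-1])  # end now equals the start value
```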
Which loads for which test or analysis?
Our study shows large differences between measured loads, which can act in patients with a high body weight, and those defined in the ISO standard. Differences between this standard and analytically determined loads during walking have also been reported by others [22,30,31].
Some structural failures of knee implants and delamination of polyethylene, which occur in vivo, cannot be replicated by simulator tests [32]. When the ISO loads were replaced by a profile containing only 10% walking cycles but 80% cycles of ascending and descending stairs, plus cycles from chair rising and deep squatting, wear in a unicompartmental implant rose fourfold [33]. When either F y or M z was neglected in ISO tests, the wear rate dropped by 90% [34]; this indicates that wear would greatly increase if these components were higher. Under loads acting during activities of daily living, conventional polyethylene inlays had 30% higher wear rates than under ISO loads. If loads at high flexion were applied, the wear rate grew by 168% [31]. Such observations indicate that tests and analyses of replaced and natural knees should not be performed under pure walking conditions as defined by the ISO 14243 standard. Instead, more realistic loads from walking should be chosen, and other activities should be included, especially those requiring high flexion angles. A more strenuous loading profile has also been proposed by others [32,35]. In light of these observations, the ISO wear test standard is presently under discussion and will be modified in the future.
For testing or analyzing knee implants, the HIGH100 loads presented here (with fitted start and end intervals) should be chosen. For investigating problems of the static strength of the implant, its bony fixation, or of the surrounding soft tissues, the PEAK100 loads should be applied instead, but these are only 2-9% larger than the HIGH100 values. Small implants might not be able to withstand such high loads, and it could be discussed whether they are better tested at lower load levels.
Replacement of single HIGH100 components by EXTREME100 components
Apart from the time courses of the HIGH100 loads themselves, the most important finding of this study is the strong inter-individual load variation, especially of the transverse force components (Figure 7 and extended data at www.OrthoLoad.com). Due to the extreme variation of some load components, even the reported HIGH100 loads will most likely not suffice to explain every case of implant damage or failure of a surgical procedure. Overloading of polyethylene or of soft tissues, such as the cruciate ligaments, may greatly depend on the magnitude of a single load component, such as the a/p force F y . As shown here, these components can be much higher than in the time courses given by the HIGH100 data.
If a single component is suspected to cause a certain failure or contribute to it, it could be increased so that its peak value(s) corresponds to the EXTREME100 peak value (Table 3). It could be, however, that a failure is caused (or expected) by a combination of 2 or more extreme load components. The torque M z , for example, may be more detrimental if the axial force -F z is small. In such cases, a large number of possible combinations with increased (or possibly decreased) components must be applied. This may be performed in analytical studies, but is difficult or even impossible in experimental investigations.
Another solution for this problem could be to increase all components during sections of the cycle so that the marked extrema (Figures 2 to 5) reach the EXTREME100 values (Table 3). For peak "1" of F res during walking (Figure 2, top left diagram), peak "2" of F x would then have to be increased to 292 N, peak "1" of F y to -605 N, and peak "1" of -F z to 3,100 N. This would, however, also change F res , which would increase from 2,848 N to 3,172 N. Furthermore, the loading directions would also be influenced (which may themselves be the cause of the investigated implant damage). The frontal-plane angle between F res and the z-axis, for example, would change from 0.9° to 5.4°. In the horizontal plane, the angle between F res and the x-axis would decrease from 80.8° to 64.2°. The application of such a strategy is also questionable because the EXTREME100 values were taken from data collected from different subjects and may possibly never act in combination in the same person.
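The arithmetic behind this example can be verified directly from the quoted peak components. The minimal check below uses only the component magnitudes, so the sign conventions assumed in the code do not affect the results.

```python
import numpy as np

# EXTREME100 peak components quoted in the text for peak "1" of F_res:
fx, fy, fz = 292.0, 605.0, 3100.0   # N (magnitudes; fz is the axial -Fz peak)

f_res = np.hypot(np.hypot(fx, fy), fz)
frontal = np.degrees(np.arctan2(fx, fz))     # F_res vs z-axis, frontal plane
horizontal = np.degrees(np.arctan2(fy, fx))  # F_res vs x-axis, horizontal plane

print(round(f_res))          # ~3172 N, as stated
print(round(frontal, 1))     # ~5.4 degrees
print(round(horizontal, 1))  # ~64.2 degrees
```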
We have no optimal suggestion for defining generally applicable combinations of load components for the most severe loading conditions. This problem must remain for future discussions, but it may well be that certain extreme loading conditions act in some subjects and that these cannot be appropriately tested in simulators.
Loads acting on implants of different design and in the natural knee joint
The investigated implant has an ultra-congruent polyethylene inlay and requires sacrificing both cruciate ligaments. Most of the forces in the transverse directions and possibly also of the moments M x and M z are therefore taken up by the implant. If prostheses of different designs are implanted, for example with a moving platform, or models which retain the posterior cruciate ligament [5,14,15,17,18], unknown portions of these components will not be taken up by the implant but by the ligaments. Similar differences will occur between the loads acting in the instrumented implants and in natural joints.
The best method for determining how much of the load is taken up by the soft tissues would be to set up a realistic finite element model of the natural or replaced knee, including the soft tissues and the patella, and to apply the reported loads from the femur to the tibia.
Medial-lateral force distribution
The distribution of the axial tibial force -F z between the medial and lateral compartments can easily be calculated [36] from the data accessible at www.OrthoLoad.com (menu Test Loads). In a previous study [37] with 5 of the subjects investigated here, up to 85% of the peak force was transferred to the medial side, depending on the valgus angle of the knee. With regard to an even load distribution, a slight valgus angle of 2° to 3° would therefore be favorable.
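As an illustration of the lever-arm reasoning behind such a distribution, the sketch below estimates the medial share from an abduction moment and an axial force. The load values are illustrative, and the contact spacing d is an assumed number; the actual procedure is defined in ref. [36].

```python
# Lever-arm sketch of the medial/lateral split of the axial force -Fz.
fz = 2800.0   # N, axial force magnitude (illustrative)
my = -40.0    # Nm, abduction moment; negative = medial shift (see above)
d = 0.045     # m, ASSUMED spacing of medial/lateral contact points

shift = abs(my) / fz             # medial shift of -Fz (~14 mm here)
medial_share = 0.5 + shift / d   # fraction of -Fz carried medially
print(f"medial share ~ {100.0 * medial_share:.0f}%")  # ~82%, cf. 'up to 85%'
```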
"year": 2014,
"sha1": "06df05175ce18a9602bd91373eeb1e796a2fe22f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0086035&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06df05175ce18a9602bd91373eeb1e796a2fe22f",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cytogenotoxic effects of cypermethrin, deltamethrin, lambdacyhalothrin and endosulfan pesticides on Allium cepa root cells
Increased pesticide application in agriculture and public health has contributed to the pollution of the environment. This study evaluates the cytogenotoxic effects of emulsifiable concentrates of cypermethrin, deltamethrin, lambdacyhalothrin and endosulfan on Allium cepa root cells. Five concentrations (1.0, 5.0, 10.0, 20.0 and 40.0 ppm) of each pesticide were used for microscopic (48 h) and macroscopic (72 h) evaluations, with distilled water as the control. Data were analyzed by Student's t-test. A dose-dependent reduction in A. cepa root length was observed for the pesticides. Significant reduction in treated root length was observed at 10.0 ppm of deltamethrin, cypermethrin and lambdacyhalothrin, and at 20.0 and 40.0 ppm of all the pesticides, compared to the control (P<0.05). The EC 50 values showed growth inhibition in the order lambdacyhalothrin > cypermethrin > deltamethrin > endosulfan, while that of total aberrant cells was cypermethrin > lambdacyhalothrin > deltamethrin > endosulfan. Microscopic aberrations observed in the pesticide-treated onions included sticky chromosomes, disturbed spindles and chromosome bridges. A dose-dependent reduction was observed in the total number of mitotic dividing cells and in the mitotic index of the pesticide-treated A. cepa, except at 5.0 ppm of endosulfan. The pesticides induced growth inhibition and caused cytogenotoxic effects on the meristematic cells of Allium cepa. The data herein provide more information on these pesticides, exposure to substantial concentrations of which might constitute a health risk to non-target organisms.
INTRODUCTION
Pesticides are used to exterminate pests in order to increase yield and improve the shelf life of agricultural products. Besides, they are used in public health to reduce morbidity and mortality from pest-related diseases. In recent years, there has been a tremendous increase in the use of these chemicals without paying much attention to the adverse effects they may have due to their toxic ingredients (Badr and Ibrahim, 1987; Anis et al., 1998). Reports have shown organochlorine pesticides like endosulfan to be toxic, with the potential to bioaccumulate in the environment, and run-off from field application of endosulfan leads to aquatic pollution. Animals that live in endosulfan-contaminated waters can bioaccumulate endosulfan in their bodies, in amounts that may be several times greater than in the surrounding water (ATSDR, 2008). Endosulfan has been reported to alter the haematological profile of animals (Gimeno et al., 1994; Das et al., 2010; Modaresi and Seif, 2011; Yekeen and Fawole, 2011). Its accumulation in the environment led to its ban in most developed countries. However, it is still being used in most developing countries. Endosulfan is highly toxic and, due to its persistence in the environment, its harmful effects are expected to manifest even in future generations of exposed populations (Kumar and Chaudhary, 2012).
Bioaccumulative effects of organochlorines and high toxic effects of organophosphates, especially on non-target organisms, led to the increased use of pyrethroids as a potential alternative. Lambdacyhalothrin, deltamethrin and cypermethrin are type II pyrethroids extensively used in agriculture. Pyrethroids are also used in public health to reduce malaria morbidity and mortality (Zaim et al., 2000).
Although technical grades of pyrethroids have been reported to have little to no toxic effect on non-target organisms, emulsifiable concentrate formulations of pyrethroids were two to nine times more toxic compared to the technical grades (Sanchez-Fortun and Barahona, 2005). Evaluations of some pyrethroids through different biological endpoints in animals show that they cause alterations in the haematological profile of exposed animals (Gimeno et al., 1994; Yekeen et al., 2007; Khan et al., 2012; Yekeen et al., 2013; Muthuviveganandave et al., 2013). Cypermethrin caused a significant increase in chromosome aberrations and in the frequency of micronucleated erythrocytes in farm workers (Carbonell et al., 1995; Lander et al., 2000). DNA damage was detected in tissues of workers involved in the production of cypermethrin (Grover et al., 2003).
Deltamethrin, a synthetic dibromo-pyrethroid insecticide and acaricide, is known to be three times more potent than some other pyrethroids (Bradbury and Coats, 1989), which enhances its usage both indoors and outdoors. Cabral et al. (1990) reported that deltamethrin does not appear to be carcinogenic in mice or rats, while a very low dose of deltamethrin displays harmful effects by disrupting hepatic and renal function and causes DNA damage in pubescent female rats (Chargui et al., 2012).
A non-significant induction of sperm cell aberrations in mice was reported for the emulsifiable concentrate form of deltamethrin (Yekeen et al., 2007). Lambdacyhalothrin is used in public and animal health applications, where it effectively controls a broad spectrum of insects and ectoparasites (Davies et al., 2000). The cytogenetic effects of lambdacyhalothrin have been investigated in humans and various animal species using different endpoints, such as micronucleus (MN) formation, induction of chromosomal aberrations and sister chromatid exchange (Fahmy and Abdalla, 2001; Celik et al., 2005), while studies using plant assays are limited.
The present study sought to evaluate the cytotoxic effects of cypermethrin, deltamethrin, lambdacyhalothrin and endosulfan in Allium cepa. This plant assay was selected because it is cost-effective and as reliable as other methods for the evaluation of chromosome aberrations (Rank and Nielsen, 1997), and can easily be used to assess toxicity via determination of effective concentrations (Yildiz and Arikan, 2008).
Test chemicals
All pesticides were procured in the emulsifiable concentrate form commonly available in the market and widely used: Thionex® 35 EC (350 g/L) for endosulfan, Karate® 2.5 EC for lambdacyhalothrin, Deltaforce® 2.5% EC for deltamethrin, and 10% EC for cypermethrin. Carmine salt was purchased from Zayo Sigma Chemicals Limited, Nigeria. All other chemicals used were of analytical grade.
Allium cepa assay
The onion bulbs (Allium cepa L.) used for the experiment were sun-dried for three weeks, and the outer scales and brownish bottom plates were carefully removed, leaving the root ring primordia intact. Five concentrations (1.0, 5.0, 10.0, 20.0 and 40.0 ppm) of each pesticide were prepared, with distilled water used as the diluent as well as the control. Twelve (12) onion bulbs were planted per concentration, with each bulb placed on a 50 ml beaker filled with the prepared concentration of the pesticide. Onion roots were grown at room temperature (25±1°C) in a dark cupboard. The contents of the beakers were replaced with freshly prepared pesticide solution every 24 h.
The root tips used for microscopic evaluation were harvested from five onion bulbs per concentration at 48 h, fixed in ethanol-ethanoic acid (3:1 v/v) and then transferred to 70% ethanol. The root tips were then hydrolyzed in 1 N HCl at 65°C for 3 min. Two root tips were squashed on slides and stained with acetocarmine for 15 min.
One thousand (1,000) cells per slide, and a total of 5,000 cells per concentration, were scored for the frequency and occurrence of different types of chromosomal aberrations in the dividing cells at 1000x magnification, as previously described (Fiskesjo, 1985; Bakare et al., 2000; Lateef et al., 2007). The photomicrographs were taken with an Ocular VGA-adapted Bresser Erudit DLX microscope (Germany). The mitotic index and mitotic inhibition were determined from the scores obtained for dividing cells based on these formulae:

Mitotic index (%) = (number of dividing cells / total number of cells scored) × 100

Mitotic inhibition (%) = [(mitotic index of control − mitotic index of treated) / mitotic index of control] × 100

The length of each root from the 5 onion bulbs per concentration and the control was measured at 72 h for macroscopic evaluation, and growth inhibition was evaluated. The EC 50 was extrapolated from the graph of percentage root growth relative to control against pesticide concentration.
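A minimal sketch of this read-off is shown below: the EC 50 is the concentration at which root growth falls to 50% of the control, obtained here by linear interpolation. The growth values are hypothetical and for illustration only (the study's actual EC 50 values were 9.0-29.0 ppm).

```python
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 20.0, 40.0])          # ppm, as tested
growth_pct = np.array([95.0, 80.0, 62.0, 41.0, 18.0])  # % of control (made up)

# np.interp requires increasing x-values, so interpolate on reversed arrays.
ec50 = np.interp(50.0, growth_pct[::-1], conc[::-1])
print(f"EC50 ~ {ec50:.1f} ppm")
```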
Statistical analysis
The means and standard errors for each concentration of each pesticide were calculated. The data obtained for the root lengths of the treated groups and the control were compared using Student's t-test, with differences considered significant at P ≤ 0.05.
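For concreteness, a comparison of this kind can be run as below; the root-length values are hypothetical and serve only to illustrate the test described above.

```python
from scipy import stats

control = [6.1, 5.8, 6.4, 6.0, 5.9]  # cm, hypothetical control root lengths
treated = [4.2, 4.8, 4.5, 4.0, 4.6]  # cm, hypothetical treated root lengths

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # significant if P <= 0.05
```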
RESULTS AND DISCUSSION
The mean root lengths of the treated A. cepa for the four pesticides at all concentrations were lower than the control (Table 1). A dose-dependent reduction in A. cepa root length was observed for the pesticides, except at 5.0 ppm of deltamethrin. A significant difference in root length was observed at 10.0, 20.0 and 40.0 ppm of deltamethrin, cypermethrin and lambdacyhalothrin, while endosulfan showed a difference at 20.0 and 40.0 ppm (P<0.05). The highest percentage root inhibition was observed at 40.0 ppm of each of the pesticides. Figure 1 shows the percentage root length relative to the control, from which EC 50 values of 9.0, 21.5, 23.5 and 29.0 ppm were obtained for lambdacyhalothrin, cypermethrin, deltamethrin and endosulfan, respectively, indicating the decreasing order of their inhibitory effects on A. cepa root growth. The growth-inhibitory effect of the pesticides is indicated by the significant reduction of root length compared to the control.

Table 1 also shows the microscopic evaluation of the pesticides. A dose-dependent reduction in the total number of mitotic dividing cells and in the mitotic index was observed in A. cepa treated with the pesticides, except at 5.0 ppm of endosulfan. However, complete cell arrest was observed only with deltamethrin at 40.0 ppm. The mitotic index values obtained for all pesticides at 10.0 (except for endosulfan), 20.0 and 40.0 ppm were lower than half of the negative control, which reflects their cytotoxicity. Similar observations were reported in A. cepa treated with different pesticides (Asita and Matebesi, 2010; Sibhghatulla et al., 2012). The total chromosomal aberrations induced were in the order: cypermethrin > lambdacyhalothrin > deltamethrin > endosulfan.
The aberrations observed with the three pesticides included sticky chromosomes, disturbed spindles, c-mitosis, chromosome bridges and laggard chromosomes (Table 1 and Figure 2). The stickiness observed in the pyrethroid-treated onion roots may be due to physical adhesion of the proteins of the chromosome (Patil and Bhat, 1992). The occurrence of c-mitosis indicates that spindle formation was adversely affected (El-Ghamery et al., 2003). Disturbed spindles resulted in the inability of chromosomes to move to the poles.
Chromosome bridges are formed by breakage and fusion of chromosomes and chromatids, by the stickiness of chromosomes and the subsequent failure of free anaphase separation, and by unequal translocation or inversion of chromosome segments (Gömürgen, 2005). Permjit and Grover (1985) attributed laggard chromosomes to delayed terminalization, stickiness of chromosome ends or the failure of chromosomal movement.
Aberrations of the mitotic cycle, changes in the mitotic index and chromosomal abnormalities observed after exposure to toxic metals, metalloids or organic pollutants have been attributed to the disorganization and depolymerization of the microtubules which underlie these processes in higher plant cells (Liu et al., 2009; Xu et al., 2009; Dho et al., 2010; Eleftheriou et al., 2012, 2013; Adamakis et al., 2013). Cypermethrin, among the pesticides tested in this study, had the highest total chromosomal aberration. Seehy et al. (1983) reported that in mice, both technical and formulated products of alpha-cypermethrin showed dose-dependent sister chromatid exchanges in dividing cells at all dose levels, but the highest doses inhibited mitotic division.
Cypermethrin and alphamethrin have been reported to elicit varying degrees of cytotoxic, turbagenic (toxic to the spindle) and clastogenic effects, being generally more turbagenic and only weakly clastogenic (Rao et al., 2005). However, Asita and Makhalemele (2008) reported that alphathrin (active ingredient: alpha-cypermethrin) was only cytotoxic, and not genotoxic, at various concentrations in treated A. cepa. Cypermethrin has been classified as a possible human carcinogen (EPA, 2002).
The pesticides used induced significant growth inhibition at 10.0, 20.0 and 40.0 ppm. Also, at these concentrations, the mitotic index was lower than half of the value obtained for the control, which indicates their cytotoxic effects. The induction of chromosomal aberrations at different concentrations shows their genotoxic effects on the meristematic cells of A. cepa. The aberrations observed were, however, not dose dependent, which may be due to the smaller number of dividing cells at higher concentrations of the pesticides and the complete cell arrest observed at 40.0 ppm of deltamethrin.
Our results are in accord with previous reports in which mitotic inhibition and genotoxicity of pesticides were demonstrated (Mosuro et al., 1999; Chauhan et al., 1999; Kumar and Chaudhary, 2012). Reduction in mitotic activity could be due to the inhibition of DNA synthesis (Schneiderman et al., 1971; Sudhakar et al., 2001) or to a block in the G2 phase of the cell cycle, preventing the cell from entering mitosis (Van't Hof, 1968). Prior to the occurrence of chromosome aberrations, there is always some growth restriction, which is the cumulative response of all the damaging effects (Fiskesjo, 1997).
Conclusion
The inhibition of growth and induction of chromosomal aberrations by the pesticides show their cytogenotoxic effects. These data provide more information on cypermethrin, deltamethrin, lambdacyhalothrin and endosulfan, exposure to substantial concentrations of which may constitute a health risk to non-target organisms, and will thus assist in future ecotoxicological evaluations.
Figure 1. Growth inhibition of pesticide-treated A. cepa roots.
Table 1. Macroscopic and microscopic evaluations of the pesticide-treated Allium cepa.
"year": 2013,
"sha1": "d7f6af67a5ead714164141d5364c8c64cb34a6dd",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/C24126E30643.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6e8ea4d8feaf63a6e91768bfa621aa5a3c58b4fb",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Public health practitioners' perspective on the sustainability of the tuberculosis control programme at primary health care level in Pakistan
Background: In resource-limited settings, national tuberculosis (TB) control programmes are highly dependent on external funds, which may pose a challenge to programme sustainability. There is a recognized need for developing guidance around sustainable programming of current TB control initiatives. Aims: The aim of this study was to explore public health practitioners' perspectives on the sustainability of TB control initiatives in Pakistan at the primary health care (PHC) level. Methods: Guided by an interpretive epistemology, online in-depth interviews were conducted with 10 public health practitioners who had experience as resource planners in the TB control programme in Pakistan. Thematic content analysis was applied to the textual data as the analytical approach. Results: Three themes were inductively derived from the thematic analysis: community involvement, stakeholder engagement and efficient use of the PHC system. Community involvement was a determinant in sustaining TB control initiatives; this was attributed to the nature of the disease and prevalent health seeking behaviour. Stakeholder engagement was associated with funding arrangements between public and private partners and considered important in how new initiatives can be made part of the routine structure. Overall, having an efficient PHC system was deemed critical to sustaining current TB control initiatives at the PHC level in Pakistan. Conclusion: Fostering an enabling operational environment through regulations, supporting the utilization of existing resources, expanding the network of providers, inclusive planning, increasing spending on research and cost-effective testing are pivotal for sustaining TB control initiatives.
Introduction
According to the Global TB Report 2020, Pakistan is one of the 30 high tuberculosis (TB) burden countries, with an estimated incidence of 570 000 cases per year against 357 893 notified new and relapse cases (2018 cohort) (1). This means that a significant proportion of cases (~200 000) are missed, posing a significant threat to public health in Pakistan.
Over the past few decades, Pakistan's National TB Control Programme has achieved remarkable improvements in notification and treatment success rates. These can be attributed to the adoption of the directly observed treatment, short-course (DOTS) strategy from 1995 onwards, the revival of the National TB Control Programme in 2001, and the financial contributions of the government and its partners (2). However, adaptation of the managerial set-up of the National TB Control Programme and the continuation of technical and donor support will be important in achieving the sustainability of the programme.
Sustainability refers to the continuation of a programme after the initial efforts to implement it (3,4). In public health discourse, the sustainability of public health initiatives refers to the evaluation of the long-term effects of public health programmes, because they are implemented over a longer period (5). As Altman contends, sustainability remains a key challenge, as most public health interventions are discontinued after the initial funds are exhausted (6).
The sustainability of public health programmes has gained attention among various stakeholders (researchers, donors, community partners, etc.) in the recent past (7) with the focus on understanding contextual factors in which interventions are embedded (5). Evidence suggests that sustainability of public health initiatives can only be achieved if primary health care (PHC) is adequately emphasized, which is also true for Pakistan (8).
With the increasing global interest in the sustainability of public health initiatives, programme managers in the TB control programme in Pakistan have recognized the need for developing understanding and guidance around sustainable programming of public health initiatives and have called on the government to develop a sustainable approach (9). Therefore, the aim of this study is to explore public health practitioners' perspectives on the sustainability of TB control initiatives in PHC settings in Pakistan. This study will guide policy and programmatic decisions to support sustainable TB programming at the PHC level in Pakistan.
Study design
This research utilized an interpretivist approach, which acknowledges that reality is socially constructed within a context (10) through natural-style conversation (11). That is why we adopted an exploratory qualitative research design (12): it allowed us to explore the perceptions of public health practitioners regarding the sustainability of the TB control programme.
Sampling and eligibility criteria
Purposive sampling was employed to recruit public health practitioners from different types of organization, including governmental (national and provincial TB control programmes) and not-for-profit nongovernmental organizations. Purposive sampling was deemed appropriate for the study as it intends to yield in-depth understanding of information-rich cases (13).
Potential participants who met the following eligibility criteria were invited to take part in the study: having more than 5 years of work experience, either previously or currently, in programming of TB control and strategic health planning in Pakistan; either working or having worked at the national or provincial level; fluent in English or Urdu; able to take part in Skype-based interviews. The exclusion criteria included: having no or little experience of TB programming in Pakistan; and having no access to Skype.
Recruitment
E-mail invitations were sent to participants who met the inclusion criteria. A participant information sheet was shared and informed consent was sought via e-mail. A Skype interview was scheduled with the participants at a time and place of their convenience. Of the 19 potential participants invited to take part in the study, 10 public health practitioners took part in the in-depth interviews; some did not answer multiple reminders, and 2 declined because of work commitments. The profile of the participants is given in Table 1.
Data collection
A semi-structured, in-depth interview was conducted with the use of an interview guide (available on request), which was developed from relevant sources (3-5,7,8,14,15). Key questions are listed below.

• In pursuit of the Sustainable Development Goals, what is the importance of primary health care, with focus on tuberculosis control?
• What do you think sustainability is and why is sustainability important in today's world?
• What is your opinion of the sustainability of the TB control programme in Pakistan?
• What kind of sustainability challenges is national TB control facing and how can they be managed?
• Based on the discussion and in your opinion, can you name one or a few critical factors which are required for sustainability?
Skype interviews were conducted between November 2019 and February 2020. These interviews were video/ audio recorded with participant consent, and lasted between 30 and 45 minutes.
Analytical approach
Thematic analysis was employed to analyse the interview transcripts. This involved immersion into textual data and identification of emerging themes or ideas relevant to the area of inquiry (16).
Ethical considerations
Ethical clearance for the study was granted by the review board of the International Research Force in Pakistan and by the University of Liverpool's ethical review committee. Participants were allowed to withdraw from the study at any time without giving any reason. They were assured of their privacy and of the confidentiality of the data. Relevant records were anonymized (Table 1). No monetary compensation was given to participants.
Conceptions of sustainability
The thematic content analysis included interviews with 10 public health practitioners with knowledge and experience of resource planning in the TB programme. The analysis of the textual data highlighted 3 broad themes in relation to the understanding of the sustainability of TB control initiatives at the PHC level in Pakistan (Table 2). Respondents deconstructed the concept of sustainability according to their own conceptualizations. Most of the respondents viewed sustainability as the continuation of financial resources until TB is eliminated from Pakistan. However, an alternative conception of sustainability was elaborated as the maintenance of existing control efforts, as reflected in the quote below: "Sustainability is more referring to continuity of that [existing] service … about rest of 30-35% missing cases, how to reach this population is more of an innovation and expansion rather than sustainability." [N4PNGO20191230] Given these conceptions of what sustainability meant to participants, it was constructed around the following 3 themes: an efficient PHC system, community involvement and stakeholder engagement.
Efficient primary health care system
Pakistan has an extended primary health care system that forms the backbone of the overall health care system. The importance of the PHC set-up was also recognized in the Sustainable Development Goals by prioritizing PHC services and thinking beyond vertical programmes (17). One of the respondents reflected this as: "… services at the grassroots level … are normally curative and preventive in nature … these services actually proved beneficial to reach out the targeted population when you try to integrate the vertical programmes, just like TB or malaria." [N4PNGO20200219] An efficient PHC system is elaborated through 2 subthemes: the significance of the PHC system and health care system strengthening. The PHC system in Pakistan is the first level of health care and comprises both public and private sector facilities. Most of the respondents recognized that the PHC system is critically important and that, without strengthening it further, sustainability of the TB control programme cannot be achieved.
"Primary health care set up is important … [because] it is approachable and affordable to community … and is a first point of contact … strengthening this level is important for sustainability." [N4PNGO20191207] The role of the PHC system is also significant in running advocacy campaigns that will allow for capacity-building among the community. One respondent gave an example: "... they engaged schoolgirls and then they made them their TB advocates. They were given training on how to screen and later they were asked to do screening in their respective areas …" [N4PNGO20191207]
Health care system strengthening
Most of the respondents mentioned the significance of reforms to develop and implement relevant guidelines. The need to build the capacity of health care professionals and to improve referral linkages between health care facilities was recognized as important for identifying missing TB cases and sustaining the control efforts. One of the respondents said: "… what type of patient, at what level of care and when to access specialized care … so … this type of [inequitable] system is not sustainable until we do reforms." [N4PNGO20191116] For health system strengthening, innovations were accorded immense importance by the majority of the respondents. One respondent representing a government organization suggested: "… private sector is needed to make interventions … here the innovations are needed … new experiments can be performed so this can be executed by the private sector … at many times, we are so much restricted by regulations and also due to HR constraints that we cannot travel far and cannot leave facilities." [GO20191201]
Community involvement
From the analysed data, community involvement in TB control initiatives was conceptualized in the following 2 subthemes: health seeking behaviour and contributions towards health care.
Health seeking behaviour
Low education level, poor health awareness and the stigma associated with TB in Pakistan result in the development of negative health seeking behaviour.
Given the stigmatization of TB in Pakistan, raising disease awareness is considered particularly important for generating demand for treatment. However, several of the respondents articulated that meaningful participation of community members is lacking in the current programming.
Contributions towards health care
The cost of TB care is considered an important factor in the accessibility and acceptability of TB care and prevention services in Pakistan. Increasingly, the published literature supports social protection schemes and policies, and hardly any respondents had opposing views.
A few respondents were of the opinion that community members exhibit irresponsible behaviour, in that they do not acknowledge the availability of free-of-cost services. They therefore supported the idea of a nominal contribution from the community towards health care costs.
"... we need to make our community realise that if they are provided with free-of-cost services, then they should acknowledge them rather than to condemn services and discourage continuity of treatment." [N4PNGO20191116] In Pakistan, TB is prevalent among those who have low socioeconomic status and a low education level (18). Therefore, there is a need for raising awareness so that the demand for TB care and prevention services is created.
Stakeholder engagement
The End TB Strategy demands actions beyond the health ministry and emphasizes that the National Strategic Plan should be developed and implemented in close coordination and collaboration with all stakeholders (19). After stakeholders are identified, their roles and responsibilities and funding arrangements should be defined based on the nature of the interventions.
Nature and institutionalization of interventions
Sustainability concerns the institutionalization of the newly implemented interventions, and institutionalization depends on the extent of shared understanding of sustainability among different stakeholders (7). The majority of respondents considered the government of Pakistan, or the National TB Control Programme, as a prime stakeholder. The involvement of other functionaries, such as finance, economics, planning and development, was considered equally important. Other nongovernmental stakeholders identified were community-based organizations, faith-based organizations, professional associations, and global and bilateral donors, thus suggesting a multisectoral approach to planning and implementation.
Nearly all respondents agreed that the government's commitment has to be increased and that funding allocations to the TB control programme should be prioritized, as illustrated by one respondent thus: "... government says that health and education are our priorities in Pakistan and they allocate the lowest budgets for health and education sectors. So, now you [we] must have a clear idea about their priorities." [GO20200205] In terms of sustainable TB control programming, most of the respondents acknowledged the importance of the private health care sector and identified the need for utilizing existing resources, for which regulation is an important step. Programme design was therefore given importance, expressed as: "One key dimension of sustainability would be the programme design," and explained further as "Roles and responsibilities are assigned to individuals and [their] settings, which are regular structures, rather than project structures." [N4PNGO20191230] Although respondents representing the government considered innovations as a means of engaging private sector organizations, they criticized the disproportionately high operational and human resource costs. Generally, innovations and research were ranked highly by respondents, but one respondent expressed concern about the research situation in the TB control programme: "If you start prioritizing funding/priority areas, then the component of research goes very down in that priority list ... and this is [the] reality of all low and middle income countries …" [N4PNGO20200219]
Funding arrangements
With inadequate domestic funding and system-level inefficiencies, dependence on donors is recognized as a potential limiting factor for sustaining the TB control programme.
Implementation of the National Strategic Plan became challenging because of competing interests among the public and private implementation organizations. This deprived the programme of the opportunity for partners to complement each other.
Moreover, politicization of the funding process and donors supporting their own funding mechanisms were seen as potential hindering factors in the implementation of the National Strategic Plan. One of the respondents explained this analogously: "If you ask Coke and Pepsi to sit down and figure out nicely, they would laugh out and would say we don't want to figure everything out. Coke don't want Pepsi in the market and Pepsi don't want Coke in the market." [N4PNGO20200205] Owing to this situation, a few of the respondents supported the idea of distributing roles among national and provincial partners (both public and private) so that pooled money is distributed based on their roles, hence promoting resource efficiency.
Discussion
Having an efficient PHC system is a key aspect of sustaining the TB control initiatives at the primary care level in Pakistan. Utilization of existing resources, integration of services and capacity-building of health care providers are some examples. Moreover, stakeholder engagement and management should be guided by the national strategy, while recognizing community as an important stakeholder.
Inclusive planning, in which the government of Pakistan is a prime stakeholder, is critical for the sustainability of TB control efforts. The WHO has recognized the importance of government and community and defined sustainability as the ability of a project to continue delivering services with high treatment coverage, integration into existing health care services, strong community ownership, and community- and government-driven resource mobilization (14). However, respondents expressed concern over the lack of involvement of communities in the planning process. The capacity of the community to continue with programmes is also seen as favourable to their sustainability (15,20). Financial support is an important factor in enhancing the community's capacity to sustain the programme. For example, in Myanmar the contribution of local nongovernmental organizations fell due to the diminishing involvement of community members in the absence of a payment or financial support mechanism (21).
Insufficient government funding allocation increases dependence on donors (e.g. the Global Fund) for even basic services such as TB drugs. The same trend has been noted in many other developing countries, putting sustainability at risk (22). Similarly, policy-makers in Pakistan view donors' influence on priority-setting as detrimental to both policy formulation and programme implementation (23).
There is a recognition that strengthening the PHC level will help in sustaining existing TB control efforts in Pakistan, as in 90% of TB cases at the national level the contact is with the private sector, including the community-level informal sector (24). China has set an example in the fight against TB because of its increasing focus on the PHC system (25).
Despite the recognition of the importance of research, the lack of a research agenda at the national level is a concern for formulating evidence-informed resource allocation decisions. The need for increased funding for research and development was accepted at a high-level meeting of the UN (26). Similarly, the need to increase research capacity and to utilize evidence for various decisions in a more sustained and effective manner has also been stressed (27).
Sustainability of health interventions is needed to allow the assessment of the long-term effects of health interventions (28) and to enable the detection of changes in community health status (29). There is a need to ensure a sustained funding mechanism to sustain evidence-supported interventions (30). Therefore, prioritizing assessment of the sustainability of the TB control programme is essential for the efficiency of the programme.
Conclusion
There is a clear need for investing more in sustaining the TB control programme at the primary health care level in Pakistan. Financial resources alone will not achieve sustainability; fostering an enabling operational environment through legislation and regulations, utilizing existing resources and expanding the network of providers at the PHC level are also needed. In consideration of these factors, inclusive planning with various government functionaries and communities, increased spending on research, cost-effective testing and evidence-informed innovation are all pivotal for sustaining the programme. Going forward, there should be an increased focus on innovation and research to guide the investment and management decisions aimed at improving the efficiency of the programme at the PHC level.
"year": 2021,
"sha1": "d01e0ab76663d58f333e8dc45041a134d61a36aa",
"oa_license": null,
"oa_url": "https://doi.org/10.26719/emhj.21.044",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4f425867cb092eb0e949602536e64265613281e0",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
Gravity Control Propulsion: Towards a General Relativistic Approach
Gravity control concepts should be evaluated with respect to currently known physical theories. In this work we study the hypothetical conversion of gravitational potential energy into kinetic energy using the formalism of general relativity. We show that the energy involved in the process greatly exceeds the Newtonian estimate, given the nature of general relativity. We conclude that the impact of any gravity manipulation on propulsion depends fundamentally on its exact definition.
Introduction
Access to space using currently available propulsion systems is extremely limited, both in distance and in time. In recent years, interest in unconventional propulsion proposals has grown in the hope of discovering new forms of propulsion that increase the range of spacecraft while reducing trip times.
Several conceptual mechanisms have been proposed to radically improve the performance of propulsion systems, such as warp-drives 1,2 , transient mass fluctuations 3 , antigravity 4 or gravitational shielding effects 5 . While some of these proposals are only conceptual, others such as the gravitational shielding have been tested and proved unfruitful 6 .
Many of these systems are based on the effect of the manipulation of mass and gravity on a rocket's motion. The impact of such hypothetical manipulations on the performance of propulsion systems has been studied previously, and it has been demonstrated that even if gravity could be controlled or modified, it would not lead to any breakthrough in propulsion 7.
Other conceptual devices (e.g. the space drive) go even further, idealizing a form of propulsion without any reaction mass that would somehow manipulate space-time and matter to create propulsive forces. One of the possible energy sources for such a device, as suggested in Ref. [8], is gravitational potential energy. For concreteness, we shall argue along the lines of the energy considerations suggested in that reference. There, it has been proposed that these systems cannot be analysed using rocketry metrics and that their full potential can only be understood in terms of energetic considerations 8. We understand that this energy-based study should be undertaken; nonetheless, it should be regarded with caution, especially when trying to estimate the potential benefits of converting gravitational potential energy into kinetic energy. Furthermore, we believe that the results of any approach within the Newtonian framework should be considered with great care.
In the current work we approach this space-drive problem from the general relativity point of view. For this purpose we consider the energy-momentum pseudo-tensor in order to estimate the energy in a given volume of space-time. As expected, we find that the resulting energy is considerably larger than the Newtonian estimate based on the difference in the gravitational potential between two distinct points.
Before we present our computation and discuss its implications, we review some of the results presented in Ref. [7] concerning gravity manipulation based on rocketry metrics.
Rocketry Metrics
Classical propulsion systems rely on Newtonian mechanics. The foundations of these propulsion devices lie in the conservation of linear momentum in a variable-mass system composed of a rocket and its propellant.
The existence of hypothetical devices capable of manipulating gravity or mass, and their influence on breakthrough propulsion concepts, has been studied within the framework of Newtonian mechanics 7. Several possible manipulations have been analysed:

- Inertial mass manipulation (scaling of the inertial mass: m i → δ m i ),
- Gravitational mass manipulation (scaling of the gravitational mass: m g → ε m g ),
- Gravitational field manipulation (scaling of the gravitational coupling: G → ε' G).
It has been shown that even if achievable, these manipulations would not imply a breakthrough for propulsion, and in some cases they would have to compete with the existing technologies.
For a summary and full discussion of the results of these hypothetical manipulations, see Ref. [7].
General Relativity Inspired Energy Considerations
General relativity is currently the theory that best describes gravitational phenomena, having been tested in a wide variety of situations, from the solar system up to larger scales (see, e.g., Ref. [10] for a review).
In the context of general relativity, gravity is interpreted as the curvature of a 4-dimensional space-time. The fundamental equations of general relativity are Einstein's field equations:

$$G_{ik} = \frac{8\pi G}{c^4}\, T_{ik}\,,$$

where $G_{ik}$ is the Einstein tensor ($G_{ik} = R_{ik} - \tfrac{1}{2} g_{ik} R$) and $T_{ik}$ the energy-momentum tensor of matter.
Given that general relativity is the theory that best fits the available data at solar-system scales and beyond, it provides the most suitable framework for evaluating the potential propulsion breakthrough of any system that hypothetically converts gravitational potential energy into kinetic energy.
We argue that dealing with gravity manipulation using Newtonian mechanics is somewhat misleading, given that Newton's gravity is concerned only with the dynamics along field lines. Given the nature of general relativity, one should expect that any manipulation of gravity has global consequences, since it would affect the space-time in its surroundings. In a previous approach it has been suggested that the amount of energy available for conversion in a trip between two points separated by a distance L in the Newtonian gravitational field of a mass M is the difference in gravitational potential energy 8:

$$\Delta E = G M m \left(\frac{1}{r_0} - \frac{1}{r_0 + L}\right),$$

where m is the mass of the vehicle and r_0 its initial distance from the centre of M. However, since general relativity describes space-time dynamics as a whole, it allows for a more accurate understanding of the impact of a hypothetical gravity manipulation.
Energy conservation in a given volume of space requires that any hypothetical gravity manipulation can convert only a fraction of the energy available in that volume element into kinetic energy, and that, to do so, it must have the energy resources to manipulate the space-time in question.
In order to estimate this energy, we compute the energy available in a space-time volume near a spherically symmetric mass, like the Sun. To do so it is necessary to introduce some general relativistic formalism.
The energy-momentum conservation of a system composed of matter and a gravitational field can be expressed as 11,12

$$\frac{\partial}{\partial x^k}\left[(-g)\left(T^{ik} + t^{ik}\right)\right] = 0\,, \qquad (3)$$

where T ik is the matter energy-momentum tensor and t ik is the energy-momentum pseudo-tensor.
The energy-momentum pseudo-tensor t ik can be written explicitly as a function of the affine connection 11,12. From Eq. (3) it is clear that the following quantity is conserved:

$$P^i = \frac{1}{c}\int (-g)\left(T^{ik} + t^{ik}\right) dS_k\,,$$

P i being the 4-momentum of the matter plus the gravitational field.
Integrating over a hypersurface of constant time, P i can be written in the form of a 3-dimensional space integral,

$$P^i = \frac{1}{c}\int (-g)\left(T^{i0} + t^{i0}\right) dV\,.$$

To evaluate the amount of gravitational potential energy available in a certain volume of space, it is necessary to find the corresponding P 0:

$$P^0 = \frac{1}{c}\int (-g)\left(T^{00} + t^{00}\right) dV\,.$$

Of course, the energy-momentum pseudo-tensor formalism is just an approximation to the complex problem of defining mass in general relativity without ambiguity. Nevertheless, since our computation is carried out in the weak-field, that is, post-Newtonian limit, it is fairly accurate for our purposes.
It is now necessary to specify the space-time metric; in a weak-field and low-velocity approximation, the metric can be written as

$$ds^2 = \left(1 - \frac{2U}{c^2}\right) c^2 dt^2 - \left(1 + \frac{2U}{c^2}\right)\left(dx^2 + dy^2 + dz^2\right),$$

where $U \equiv GM/r$ is the Newtonian potential. Hence, the (00) component of the energy-momentum pseudo-tensor of the gravitational field, which accounts for the gravitational energy density, becomes

$$t^{00} = -\frac{7}{8\pi G}\left(\nabla U\right)^2.$$
To calculate P 0 it is also necessary to know T 00 , the energy density of the space drive. Assuming that the space drive is a point-like particle of mass m at rest at position $\vec{r}_0$ relative to the centre of the spherical mass, T 00 can then be written as

$$T^{00} = m c^2\, \delta^{3}\!\left(\vec{r} - \vec{r}_0\right).$$

We now have an explicit formula for the energy (c P 0 ) contained within the considered volume element:

$$c P^0 = \int (-g)\left(T^{00} + t^{00}\right) dV\,,$$

where, to first order in U, the metric determinant is

$$(-g) \simeq 1 + \frac{4U}{c^2}\,. \qquad (14)$$

To compute these integrals it is necessary to choose a particular geometrical setting.
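The first-order determinant expansion quoted in Eq. (14) can be checked symbolically; the quick sketch below uses the weak-field metric written above.

```python
import sympy as sp

U, c = sp.symbols('U c', positive=True)

# Weak-field metric: diag(1 - 2U/c^2, -(1 + 2U/c^2), -(1 + 2U/c^2), -(1 + 2U/c^2))
g = sp.diag(1 - 2*U/c**2, -(1 + 2*U/c**2), -(1 + 2*U/c**2), -(1 + 2*U/c**2))

minus_g = -g.det()
# Keep terms up to first order in U, as done for Eq. (14):
print(sp.expand(sp.series(minus_g, U, 0, 2).removeO()))  # -> 1 + 4*U/c**2
```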
Considering that we are dealing with a device that somehow manipulates space-time in the surroundings of the spacecraft as it moves, it is natural to consider a cylinder as the volume element. Of course, one could consider a different geometrical configuration, e.g. a conical geometry, given the radial nature of gravity; however, this would not change our results significantly. For the proposed setting, a cylindrical configuration seems more appropriate, but this is certainly not a fundamental issue. The key point is that general relativity suggests that any gravity manipulation necessarily involves a volume integration.
Computing these integrals in cylindrical coordinates, with 0 < r < R, 0 < θ < 2π and R sun < z < L (see Figure 1), yields the energy contained in the cylinder; in evaluating the integral, only terms up to first order in U were kept from the metric determinant in Eq. (14).
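To get a feel for the magnitudes involved, the integral can be evaluated numerically and compared with the Newtonian trip estimate. The sketch below assumes the weak-field energy-density magnitude 7(∇U)²/(8πG) used above, together with illustrative values for the cylinder radius and the spacecraft mass; it is an order-of-magnitude check, not a substitute for the analytic result.

```python
import numpy as np
from scipy import integrate

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 1.989e30         # kg, solar mass
R_SUN = 6.957e8      # m
m_craft = 1.0e3      # kg, illustrative spacecraft mass
R = 10.0             # m, cylinder radius (illustrative)
L = 1.496e11         # m, cylinder length (~1 au from the solar surface)

def density(rho, z):
    """|t00| * 2*pi*rho: weak-field energy density times the cylindrical
    area element, with |grad U|^2 = (G*M/r^2)^2 and r^2 = rho^2 + z^2."""
    r2 = rho**2 + z**2
    return 7.0 / (8.0 * np.pi * G) * (G * M)**2 / r2**2 * 2.0 * np.pi * rho

E_field, _ = integrate.dblquad(density, R_SUN, R_SUN + L, 0.0, R)
E_newton = G * M * m_craft * (1.0 / R_SUN - 1.0 / (R_SUN + L))

print(f"field energy in cylinder ~ {E_field:.1e} J")   # ~2e25 J
print(f"Newtonian trip estimate  ~ {E_newton:.1e} J")  # ~2e14 J
```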
We point out that a computation using the higher-order metric could also be carried out. We can now see the energy available in a cylinder of radius R and length L measured from the surface of the Sun.
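To give a feel for the scale of such a volume integration, the following minimal Python sketch numerically integrates a gravitational energy density over a thin cylinder extending radially from the solar surface. It is an illustrative computation under stated assumptions, not the paper's exact expressions: we assume the Newtonian-limit pseudo-tensor density t^00 = -7 ∇U·∇U / (8πG), approximate (-g) ≈ 1 so only the pseudo-tensor term contributes, and pick an arbitrary cylinder radius.

```python
import numpy as np
from scipy import integrate

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30   # solar mass [kg]
R_sun = 6.957e8    # solar radius [m]

def grav_energy_density(r_cyl, z):
    """ASSUMED Newtonian-limit pseudo-tensor density: -7 |grad U|^2 / (8 pi G),
    with U = G M / r and r the distance to the Sun's centre. The density is
    negative (binding energy); its magnitude sets the available energy scale."""
    r = np.hypot(r_cyl, z)
    grad_U_sq = (G * M_sun / r**2) ** 2
    return -7.0 * grad_U_sq / (8.0 * np.pi * G)

def cylinder_energy(R, L):
    """Integrate the density over a cylinder of radius R whose axis points
    radially away from the Sun, spanning R_sun <= z <= R_sun + L."""
    # dblquad integrates f(y, x): outer variable x = z, inner variable y = r_cyl.
    integrand = lambda r_cyl, z: 2.0 * np.pi * r_cyl * grav_energy_density(r_cyl, z)
    E, _ = integrate.dblquad(integrand, R_sun, R_sun + L, 0.0, R)
    return E

# Illustrative geometry: a 10 m radius cylinder reaching out to 1 au.
print(f"E ~ {cylinder_energy(R=10.0, L=1.496e11):.3e} J")
```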
Conclusions and Outlook
The possibility of manipulating gravity to extend space travel to interstellar distances and to reduce trip time is an interesting topic from both the physics and the propulsion points of view. Evaluating the potential of any gravity manipulation concept must be carried out in the context of the current theories of physics.
In this work we approached these issues using the general relativity framework.
Through the use of the energy-momentum pseudo-tensor of the gravitational field we estimated the energy contained in a volume of space-time. Our results reveal that there is more potential energy available than in the previous Newtonian estimate. However, the interpretation of this result greatly depends on the exact definition of gravity manipulation. Understanding our calculation as the energy available for conversion leads to an encouraging conclusion, since the energy available is much larger than previously estimated. On the other hand, regarding this result as the energy that must be spent to control a region of space-time leads to a radically different conclusion. From this point of view, gravity manipulation is an essentially unfruitful process for propulsion purposes. | 2019-04-14T03:15:35.567Z | 2006-10-16T00:00:00.000 | {
"year": 2006,
"sha1": "fd4ba6abe612e141705a89f4e78ded20cf56cf28",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fd4ba6abe612e141705a89f4e78ded20cf56cf28",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
29503905 | pes2o/s2orc | v3-fos-license | Stroboscopic vision and sustained attention during coincidence-anticipation
We compared coincidence-anticipation performance in normal vision and stroboscopic vision as a function of time-on-task. Participants estimated the arrival time of a real object that moved with constant acceleration (−0.7, 0, +0.7 m/s2) in a pseudo-randomised order across 4 blocks of 30 trials in both vision conditions, received in a counter-balanced order. Participants (n = 20) became more errorful (accuracy and variability) in the normal vision condition as a function of time-on-task, whereas performance was maintained in the stroboscopic vision condition. We interpret these data as showing that participants failed to maintain coincidence-anticipation performance in the normal vision condition due to monotony and attentional underload. In contrast, the stroboscopic vision condition placed a greater demand on visual-spatial memory for motion extrapolation, and thus participants did not experience the typical vigilance decrement in performance. While short-term adaptation effects from practicing in stroboscopic vision are promising, future work needs to consider for how long participants can maintain effortful processing, and whether there are negative carry-over effects from cognitive fatigue when transferring to normal vision.
The human visual system typically receives an intermittent flow of incoming information due to blinks, saccades and periods of transient occlusion when an object-of-interest disappears from view behind another object or surface (e.g., as the ball is obscured by the defensive players during a free kick in soccer). This usually goes unnoticed, with the intermittent input transformed into a unified and continuous perceptual experience. However, even when there are longer periods of occlusion (e.g., artificial manipulation using stroboscopic vision eyewear), relevant information can be gained from intermittent visual samples to provide sufficient information for successful performance of precision interceptive actions 1 . Recently, it has been reported that practicing in such vision conditions can facilitate sports-specific skills in ice-hockey 2 and baseball 3 . Analogous to altitude training for the endurance athlete 4 , the premise is that practicing in stroboscopic vision encourages visual-cognitive processes to adapt in order to cope with the suboptimal information available. Processes shown to transfer positively when vision is subsequently restored to normal include short-term visual memory 5 , coincidence-anticipation timing 6 , and motion coherence and attention in central vision 7 .
Continuing with the analogy of altitude training, it follows that practicing in stroboscopic vision is effortful and attentionally demanding. Indeed, anecdotal reports suggest that participants exhibit more focussed attention on an approaching object when practicing catching tasks in stroboscopic vision 8 . This is consistent with related empirical work that has shown an overall increase in attention (i.e., "high-beams" effect) in order to maintain a persistent visual-spatial memory of relevant stimulus locations (i.e., object and distractors) when vision is intermittently occluded 9 . It is important to recognize, however, that a high attentional load and effortful processing cannot be maintained indefinitely. In accord with the overload hypothesis 10 , it follows that a high attentional load can eventually lead to the depletion of attentional resources and a decrement in performance. This has implications for the design of stroboscopic vision training programmes, which to date have used both experimenter-determined (e.g., 25 minutes 5 , 5-7 minutes 6 ) and self-determined (e.g., 10-45 minutes 2 ) exposure durations.
In the current study we sought to determine the effect of stroboscopic vision on attentional allocation while performing coincidence-anticipation timing, which is a key element to many daily life activities such as driving or in different sporting disciplines where it is necessary to avoid or intercept moving objects 11 . Rather than using a probe-reaction procedure to determine the amount of attention used when performing coincidence-anticipation timing in stroboscopic vision compared to normal vision, we were interested to know if stroboscopic vision influences the ability to sustain attention as a function of time-on-task. Therefore, we adopted the method used for testing psychomotor vigilance, whereby participants are required to sustain attention over time in order to respond efficiently to repeated presentation of the imperative stimulus 12 . Specifically, we compared vigilance in a normal vision condition and a stroboscopic vision condition (4 Hz) while performing repeated trials of a coincidence-anticipation task in which the object moved with constant acceleration (−0.7, 0, +0.7 m/s 2 ). We hypothesised that participants would exhibit deterioration in performance (accuracy and variability) as a function of time-on-task in the normal vision condition due to monotony and attentional underload 13 . Conversely, we hypothesised that the greater demand on visual-spatial memory for motion extrapolation in the stroboscopic vision condition would enable participants to sustain attention and thus offset the typical vigilance decrement.
Methods
Participants. Twenty male undergraduate students (M = 23.15 years of age, SD = 2.35) volunteered to take part in the study. All participants reported having normal or corrected-to-normal vision. Participants were provided with general information about the task and stimulus prior to giving informed written consent. All procedures were conducted in accordance with the Declaration of Helsinki and were approved by the Liverpool John Moores University Research Ethics Committee.
Apparatus, Task and Procedure. Coincidence-Anticipation. Participants were required to press a button mounted in a hand-held joystick at the moment an object (single red LED of 5 mm diameter) that moved along a 3 m linear track (HEPCO) reached a fixed target position. The target comprised two red LEDs (5 mm diameter) mounted on either side of the track. The object was attached to a sled that was moved along the linear track by a stepper motor controlled by in-house routines implemented in MATLAB (The Mathworks, Inc., MA, USA). The object moved with constant acceleration (−0.7, 0, +0.7 m/s 2 ) such that it reached the target after 1000 ms, moving with a velocity of 1.25 m/s. It then continued to move with the same acceleration for a further 100 ms, after which it was brought to a standstill. The object remained stationary for 2000 ms, and was then moved slowly back to the start position for the next trial. The moments at which the button was pressed and the sled reached a switch located coincident with the target were recorded via a data acquisition card (NI PCI-6035E) and stored for offline analysis.
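As a quick sanity check on these stimulus kinematics, the short sketch below recovers the implied start velocity and travel distance for each acceleration condition from the stated arrival constraints (1.25 m/s at the target after 1000 ms); it assumes ideal constant-acceleration motion and is our own illustration, not part of the study's control code.

```python
# Given arrival velocity v = 1.25 m/s at t = 1.0 s, recover the implied
# start velocity and travel distance for each acceleration condition.
t, v_arrival = 1.0, 1.25
for a in (-0.7, 0.0, +0.7):
    v0 = v_arrival - a * t          # from v = v0 + a t
    d = v0 * t + 0.5 * a * t**2     # from d = v0 t + a t^2 / 2
    print(f"a={a:+.1f} m/s^2 -> v0={v0:.2f} m/s, distance={d:.2f} m")
# Output: 1.60 m (deceleration), 1.25 m (constant) and 0.90 m (acceleration),
# all comfortably within the 3 m track.
```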
Participants performed the coincidence-anticipation task in a normal vision condition and stroboscopic vision condition. In the latter, participants wore eyewear (Nike Vapor Strobe ® ) with LCD lenses that cycled between "open" and "closed" states. The "open" state had a fixed duration of 100 ms, whereas the "closed" state could be set at one of eight levels. Following previous work on stroboscopic vision during coincidence-anticipation 6 , here we selected level 3 for the closed state, which had a duration of 150 ms (i.e., 4 Hz cycling rate). In the "closed" state the lenses are less transparent and thereby are likely to perturb perception of motion and form (see discussion). Effectively, in the "closed" state the lenses act as neutral density filters and thus reduce light transmission. Under ambient room lighting (625 lux), which was used throughout experimental testing, a digital light meter (Lutron LX-1108, Taipei, Taiwan) located directly behind the lens of the stroboscopic vision eyewear indicated the illuminance was 128 lux in the "closed" state. An illuminance of 100 lux is similar to that of a "very dark overcast day" 14 , while 320 lux is the minimum illuminance for office lighting recommended by the US Department of Labour. We were unable to reliably measure illuminance when the stroboscopic eyewear were in the "open" state (100 ms), although the lenses were sufficiently transparent that participants reported having normal visibility.
To ensure that participants in the normal vision condition believed they were the subject of an intervention, and thus experienced similar expectation effects to the stroboscopic vision condition (i.e., Hawthorne and/or placebo effects), they performed the coincidence-anticipation task while wearing a pair of NVIDIA LCD shutter glasses (Expressway Santa Clara, CA, USA). These were neither switched on nor connected to a 3D graphics card, and thus permitted light transmission of 239 lux. While illuminance was reduced compared to that of ambient lighting, participants reported having an uninterrupted view of the moving object during the coincidence-anticipation task.
Prior to commencing experimental testing, the experimenter explained the procedure and provided the participant with the necessary eyewear. Half of the participants performed the coincidence-anticipation task in the stroboscopic vision condition followed by the normal vision condition, whereas the other half performed the normal vision condition followed by the stroboscopic vision condition. The participant next performed 10 familiarization trials, followed by 4 blocks of 30 experimental trials. Within each block, the level of acceleration was pseudo-randomly ordered to encourage participants to use the available visual information (e.g., not respond at a fixed distance) and to minimize boredom associated with repeated attempts with the same motion. To prevent a learning effect with respect to acceleration that could have influenced allocation of attention, knowledge of results on coincidence-anticipation accuracy was not communicated to the participant. The duration to complete each vision condition was between ten and eleven minutes depending on the participant's response time on each trial, and was thus similar to previous studies that have shown a vigilance decrement when completing a computer-based reaction time (RT) task (see below) for an extended number of trials without a break.
Psychomotor Vigilance. Between completing the coincidence-anticipation task in the normal vision and stroboscopic vision conditions, participants performed a computer-based psychomotor vigilance task (PVT). The presentation of stimuli, timing operation, and collection of responses was controlled by E-Prime software (Psychology Software Tools, Pittsburgh, PA, USA) running on a desktop computer (Dell OptiPlex). The PVT required participants to respond, as rapidly as possible, to a visual stimulus that appeared on a computer monitor located 50 cm from where they were seated. During each trial of the PVT, a Gabor patch (4.20° × 4.20°) was presented with a horizontal orientation against a grey background at the center of the screen. Then, after a random time interval between 2000 ms and 10000 ms, the orientation of the Gabor patch was abruptly switched to vertical (see Fig. 1). Participants were instructed to respond to this change of orientation as quickly as possible by pressing the space bar on a keyboard (Razr Lycosa, 1000 Hz polling) with the index finger of their dominant hand. Feedback of the response time was displayed on the screen after each trial during a 300 ms inter-trial interval. If no response was given within 5000 ms of changing the orientation of the Gabor patch, the message "You did not answer" appeared on the screen and the next trial began. The PVT lasted for 9 minutes without interruptions and is accepted to provide a simple and reliable measure of vigilance given the monotonous, repetitive, and unpredictable nature of the target onset 15 . It has been reported that failures (e.g., slowing of RT or an increase in lapses) in vigilance performance in the PVT can occur within 5 minutes in adults 15,16 , and within even shorter durations in adolescents and children [17][18][19]. However, we decided to follow the original developers' recommendation of a 9 minute PVT 20 , which also approximated the duration of the coincidence-anticipation task and thus permitted between-task comparison.
Data Analysis. Coincidence-Anticipation. We first calculated the signed error on each trial between the button press and object arrival at the target. Responses with an absolute error of >300 ms were classified as outliers and removed (<1.0%) from further analyses 11 . From the remaining trials, we calculated the intra-participant mean constant error (accuracy) and variable error (variability) for each level of object acceleration in each of the four blocks. The intra-participant mean data were submitted to separate 2 Vision (normal, stroboscopic) × 3 Acceleration (−0.7, 0, +0.7 m/s 2 ) × 4 Block (1, 2, 3, 4) repeated-measures ANOVAs. In cases where Mauchly's sphericity test was significant, Greenhouse-Geisser corrections were applied. Tukey's Honestly Significant Difference (HSD) tests were then used to determine the origin of any significant main and interaction effects.
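A minimal sketch of this per-participant error summary is given below; the function and variable names are illustrative rather than taken from the study's analysis scripts.

```python
import numpy as np

def summarise_errors(signed_errors_ms, outlier_ms=300.0):
    """Summarise coincidence-anticipation timing errors for one participant.

    signed_errors_ms: per-trial signed error (button press minus object
    arrival) in milliseconds; negative values are early responses.
    Trials with |error| > outlier_ms are discarded before computing
    constant error (mean, accuracy) and variable error (SD, variability).
    """
    e = np.asarray(signed_errors_ms, dtype=float)
    kept = e[np.abs(e) <= outlier_ms]
    constant_error = kept.mean()          # systematic bias
    variable_error = kept.std(ddof=1)     # trial-to-trial variability
    return constant_error, variable_error, len(e) - len(kept)

ce, ve, n_out = summarise_errors([-12.0, 35.0, 8.0, -410.0, 22.0])
print(f"CE={ce:.1f} ms, VE={ve:.1f} ms, outliers removed={n_out}")
```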
Psychomotor Vigilance. The intra-participant mean (accuracy) and standard deviation (variability) of RT were calculated for consecutive 3 minute intervals of the 9 minute total task duration. Trials with RT below 100 ms (<1.0%) were considered to be anticipation errors and therefore discarded from the analysis 15 . The intra-participant mean and standard deviation data of RT were submitted to a one-way ANOVA with Block (1, 2, 3) as a repeated measure. Tukey HSD post-hoc tests were used to investigate the significant main effect.
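Similarly, a hedged sketch of the PVT reduction, binning reaction times into consecutive 3-minute blocks after discarding anticipations (<100 ms), might look as follows; the example data are invented.

```python
import numpy as np

def pvt_blocks(onset_times_s, rts_ms, block_len_s=180.0, n_blocks=3):
    """Bin PVT reaction times into consecutive blocks (here 3 x 3 min) and
    return per-block (mean, SD); RTs < 100 ms are treated as anticipation
    errors and discarded, as in the study."""
    onset = np.asarray(onset_times_s, dtype=float)
    rt = np.asarray(rts_ms, dtype=float)
    keep = rt >= 100.0                     # drop anticipation errors
    onset, rt = onset[keep], rt[keep]
    stats = []
    for b in range(n_blocks):
        in_block = (onset >= b * block_len_s) & (onset < (b + 1) * block_len_s)
        stats.append((rt[in_block].mean(), rt[in_block].std(ddof=1)))
    return stats

# Invented trial onsets (s) and RTs (ms); the 95 ms trial is discarded.
print(pvt_blocks([10, 30, 70, 200, 260, 400, 500],
                 [310, 95, 305, 350, 420, 380, 465]))
```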
Results
Coincidence-Anticipation. For constant error there was a significant main effect of Acceleration. For variable error there was also a significant main effect of Acceleration, F(2,38) = 10.62, p < 0.001, η²partial = 0.36. Participants were more variable when the object decelerated (45 ms) compared to when it moved with constant velocity (40 ms) or accelerated (38 ms), both p < 0.05. There was also a significant main effect of Vision, F(1,19) = 64.68, p < 0.001, η²partial = 0.77, but this was superseded by a significant interaction between Vision and Block, F(3,57) = 3.04, p = 0.04, η²partial = 0.14. As shown in Fig. 3, participants became more variable in the normal vision condition across the 4 blocks, whereas they maintained a similar level of variable error in the stroboscopic vision condition. As a consequence, the initial difference in variable error between the normal vision and stroboscopic vision conditions at block 1 (p < 0.01) and block 2 (p < 0.03) was no longer present at block 3 and block 4 (p > 0.50). Observation of the individual participant data revealed that 15 of 20 exhibited an increase in variable error.
Discussion
It has recently been reported that practice under stroboscopic vision conditions can facilitate the development of sport-specific skill 2,21 , and that this could be explained in part by adaptation of processes such as motion coherence and attention in central vision 7 and visual-spatial memory 5 . Central to this adaptation is the premise that practice in stroboscopic vision is effortful and demanding 20 . For instance, it has been reported that contrast sensitivity, which is important for form perception, is impaired at low levels of luminance 22 , and that thresholds for coherent motion (translational) and heading direction (radial) increase as luminance levels decrease 23 . Indeed, it is known that tracking a moving object relative to the surrounds in stroboscopic vision is an attentionally demanding task 9 , which likely engages areas in pre-frontal cortex associated with working memory for trajectory extrapolation 24 . In the current study, we adapted a method used to study psychomotor vigilance 12,20 , in order to determine whether stroboscopic vision influences the ability to sustain attention and respond accurately while performing a coincidence-anticipation task.
We found that the group of participants became less accurate and more variable in their coincidence-anticipation responses in the normal vision condition as a function of time-on-task. Conversely, accuracy and variability were maintained at a similar level in the stroboscopic vision condition. Consequently, differences in accuracy and variability that existed between the normal vision and stroboscopic vision conditions during the first block of 30 trials were no longer evident during the last block of 30 trials. Consistent with explanations of the vigilance decrement, we interpret these data as showing that participants failed to sustain attention after repeated trials due to the monotony and relatively simple demands of coincidence-anticipation performed in normal vision (i.e., the underload hypothesis 13 ). At the level of individual participants, this was reflected in approximately two-thirds exhibiting deterioration in both accuracy and variability in the normal vision condition. Such a change in behaviour would not be expected if participants had developed a systematic bias (i.e., underestimation or overestimation) in their anticipation of object arrival time as a function of block.
In contrast, in the stroboscopic vision condition where there was a greater demand on visual-spatial memory for trajectory extrapolation, it would seem that participants were better able to sustain attention, and thus maintain performance over time. Indeed, there was evidence that some participants improved accuracy (n = 13) and variability (n = 11) across blocks in stroboscopic vision. For half of the participants there was a concurrent change in accuracy and variability that was consistent with a systematic improvement in anticipation of object arrival time. Importantly, this positive adaptation would not be expected had participants disengaged from the task due to high levels of boredom or fatigue. That is not to suggest, however, that coincidence-anticipation performance would be maintained indefinitely in the stroboscopic vision condition, and by all participants. Rather, in accord with the overload hypothesis 10 , it follows that a high attentional load and effortful processing cannot be maintained, thus eventually leading to the depletion of attentional resources and subsequently a vigilance decrement in performance.
An important consideration in previous work regarding the benefit of stroboscopic vision training has been the potential influence of motivational and expectancy effects such as placebo or Hawthorne 25 . In the current study, we were careful to include additional experimental control to ensure that any change in coincidence-anticipation was not simply a result of expectancy. In particular, given the use of novel eyewear for both the stroboscopic vision and normal vision conditions, there is no reason to believe that participants would have associated a particular eyewear with a treatment or control condition and thus modified their response accordingly. Also, we did not provide participants with knowledge of results, thus minimizing motivational effects of learning. This was important because there could have been asymmetrical motivational effects if participants were better able to use the knowledge of results in the normal vision condition to reduce response error to very low levels (e.g., 0-30 ms constant error previously reported 11 ). Another important control was to present a real moving object rather than an apparent motion stimulus (e.g., Bassin-Anticipation timer). The idea was to minimize the possibility of asynchrony between the open state of the stroboscopic eyewear and presentation of the stimulus, thereby giving participants the opportunity to see the moving object for the duration of the 100 ms intermittent "open" interval. That said, it is worth noting that we were unable to equate the amount of light transmitted through the different eyewear, and thus reaching the eye, in the "open" state. Although a potential confound, we suggest that any difference in light transmission between the stroboscopic eyewear in the "open" state and the control eyewear is unlikely to have influenced the observed results. For instance, participants in our study reported being able to see normally through both eyewear (i.e., stroboscopic eyewear in the "open" state), whereas others have found that throwing and catching drills are not sufficiently demanding when the stroboscopic eyewear are set to level 1 (100 ms open, 67 ms closed) 5 . Finally, we also measured sustained attention in a computer-based vigilance task (PVT). This confirmed that the majority of participants became less accurate and more variable as a function of time-on-task. However, the vigilance decrement in the computer-based task was not significantly correlated with the change in coincidence-anticipation performance (i.e., accuracy and variability). This finding was not unexpected given that the two tasks have different processing and response demands, and consequently might be affected differently by the vigilance decrement 10,26 .
The notion that practicing coincidence-anticipation in stroboscopic vision engages attention is consistent with the "immediate benefit" reported by Smith & Mitroff 6 . In their study, participants' coincidence-anticipation behaviour was significantly more accurate in a normal vision post-test immediately after practicing for 5-7 minutes in a stroboscopic vision condition compared to a normal vision condition. We concur with the authors' suggestion that this effect was not evidence of long-term improvements due to learning, and instead that brief exposure to stroboscopic vision could be used to enhance performance when needed in specific game situations (i.e., before a baseball player prepares to bat). Interestingly, there is also some evidence that stroboscopic vision can be used to prevent injury and accelerate rehabilitation 27 . However, as with studies that have shown improvements in psychomotor skill and function following stroboscopic vision training, it remains to be determined to what extent the benefits are due to an increase in attentional resources in order to cope with increased task difficulty and/or a redirection of attention to alternative sources of information (e.g., somatosensory and vestibular inputs in the case of ACL injury). Notably, while the current study used eyewear that are no longer available, there are alternative commercial eyewear (e.g., PLATO Visual Occlusion Spectacles; Senaptec Strobe; Visionup Strobe Glasses) that permit greater control over the duration of the open and closed states. While not the aim here, in future work it will be relevant to determine whether resistance to a vigilance decrement in stroboscopic vision is influenced by factors such as the amount of light transmitted through the lenses of the eyewear and the strobe rate, both of which could influence the perception of motion and form. For instance, we used a strobe rate of 4 Hz in the current study, but a lower strobe rate requiring longer intervals of extrapolation would potentially place greater demand on visual-spatial memory, thus more quickly leading to overload. Alternatively, practicing at a higher strobe rate requiring shorter intervals of extrapolation could quickly become less demanding, thus leading to disengagement. In this respect, the use of a "levelling-up" procedure whereby strobe rate is progressively reduced based on performance success 7 would seem justified. Finally, an interesting question from the current study is whether alternating between periods of stroboscopic vision and normal vision during practice might have additional benefit. For instance, practice in stroboscopic vision might enable participants to offset the monotony of practicing in normal vision alone, thereby facilitating improved processing of relevant information and better learning. | 2018-04-03T01:25:55.844Z | 2017-12-20T00:00:00.000 | {
"year": 2017,
"sha1": "fba82617cec1eb41343b2b82201bc9fe466bfe67",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-18092-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b4fca594250736a197a66c64e03b50b2d6f4e4c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
258950660 | pes2o/s2orc | v3-fos-license | High Genetic Diversity and Structure of Colletotrichum gloeosporioides s.l. in the Archipelago of Lesser Antilles
Colletotrichum gloeosporioides is a species complex of agricultural importance, as it causes anthracnose disease on many crop species worldwide and has a strong regional impact on Water Yam (Dioscorea alata) in the Caribbean. In this study, we conducted a genetic analysis of the fungal complex in three islands of the Lesser Antilles: Guadeloupe (Basse Terre, Grande Terre and Marie Galante), Martinique and Barbados. We specifically sampled yam fields and assessed the genetic diversity of strains with four microsatellite markers. We found a very high genetic diversity of strains on each island, and intermediate to strong levels of genetic structure between islands. Migration rates were quite diverse both within islands (local dispersal) and between islands (long-distance dispersal), suggesting important roles of vegetation and climate as local barriers, and of winds as an important factor in long-distance migration. Three distinct genetic clusters highlighted different species entities, though there was also evidence of frequent intermediates between two clusters, suggesting recurrent recombination between putative species. Together, these results demonstrate asymmetries in gene flow both between islands and between clusters, and suggest the need for new approaches to anthracnose disease risk control at a regional level.
Introduction
Colletotrichum is a widespread pathogen of cultivated plants [1], causing anthracnose disease, fruit rot or stem dieback on many crops worldwide [2][3][4]. Its ubiquity in both wild [5] and cultivated environments [6] is probably increased by its relatively complex ecology, with lifestyles ranging from casual commensal endophyte [7] to parasitic pathogen, biotrophic to necrotrophic phases [8], and an organization as multiple species complexes [9] with blurry degrees of gene flow and varying levels of host range and aggressiveness on their incipient hosts [10][11][12]. It has long been regarded through the lens of pathogen-host interaction pairs, with a transient historical redefinition as morpho-species complexes (formally changing recognized species from the thousands down to twelve clearly identified morphs [13]). Current classification trends are building on bar-code-like sequence approaches to systematics [1,14,15], and identified species are progressively reformulated while their total number is increasing again (currently within the hundreds) [16]. Many issues remain regarding the characterization of species at more ecologically relevant levels, and these might at least partially be addressed via regular population genetic analysis and the identification of polymorphism segregation pools at both local and regional scales.
Studies of the genetic structure of populations have shown a fairly high dependency on the kind of marker in use. In this regard, studies of populations of Colletotrichum gloeosporioides
Materials and Methods
From November to December 2015, we collected information on farm management practices and varietal diversity from yam producers and sampled their yam fields for necroses on leaves in four islands of the Lesser Antilles: Barbados, Guadeloupe (with both the tropical humid Basse Terre and the dry Grande Terre areas considered distinct populations due to climate and altitude contrasts, see [39] for geographic details) together with its highly agricultural dependency Marie Galante, and Martinique. We collected 15 necrotic yam leaves per sampled field (except in Barbados, where yam plots were bigger and we increased the sampling effort to 25 necrotic leaves for the sake of field-size representativeness) during the day and placed them in Eppendorf tubes filled with 2 mL of autoclaved V8 solution. In the lab, we rinsed each collected necrosis for 1 min in a hypochlorite solution, followed by a 1 min bath in alcohol, before two further rinsing steps of 1 min in distilled water [6]. We then placed necroses on Petri dishes with S medium to facilitate the growth of Colletotrichum strains. After 5 days, we verified whether the fungi belonged to the C. gloeosporioides complex based on conidia morphology and placed the study strains in V8 liquid culture medium for three days at room temperature, then kept the microtubes refrigerated at 4 °C for a few days before multiplication and DNA extraction. The prevalence of C. gloeosporioides was very variable in the sampled fields, averaging 48.26% (range 7-88%). In 2016, DNA extractions were conducted from the V8 solutions using a FastDNA kit (MP Biomedicals, Irvine, CA, USA) with Lysing Matrix A for fungal cell lysis. Beforehand, we amplified via PCR the CaInt2, CgInt and ITS4 regions to confirm the prior visual assessment by microscopy [38]. Every study strain correctly amplified the expected fragment for C. gloeosporioides. Nevertheless, 3 strains also amplified fragments diagnostic of C. acutatum, and were dismissed from the study sample. We genotyped the strains at 4 microsatellite loci recently developed in the lab (markers Cg150, Cg68, Cg71, Cg92) [36], with the following forward and reverse primers, respectively: TACCAGGGGTGGCAGCTC and GGTCCAGGGACTCAAGCTC for Cg150, TGGTCTGCTTCTCGACACTG and AGCCAAGAGACCAAGCAAGA for Cg68, TGATGGTTGTCATGGGATTC and GATCATGTCTCCATCCGCTC for Cg71, and CATTTTCCACAGCCCACAC and GCAGCAGGTGTGAGAAGAGA for Cg92. Genotypic stability was verified by randomly retesting strains with a second, independent PCR amplification. PCR conditions consisted of a denaturation stage at 95 °C for 5 min followed by 40 cycles at 95 °C for 30 s, 59 °C for 30 s and 72 °C for 30 s (more details in [36]).
The sample size was 560 strains in total, from 58 yam fields: 16 fields in Guadeloupe (10.38 ± 6.6 strains per field, range 1-22), 18 fields in Martinique (4.70 ± 3.2 strains per field, range 1-12) and 24 fields in Barbados (11.75 ± 5.5 strains per field, range 2-22). Since there was dramatic variation in the number of strains sampled per field, the field was not further analyzed as a structuring level for genetic diversity. Genetic analyses were run with hierfstat [40] in R [41]. We report here the allelic diversity at each study locus and the following population structure indices: Hs, local gene diversity; Ht, total gene diversity; H't, total gene diversity corrected for sample size; Dst, genetic distance among populations; D'st, genetic distance among populations corrected for sample size; Fst, index of structure; F'st, index of structure corrected for sample size; and Dest, shared genetic diversity among populations, or Jost index. We estimated migration rates with the formula Nm = [(1/Fst) − 1]/2, adapted from Wright [42] by correcting for ploidy level (C. gloeosporioides species being haploid, we divided by 2 where 4 would be used in diploid populations, see [43]). Lastly, we conducted a principal component analysis on individual genotype frequencies (centered, unscaled matrix) with hierfstat [40] to explore potential clustering in our dataset.
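As a concrete illustration of this migration-rate formula, the short Python sketch below applies Nm = [(1/Fst) − 1]/2 to the global Fst of 0.095 reported in Table 1; the function name and the ploidy argument are our own illustrative choices, not part of the hierfstat workflow.

```python
def migration_rate(fst, ploidy=1):
    """Island-model estimate adapted from Wright: Nm = [(1/Fst) - 1] / (2 * ploidy).
    For haploids (ploidy=1) the divisor is 2, as in this study; for diploids
    (ploidy=2) it becomes the classical divisor of 4."""
    return (1.0 / fst - 1.0) / (2.0 * ploidy)

# Global Fst reported for the archipelago: 0.095
print(f"Nm = {migration_rate(0.095):.2f} migrants per generation")
```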
Genetic Diversity
The extent of genetic diversity was very high for all populations, with the four loci demonstrating high allelic richness (42 alleles for Cg150, 50 alleles for Cg68, 54 alleles for Cg71 and 76 alleles for Cg92), a high share of these alleles among islands (see Figure 1; islands with lower prevalence, such as Basse Terre and Marie Galante, nevertheless had lower allelic diversity levels overall) and greater diversity in Barbados in general (Table 1). Most alleles were nevertheless rare (low frequency), resulting in an important number of diagnostic alleles for islands, and rarefied allele estimates averaged between 2 and 3 (Table 1).
As a consequence of this dramatic diversity level, most strains were characterized by a unique genotype, and we counted only 20 multilocus genotypes shared among Colletotrichum samples, covering 61 strains in total. Most identical multilocus genotypes occurred in pairs or triplets (mean clonality level = 3.05 ± 1.80, range 2-8). Clonality thus represented about 10.89% of total samples, spread similarly among islands. Interestingly, few clones were actually sampled within fields (two clones in Barbados, one clone in Basse Terre, two clones in Grande Terre and two clones in Martinique), while many clones were distributed in different fields within populations (a situation found nine times in Barbados, three times in Basse Terre, three times in Grande Terre and three times in Martinique). In a few cases, clones were sampled between different populations: three times between fields in Basse Terre and Grande Terre, and once between Grande Terre and Barbados. The latter situations probably represented recent migration events.
Table 1. Summary statistics for allelic diversity and genetic structure among study islands. Number of alleles (A), number of diagnostic alleles (not shared with other islands) (D) and rarefied allele numbers (R) are indicated for each study locus. Fst values (p-values of testing difference from 0 as superscripts) and confidence intervals (95% CI) are produced. Populations behaving as panmictic locally (95% CI includes 0) are indicated in bold. We give statistics both for Guadeloupe globally and for each Guadeloupean area individually (Basse-Terre, Grande-Terre and Marie Galante). The global Fst value is 0.095 with 95% CI [0.0285-0.168]. NS indicates non-significant departure from 0. Since allelic diversity was shared among populations, with the exception of diagnostic alleles, and actually reached reasonably high levels everywhere, all loci had important impacts on the structuring of genetic diversity in the Archipelago (local allelic richness was important, but always lower than expected for a single theoretical panmictic population: Hs was lower than Ht or H't, and both Dst and D'st show an important share of variation between populations for all loci, Table 2). As a result, the Fst, F'st and Jost Dest estimates give evidence of a geographical effect of the Archipelago condition, with signs of moderate to strong genetic structuring of C. gloeosporioides (Table 2).
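For readers less familiar with these indices, the minimal sketch below computes Nei's Hs, Ht and Fst = (Ht − Hs)/Ht for an invented two-population, one-locus example; it deliberately omits the sample-size corrections (H't, F'st) that hierfstat applies, so it is a didactic approximation rather than the study's actual computation.

```python
import numpy as np

def nei_fst(pop_allele_freqs):
    """Nei's gene diversity partition for one locus.
    pop_allele_freqs: array of shape (n_pops, n_alleles), rows summing to 1.
    Returns Hs (mean within-population diversity), Ht (total diversity)
    and Fst = (Ht - Hs) / Ht."""
    p = np.asarray(pop_allele_freqs, dtype=float)
    hs = np.mean(1.0 - np.sum(p**2, axis=1))   # mean of 1 - sum(p_i^2)
    p_bar = p.mean(axis=0)                      # overall allele frequencies
    ht = 1.0 - np.sum(p_bar**2)
    return hs, ht, (ht - hs) / ht

# Invented allele frequencies for two populations at one locus.
hs, ht, fst = nei_fst([[0.7, 0.2, 0.1],
                       [0.2, 0.5, 0.3]])
print(f"Hs={hs:.3f}, Ht={ht:.3f}, Fst={fst:.3f}")
```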
Estimates of Migration and Gene Flow
Since there was overall evidence of genetic structure in the islands (Table 2), we calculated pairwise Fst values between study populations. There was indeed variation in the extent of genetic structuring between islands (Table 3), and interestingly, the differences were not consistent with geographic distance: for example, Barbados demonstrated smaller values with Grande Terre and Marie Galante than with the closer Martinique. Conversely, some geographically close populations had greater values (example: Basse Terre and Grande Terre, Table 3). Lastly, both the Grande Terre and Marie Galante populations hinted at behaving as a single panmictic population with recurrent propagule exchange. We estimated migration rates based on pairwise Fst, and the values indicated a broad variation in the number of migrating spores both within and between islands (Table 3). These estimates suggest different processes, as migration between islands reflects long-distance dispersal and the relative contribution of other genetic pools to local gene admixtures, while migration estimates within islands reflect the ease with which spores can establish via local dispersal, operating through altitude, climatic and vegetation constraints. Estimating the average number of spores contributing to genetic and mating pools allowed us to envision how gene flows link the different islands (Figure 2). Some flows are indeed much lower than others, and there were strong asymmetries in the contribution of migration between islands.
Overall, dispersal within islands followed two contrasting trends: situations where local dispersal was lower on average than long-distance migration (Basse Terre especially, but also Grande Terre, and to a lesser extent Martinique), and situations where local dispersal was greater than long-distance migration (Barbados, Marie Galante) (Figure 2). We can safely assume that the genetic dynamics of the C. gloeosporioides complex in the Lesser Antilles follow a metapopulation pattern with both sources and sinks of strains. Since the pattern does not reflect the physical distance between islands (and does not hint at isolation by distance), alternative hypotheses need to be developed, among which climate and vegetation acting as local dispersal barriers, and winds as a major driver of gene flow for long-distance dispersal (see discussion).
Figure 2. Gene flows between study islands. Auto-arrows represent flow within populations (yam fields within an island) and follow the colour code described above. Islands do not follow their geographic arrangement, for the sake of clarity (actual geographic arrangement on the right map). Scale for Guadeloupe (upper island) is 20 km, scale for Martinique (lower left) is 15 km, and scale for Barbados (lower right) is 10 km. Islands are roughly at scale relative to each other. Scale for the Lesser Antilles is 200 km.
Genetic Clusters
Congruently with high levels of dispersal, clustering was not really altered by geography, yet three independent genetic clusters emerged from our data, reflecting three sampled Colletotrichum species from the C. gloeosporioides complex in yam fields in the Caribbean (Figure 3). Preliminary sequence analysis indicates that one of them is C. siamense, and a second one is a currently undefined species (S. Guyader, personal communication; work in progress). All islands presented their share of two clusters (origins are interspersed in both), in approximately similar proportions, save for Martinique which demonstrated no samples from the leftward cluster (Figure 3). Interestingly, one cluster (on the left) is separated and stands alone, possibly as a true species genetically isolated from the other clusters (though this might otherwise be due to lack of sampling), while two clusters seemed interconnected by numerous intermediate strains, strongly suggesting that recombination between strains from both clusters is occurring at high enough frequency.
Discussion
Our results showed astonishingly high levels of genetic diversity in the C. gloeosporioides complex sampled on yam in fields of three Caribbean islands of the Lesser Antilles (Guadeloupe: Basse Terre, Grande Terre and Marie Galante; Martinique; and Barbados). Allelic diversity was rich enough to demonstrate both diagnostic alleles, sometimes down to field level, and importantly shared genetic components between islands. Clonality was nevertheless relatively low, suggesting that asexual multiplication is not contributing strongly to local structure at the field level, but that contamination occurs via many sources, most probably from local vegetation. Genetic structure was strong, indicating that study populations function, at least partially, as distinct entities, yet it also highlighted the importance of long-distance migration (wind dispersal between distant islands), often with rates greater than local dispersal (suggesting that factors such as vegetation and local climate impede propagation locally). Lastly, PCA highlighted three distinct genetic clusters, indicative of the sampling of three putative species within the complex, with one cluster fully differentiated while two clusters exhibited numerous intermediate genotypes, thus hinting at casual recombination between strains. Clusters were sampled in all the study islands. We will discuss these results in the light of anthracnose disease management on yams.
Genetic diversity levels were high, as expected given the propensity of microsatellite markers to mutate. Furthermore, at field and population levels, allelic diversity was more important than clonality, and most sampled strains had distinct genotypes. This study confirms earlier results based on RAPD markers in the same patho-system (yam/Colletotrichum) [37] or in other crops [26]. This stands in sharp contrast to most crop diseases, where strains are fairly homogenous, genetically speaking, when epidemics declare regionally (e.g., [44][45][46]). Here, clonality accounted for only approximately 10% of strains, and clones were often sampled as few units (multilocus genotypes shared between a few strains only, three on average). Moreover, clones were more often sampled between than within fields (thus confirming the importance of dispersal as a structuring factor for genetics in the species complex, see below). Most importantly, a low level of clonality between strains is indicative of a high prevalence of sexuality and recombination compared to asexual multiplication, despite a high capacity for multiplication via conidia from necroses. This observation would be expected if broad strain reservoirs accumulating fungal diversity co-occur with local contamination dynamics, which seems to be the case with Colletotrichum, as its prevalence in natural flora was shown to be particularly high [5]. This pattern of diversity is at odds with most fungal diseases, where pathogenic strains are often genetically homogenous and spread regionally on susceptible cultivars. In our case, the genetic pool of strains is highly diverse, and as a result, putative aggressive strains can declare new epidemics at any time. We should expect direct consequences for agriculture, since this means the pool of potentially pathogenic strains is dramatically large, and efforts toward pyramiding resistance genes in varietal breeding may be circumvented faster [47], thus reducing the durability of disease management via increased disease resistance. A possible solution to this issue would be a carefully planned varietal turnover at a regional level, to reduce the impact of the local pathogenic load and decrease anthracnose risk.
Migration rates were reasonably high, yet varied considerably between constitutive populations, segregating situations where intra-deme dispersal was lower than long-distance migration from situations where local dispersal was greater than migration. Overall, these results suggest strong metapopulation dynamics [48], with some key populations contributing heavily to genetic composition at broader scales (such is the case of Barbados in our study). Monitoring these source populations, especially for strain aggressiveness, may be an important strategy in disease control and management [49]. Long-distance dispersal was shown to occur in the region (Mexico to Trinidad, see [30]), whereas local dispersal may not be as pervasive. Indeed, our results suggest intra-population dispersal may be fairly low: the population of Basse Terre has the lowest local dispersal, for example. This population is geographically characterized by denser tropical humid and altitude vegetation, possibly implying that forested vegetation increases the viscosity of the landscape in terms of spore dispersal (trees as spore traps hypothesis [50]), or increases local adaptation requirements compared to drier areas, or both. If this hypothesis withstands scrutiny, then a simple disease control strategy might be to increase the recourse to trees in agriculture, for example by planting more hedges, even around fields, although vegetation margins can become inoculum sources following fungal establishment [35]. Lastly, long-distance dispersal is an important driver of the system. The Caribbean region is subjected to hurricane seasonality (during the rainy season), so that Colletotrichum species may be seen as "storm riders" following dominant winds (northwards) as migration routes. This reinforces the importance of monitoring source populations for disease risk estimation. A further hypothesis regarding wind-based long-distance dispersal, not accounted for in the case of anthracnose to the best of our knowledge, is that the Caribbean region is also casually and seasonally subjected to sand mists originating from Sahelian West Africa (during the dry season, or Lent) [51]. Since sand mists are known to help fungal spores travel in addition to sand [52], West Africa could be another region contributing to the genetics of Colletotrichum species in the Caribbean, and this phenomenon should be the focus of further research, especially in Ivory Coast, where D. alata is also the dominant cultivated yam, as in the Caribbean islands [53]. In summary, long-distance dispersal is a very important component of anthracnose dynamics [30], and can possibly jeopardize management and control practices. Possible solutions may involve creating agricultural environments with decreased dispersal, such as a greater recourse to hedges and forested areas.
Principal component analysis yielded three genetic clusters representing putative species on yams, all broadly distributed across the sampled islands and coexisting locally at field level, though one species was seemingly not sampled in Martinique. Interestingly, one of these clusters stands apart, while the two others show signs of genetic admixture and recombination for a significant number of sampled strains. It is worth noting that Colletotrichum spp. are known to casually recombine [54,55] and that species delineation, as in other fungi, is sometimes a blurry concept. In our initial dataset, three strains amplified both fragments allegedly delineating two species complexes (C. gloeosporioides and C. acutatum) [55], though both are known to be closely related and are sometimes a source of taxonomic confusion if the shape of conidia is the only criterion. Here, our results show that recombination might be more frequent between putative species within complexes (as an approximation, 40/560 ~ 7.14%, nearly the same level as clonality in the study sample). Colletotrichum species are indeed notoriously hard to define, and while the morpho-species approach developed by von Arx [13] allows a gross delineation of complexes, the current standing involves sequencing to reach an 'adequate' taxonomic evaluation. Our results nevertheless suggest that neither the morpho-species nor the sequencing approaches [56] would be sufficient to delimit real species entities, being either too liberal (morpho-species line) or possibly too conservative (sequencing/barcoding) in assessing the real diversity of Colletotrichum species (and therefore overestimating diversity in the Genus). Our team usually favours a morpho-species approach to understand the ecology of the C. gloeosporioides species complex (e.g., [5,6,36]), and we thus call for more flexibility and the inclusion of a diversity of stances and viewpoints regarding the complex issue of the Colletotrichum genus worldwide. Evaluating the frequency of recombination events both within and between species complexes is a promising avenue of research in our quest to understand the biology of these important crop pests.
Conclusions
Strains from the C. gloeosporioides complex sampled in Water Yam fields in the Lesser Antilles were genetically highly diverse and demonstrated a dominance of sexual reproduction over clonality and asexual multiplication. Lesser Antilles populations are structured, with important long-distance migration and viscosity in local dispersal, probably due to vegetation acting as a natural barrier. Some populations (Barbados) are propagule sources at a regional scale. Three species coexist on yams, but there is strong evidence of recombination between some of them, furthering the importance of sexual events in the dynamics of recombination in the Genus and increasing diversity in rich reservoir pools, thus raising anthracnose disease risk. The potential metapopulation functioning in the Caribbean suggests that anthracnose control will be difficult to sustain only by increasing genetic resistance in varieties, though potential solutions to manage risk include: i/ careful monitoring of strain ability to inoculate yams aggressively, especially in source populations; ii/ increasing the viscosity of dispersal in the landscape by increasing vegetation/tree cover; and iii/ a regional varietal scheme allowing rotation of cultivars with different resistance levels to avoid local matching of Colletotrichum strains and yams. for helping us better design Figure 2. We are thankful to Sébastien Guyader for discussions about the nature of species within the C. gloeosporioides complex based on his ongoing work. We really appreciated comments from reviewers that greatly improved the overall quality of the manuscript. | 2023-05-29T15:09:25.295Z | 2023-05-27T00:00:00.000 | {
"year": 2023,
"sha1": "dcb34e471d07b06b5d7fac06a2d8b1e246304520",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2309-608X/9/6/619/pdf?version=1685176682",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad2d715d367f325e4ca745097d405dd4f55294f7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
253255074 | pes2o/s2orc | v3-fos-license | SpectroMap: Peak detection algorithm for audio fingerprinting
Audio fingerprinting is a technique used to identify and match audio recordings based on their unique characteristics. It involves creating a condensed representation of an audio signal that can be used to quickly compare and match against other audio recordings. The fingerprinting process involves analyzing the audio signal to extract certain features, such as spectral content, tempo, and rhythm, among other things. In this paper, we present SpectroMap, an open-source GitHub repository for audio fingerprinting written in the Python programming language. It is composed of a peak search algorithm that extracts topological prominences from a spectrogram via time-frequency bands. We introduce the algorithm's functioning with two experimental applications on a high-quality urban sound dataset and on environmental audio recordings, describing how it works and how effectively it handles the input data. Finally, we provide two Python scripts that reproduce the proposed case studies in order to ease the reproducibility of our audio fingerprinting system.
Introduction
In computer science, fingerprinting is a procedure that summarizes the input data by mapping it to a much shorter item [1]. Similarly to human fingerprints, such a transformation contains the essential information and properties of the original data, so it can be used to identify it among other samples [2].
Regarding the acoustic field, audio fingerprinting is understood as an algorithm that extracts the main components taking into account the perceptual characteristics of the audio [3]. Most of the time, these techniques are applied over the spectrogram representation of the signal. Then, the pattern extraction is conducted by means of time domain [4], frequency domain [5], or a combination of both called time-frequency domain [6]. As far as implementation is concerned, there are some techniques created for this purpose, although they have their own advantages and limitations. The phase-based [7,8] and chroma-based [9,10] fingerprinting techniques are widely used. In regard to data transformations, wavelets have been very effective in this field [11,12,13]. Nonetheless, this is not the only feature utilized for this purpose [14,15,16,17].
For applicability purposes, [18] developed the idea of a constellation map for Shazam Entertainment in order to implement an audio search algorithm. Over the years, many different techniques have been developed [19]. It is worth mentioning that their implementation in machine learning tasks is very useful for reducing training costs, thus enabling faster implementations. For example, we can also find recognition of activities of daily living via audio fingerprinting [20]. This paper presents the SpectroMap algorithm for creating audio fingerprints from a given audio signal. The method has been designed to deal with both raw audio excerpts and pre-processed spectrograms. The main objective is to cover the audio matching task, because it can be considerably time-consuming [21,22]. In essence, this paper is motivated by our previous work [23], where audio fingerprinting was applied in depth to music plagiarism.
Methodology
The algorithm presented in this paper has been designed to carry out the entire process required to obtain the fingerprint of a given audio signal. In this manner, we provide open-source software capable of large-scale signal processing. Depending on the objective of the user, we can use a raw signal or an already computed spectrogram as input when initializing the SpectroMap object. In case we decide to use raw signals, we can also include the required parameters for the signal processing step. Thereupon, the algorithm computes a local search to extract the topological prominences of the given spectrogram. The architecture of SpectroMap is depicted in Figure 1. In this section, we detail the two steps that perform the fingerprint extraction of our algorithm.
Signal processing
With the aim of implementing fingerprint extraction for a given musical signal X_t, we have designed an algorithm that computes a global peak detection over the associated spectrogram to give us its constellation map. Let N_FFT and N_O be the length of the Fast Fourier Transform (FFT) window and the number of elements to overlap between segments, respectively. We first compute the spectrogram of the signal, S_tfa, using the Hamming window, in order to obtain the (time, frequency, amplitude) vectors under these two parameters. Such a representation contains the amplitude spatial information to analyze. Our search engine determines whether a time-frequency point can be considered locally relevant according to its neighborhood. The detection is then processed along a required band. Let {T_i}_{i=1}^n and {F_j}_{j=1}^m be the time and frequency bands of the spectrogram carrying the amplitude of the event, so that the spectrogram S_tfa can be rewritten as (T_i)_{i=1}^n in terms of its rows and as (F_j)_{j=1}^m in terms of its columns. As part of the search engine, we define two windows, φ_T^{d_T} and φ_F^{d_F}, with respective lengths d_T and d_F, to process the local pairwise comparisons; their functionality is to extract a number of elements of the band and return the local maximum. Without limiting the generality of the foregoing, we can mathematically describe the time-band window mechanism, with a length 0 < d_T ≤ n and structure T_i = (T_i^1, ..., T_i^n), as

φ_T^{d_T}(T_i) = { max(T_i^k, ..., T_i^{k + d_T - 1}) : k = 1, ..., n - d_T + 1 }    (1)

per each band i ∈ {1, ..., n}.
When we group all the values, we drop those elements that have an equal index to avoid duplicates. Hence, we can group the window outputs of each band to create the set

Φ = ( ∪_{i=1}^n φ_T^{d_T}(T_i) ) ∩ ( ∪_{j=1}^m φ_F^{d_F}(F_j) )    (2)

This way, we get the topologically prominent elements per each feature vector. Owing to equation (1), it is easy to note that even though there are n − d_T + 1 matches, the window φ_T^{d_T}(T_i) may contain a smaller number of elements whenever d_T > 2. Depending on how restrictive we need to be, we can proceed with just one of the bands or combine them to create a more stringent search with greater distortion resistance, since only the peaks that are prominent in both directions are returned. Finally, the algorithm merges all the band-dependent peaks, as shown in equation (2), to give us the total number of spatial points that determine the so-called audio fingerprint. A graphical example of an audio fingerprint is given in Figure 2. Figure 2: Example of the spectrogram of an acoustic signal with its fingerprint stacked. The magnitudes are presented as seconds on the X-axis, Hertz on the Y-axis, and decibels depicted as a color map.
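To make the band-window mechanism concrete, the following is a minimal NumPy sketch of equations (1) and (2); the function names and the brute-force sliding window are our own illustration under these assumptions, not the repository's implementation.

```python
import numpy as np

def band_peaks(band, d):
    # Indices that are the maximum of at least one length-d sliding
    # window over a 1-D band; duplicate indices are dropped via the set.
    idx = set()
    for k in range(len(band) - d + 1):
        idx.add(k + int(np.argmax(band[k:k + d])))
    return idx

def fingerprint(S, d_t, d_f, combine=True):
    # Peaks of a spectrogram S (frequency x time): local maxima along the
    # time bands (rows) and the frequency bands (columns); intersecting
    # the two sets keeps only peaks prominent in both directions.
    time_peaks = {(i, j) for i in range(S.shape[0]) for j in band_peaks(S[i], d_t)}
    freq_peaks = {(i, j) for j in range(S.shape[1]) for i in band_peaks(S[:, j], d_f)}
    return time_peaks & freq_peaks if combine else time_peaks | freq_peaks
```

Intersecting both sets mirrors the stricter combined search described above; taking the union instead corresponds to the looser single-band merge.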
Algorithm
Our search engine, which powers SpectroMap, processes audio signals and returns an output file with the (time, frequency, amplitude) peaks detected in their spectrogram representation. Thus, it can be combined with the Mercury software to complete an in-depth comparison between music excerpts. Figure 3 gives a cursory description of the performance of SpectroMap. The algorithm basically batches the files by means of the following steps:

Step 1 Decide the window to use and set the parameters N_FFT and N_O.
Step 2 Read the audio file to get its amplitude vector and its sample rate.

Step 3 Compute the spectrogram through the associated Fourier transformations.
Step 4 Set a fixed window length (d_T, d_F, or both) for the pairwise comparisons.
Step 5 Choose the settings to proceed with the peak detection over a selected band or a combination of both.
Step 6 Create an identification matrix, a binary matrix with the same shape as the spectrogram marking the positions of the highlighted prominences.
Step 7 Extract such elements and create a file with the (time, frequency, amplitude) vectors.
Regarding Step 5, the authors highly recommend selecting both bands to perform the peak detection, since the output is more filtered and spatially consistent. For the remaining steps, the choice is a personal decision that depends on the scope of the research. It is worth mentioning that the limitations of the method depend on the functionality of the Signal module of the SciPy library. Both installation and usage are described in our GitHub repository [24].
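As a rough illustration of Steps 1-7, the sketch below builds the identification matrix with SciPy; the helper name is ours, the rectangular neighbourhood filter stands in for the two band windows applied jointly, and the parameter defaults are arbitrary assumptions rather than the repository's settings.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def extract_fingerprint(path, nfft=1024, noverlap=512, d_t=32, d_f=32):
    # Steps 1-2: set the window parameters and read the audio file.
    fs, x = wavfile.read(path)
    if x.ndim > 1:                      # fold stereo down to mono
        x = x.mean(axis=1)
    # Step 3: spectrogram with a Hamming window.
    f, t, S = spectrogram(x, fs=fs, window='hamming',
                          nperseg=nfft, noverlap=noverlap)
    # Steps 4-6: identification matrix marking local maxima in a
    # (d_f x d_t) neighbourhood, i.e. peaks along both bands at once.
    ident = (S == maximum_filter(S, size=(d_f, d_t)))
    # Step 7: (time, frequency, amplitude) triplets of the peaks.
    fi, ti = np.nonzero(ident)
    return np.column_stack((t[ti], f[fi], S[fi, ti]))
```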
Case study: Processing of environmental and urban sound events
The aim of this section is to present an experiment in which the performance of SpectroMap is analyzed in terms of computational cost. To this end, we have evaluated the speed of our algorithm over two datasets. On the one hand, Urban Sound 8K [25] is an audio dataset that contains 8732 labeled sound excerpts. The files are pre-sorted into ten folds in order to help in the reproduction and comparison of machine-learning experiments. The samples have a duration of ≈ 4s and they are classified as urban sounds from 10 classes: air conditioner, car horn, children playing, dog bark, drilling, engine idling, gunshot, jackhammer, siren, and street music. On the other hand, ESC-10 [26] is a labeled collection of 400 environmental audio recordings suitable for benchmarking methods of environmental sound classification. In particular, ESC-10 is a subset of the major dataset ESC-50, which contains 2000 audio excerpts with a total size of ≈ 600MB publicly available (https://github.com/karolpiczak/ESC-50#download).
For both datasets, the most common use is the classification task via supervised AI models. We can find robust performance (94.6% accuracy) utilizing CNN architectures [27] for the Urban Sound 8K set, along with other applications in low-cost monitoring devices [28]. For the ESC-50 dataset, and hence ESC-10, it has been shown that deep architectures such as Transformers [29] and CNNs [30] can learn with high precision from this kind of audio source, with 97.00% and 96.70% accuracy, respectively. Table 1 presents the computational cost associated with the audio fingerprinting extraction task. All timings are reported in seconds. For both datasets, peak detection is timed per folder and per audio sample. The Python script utilized to obtain Table 1 is displayed in Appendix A. The computer that conducted the experiments was equipped with an AMD Ryzen 7 3700u with 16GB RAM running Ubuntu 20.04.3 LTS.
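The per-fold timings in Table 1 can be reproduced in spirit with a harness like the following (assuming an extractor such as the sketch above); the exact measurement script is the one given in Appendix A.

```python
import time
from pathlib import Path

def time_fold(folder, extractor):
    # Files (iterations) per second over one dataset fold.
    files = sorted(Path(folder).glob('*.wav'))
    start = time.perf_counter()
    for f in files:
        extractor(str(f))
    elapsed = time.perf_counter() - start
    return len(files) / elapsed, elapsed
```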
Graphical representation of the outputs for the environmental sound dataset
When we conduct the signal processing stage to extract the audio fingerprint of the audio samples, the representation of the fingerprints per class gives us significant information about the events. In order to give an overview of such a depiction, we have analyzed the classes of the ESC-10 dataset. The main point is to present the (time, frequency) coordinates that are relevant in terms of class membership. On the one hand, Figure 4 contains a random sample per each of the 10 available classes. On the other hand, once we have run our algorithm, we have stored the coordinates that represent a peak within the fingerprint of each sample as a sequence {(t_i, f_i)}_{i=1}^{I_n}, so that each fingerprint contains I_n topological prominences. With that information, we have generated a global class fingerprint consisting of natural entries that count the number of times a coordinate has been selected as a peak across the samples of the same class. With the same notation as (2), we can define the global class fingerprint for each class k as

FP_k = Σ_{i=1}^{N_k} 1(Φ_i^k)

where the summation stands for the matrix sum operator, 1 for the matrix characteristic function of each fingerprint, Φ_i^k for the i-th fingerprint of the class k, and N_k for the number of elements in the class k. Considering all the mathematical notation aforementioned, Figure 5 shows each of the FP_k in a viridis color palette, where brighter colors have a major impact in the representation of FP_k.
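A minimal sketch of the global class fingerprint FP_k follows: the binary peak matrices of one class are summed entrywise and rendered in viridis. The random matrices below are synthetic stand-ins for real fingerprints, and the shapes are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

def global_class_fingerprint(binary_fps):
    # Entry (i, j) counts how many samples of the class selected
    # coordinate (i, j) as a peak (the matrix sum of equation FP_k).
    return np.sum(np.stack(binary_fps, axis=0), axis=0)

rng = np.random.default_rng(0)
fps = [rng.random((129, 64)) > 0.98 for _ in range(40)]  # fake class k
fp_k = global_class_fingerprint(fps)

plt.imshow(fp_k, origin='lower', aspect='auto', cmap='viridis')
plt.xlabel('time bin'); plt.ylabel('frequency bin')
plt.colorbar(label='times selected as peak')
plt.show()
```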
Discussion and future work
The SpectroMap algorithm has shown great performance when dealing with real-world acoustic scenarios. In the case studies conducted, our algorithm took a total of 899.03" (14' and 59.03") for the Urban Sound 8K and 71.50" (1' and 11.50") for the ESC-10 datasets. This can be summarized as 9.82 ± 1.14 and 5.59 ± 0.03 iterations per second on average, respectively. Therefore, SpectroMap can be considered an effective, publicly available technique for audio fingerprinting.
One of the major advantages that arise from our experiments is that we can efficiently process audio signals (or even many other kinds of signal) for further analysis. Additionally, all the stages through which the acoustic sample is transformed are clearly defined, thus removing any kind of black box. From these contributions, a potential future work would be to approach machine learning tasks by means of distance measures or similarity functions between audio samples. On the one hand, we could approach classification problems with a strategy similar to the KNN algorithm [31,32,33]. Basically, we would predict the class of some audio based on its distance to the already known fingerprints. Another alternative would be the use of AutoEncoders [34], with a semi-supervised approach, that reconstruct some audio [35] from the information of a given set of fingerprints. On the other hand, we could perform an unsupervised strategy to determine the different sound sources based on the distribution that they present, using methods such as K-means [36] or DBSCAN [37].
Finally, it is important to remark that Hertz has been used as the frequency scale for simplicity. Our main purpose has been to introduce the SpectroMap algorithm and show its applicability and performance; we therefore conducted a basic signal processing step to convert a signal into a spectrogram. However, there exist many choices to obtain different scales or units. For instance, the Mel scale [38,39] would be a great alternative to obtain the perceptual scale of pitches of the events studied. Further applications can be found in [40] and [41].
Conclusions
We have introduced SpectroMap, a peak detection algorithm whose main application is the extraction of audio fingerprints. The algorithm processes not only raw signals but also preprocessed spectrograms, which is a major advantage in this field. Apart from a detailed explanation of the procedure and structure of the algorithm, we have also evaluated its performance on state-of-the-art datasets for audio analysis. It has been shown that SpectroMap is an effective and fast algorithm, with an average of 1.340 and 3.336 iterations per second for the datasets presented in the case study (Urban Sound 8K and ESC-10). Further interpretations and representations have been shown in order to give a better understanding of the outputs of our algorithm. The code and Python implementation of the package have been presented in a straightforward manner in order to ease applicability and reproducibility. Even though we have not emphasized the underlying application to audio signal comparison, an instance of such an application can be found in our previous paper [23].
Appendix A: Python implementation
This section is dedicated to the application of the SpectroMap algorithm to worked examples.
In particular, the module is designed to process either a raw signal or a spectrogram. For the first case, we make use of the spectromap object. For the second case, we apply the peak_search function. In addition, the script that reproduces the results shown is displayed at the end for reproducibility purposes. The library was written for Python 3.8 and its usage depends only on the NumPy 1.19 and SciPy 1.6.3 packages. The repository is under the GNU General Public License v3.0. | 2022-11-03T01:15:45.408Z | 2022-11-02T00:00:00.000 | {
"year": 2022,
"sha1": "5fd2d34bf1ce6f4f43464e7989f3ee30b255bb2d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8f5f442d805f9759cd6dc7003efb922040ae3f61",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
3735224 | pes2o/s2orc | v3-fos-license | IFITM3, TLR3, and CD55 Gene SNPs and Cumulative Genetic Risks for Severe Outcomes in Chinese Patients With H7N9/H1N1pdm09 Influenza
Summary IFITM3 and TLR3 SNPs are associated with fatal clinical outcomes in Chinese patients with avian (H7N9) or pandemic (H1N1pdm09) influenza virus infections, and the risks are cumulative. Our findings have important public health and clinical implications for the at-risk populations.
but located near exon 4, the region encoding transmembrane signal induction), have been linked to impaired signaling function and weakened host responses [8,20,21]. Likewise, SNPs of the TLR4 gene have been linked to impaired innate immunity against respiratory viruses [22,23]. CD55 inhibits the formation of the C3 and C5 convertases responsible for complement activation and inflammation, mechanisms implicated in fulminant influenza pneumonitis. The SNP of CD55 rs2564978 has been associated (indirectly) with decreased promoter activity and a lowered protection level [8,24,25]. Results of these SNPs' associations with clinical severity, however, appear to be inconsistent and are likely confounded by patients' ethnicity, clinical characteristics, antiviral treatment, and small sample sizes; further, the impacts on survival had not been examined [17,20,21,25-27]. Notably, some of these genetic variants are much more prevalent among East Asian populations, thus potentially exerting stronger impacts than in populations where their frequencies are low (eg, those with European ancestries) [15,17,24]. Also, it is unknown whether their effects are cumulative, which has important implications for genetic risk evaluation.
In this study, we have adopted a targeted approach to simultaneously investigate the associations of the innate-immunity-related IFITM3, TLR, and CD55 gene SNPs (which had preliminary human data support) with influenza clinical outcomes in a large Chinese cohort. Avian (H7N9) and pandemic (H1N1pdm09) influenza virus infections were studied. Impacts on survival were examined under different genetic models; effects of important clinical confounders were adjusted in multivariate analyses. Joint effects of the genetic variants were investigated with the use of a genetic risk score. Our data may improve the understanding of the genetic risk for severe influenza, and provide an important basis for future research.

METHODS

We studied SNPs of the targeted host genes in H7N9 and H1N1pdm09 influenza cases that were consecutively diagnosed in 4 participating institutes in mainland China (Guangdong, Shanghai, Beijing) and Hong Kong during 3 respective seasonal outbreaks (H7N9, 2013-2015; H1N1pdm09, 2011−2014; Table 1 footnotes). Virological features, and procedures related to laboratory diagnosis and clinical management of influenza in these medical units (all being urban hospital settings), have been described in detail [2,5,28-31]. Briefly, patients presenting with symptoms of acute respiratory tract infection during the seasonal outbreaks were tested for influenza virus infections (ie, prospectively diagnosed) with molecular assays, regardless of perceived severity. The inclusion criteria were: polymerase chain reaction (PCR)-confirmed H7N9 or H1N1pdm09 virus infection, age ≥18 years, and Chinese ethnicity (Chinese populations residing in Hong Kong, Guangdong, Shanghai, and Beijing are mostly of Han ancestry; >95%-99%); there was no exclusion criterion. Patients' original respiratory tract samples collected at presentation for diagnostic purposes, which contained virus-infected epithelial cells, were retrieved for host gene SNP and viral RNA load studies; identical methodologies were used across institutes (see below). Clinical data, including patient characteristics, disease severity, and outcomes, were collected using a standardized research database (to ensure identical definitions of variables), as previously described [5]. All patient identifying information was removed from the dataset, and only anonymous data were used for analysis. Ethics approvals for this study were obtained from the Institutional Review Boards (IRB) of all participating institutes.
SNP Genotyping and Quality Control
Viral ribonucleic acid (RNA) and host DNA were coextracted directly from the samples using the PureLink Viral RNA/DNA Mini kit (Thermo Fisher) as per the manufacturer's instructions. No DNase/RNase digestion step was performed [32]. Genotyping of 5 SNPs, namely IFITM3 rs12252, CD55 rs2564978, TLR3 rs5743313, and TLR4 rs4986790 and rs4986791, was performed by Sanger sequencing of PCR amplicons. In brief, host DNA was subjected to gene-specific PCR amplification using Phusion High-Fidelity DNA polymerase (New England Biolabs). The PCR primers used are provided in Supplementary material 1 [15,20,23,24]. PCR amplicons (300-600 base pairs in size) were column-purified with the QIAquick PCR Purification Kit (Qiagen), followed by Sanger sequencing in both directions using the PCR primers. DNA chromatograms were inspected and genotypes were called manually. With this method, genotyping results were successfully obtained in 90.5%, 88.4%, 84.4%, 87.6%, and 87.6% of cases for IFITM3 rs12252, TLR3 rs5743313, CD55 rs2564978, and TLR4 rs4986790 and rs4986791, respectively. The success rates were not significantly different among the target genes and participating institutes. At the time of writing, a study on the IFITM3 SNP was published, reporting a similar genotyping success rate of 87% using archived respiratory tract samples collected from patients with influenza-like illnesses [27]. Genotyping accuracy with respiratory samples was confirmed by the results obtained using paired peripheral blood mononuclear cell samples (n = 10), which showed 100% concordance. The SNP results were tested for Hardy-Weinberg equilibrium by the exact test using PLINK (a Bonferroni-corrected P value of >.01 indicated no significant deviation) [33]. To examine representativeness, the observed allele frequencies in this cohort were compared to the 1000 Genomes general population data on Han Chinese (and East Asians) (www.1000genomes.org/home; last accessed on 18 September 2016) (Supplementary material 2).
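For illustration, a chi-square goodness-of-fit check for Hardy-Weinberg equilibrium can be sketched as below. Note that the study used PLINK's exact test, for which this is only a large-sample stand-in, and the genotype counts here are made up.

```python
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    # Chi-square goodness-of-fit test against Hardy-Weinberg proportions
    # (1 degree of freedom: 3 genotype classes, 1 estimated allele freq).
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # frequency of allele A
    exp = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    stat = sum((o - e)**2 / e for o, e in zip([n_AA, n_Aa, n_aa], exp))
    return stat, chi2.sf(stat, df=1)

print(hwe_chi2(120, 130, 25))  # synthetic counts for one SNP
```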
Influenza viral RNA Quantification and Standardization
A 1-step, probe-based real-time quantitative reverse transcription-polymerase chain reaction assay targeting the matrix (M) gene of the virus genome was used to measure the viral RNA load in the same respiratory samples, using methods that were standardized across the participating institutes (Supplementary material 1) [34,35].
Outcome Measures and Data Analysis
The primary outcome of this study was all-cause death. The secondary outcome was acute respiratory failure, defined as acute hypoxemic respiratory failure requiring mechanical ventilation, noninvasive positive-pressure ventilation, and/or oxygen therapy for vital life support. We also analyzed hospitalization requirement as an indicator for severity among the H1N1 pdm09 influenza patients; this was not performed for the H7N9 patients, as nearly all were admitted for clinical care and isolation.
The Student t test, χ2 test with continuity correction, and the Fisher exact test were used for univariate comparisons based on data distribution. Cox proportional hazards regression analyses were used to examine independent associations between the genetic variants and death (censored at 30 days from time of presentation), which were tested under the additive, dominant, and recessive genetic models. Multivariate models were constructed to adjust for the effects of confounders, including age, gender, comorbidity, use of neuraminidase inhibitor (NAI) treatment, and influenza subtype (see Table 1 footnotes for definitions) [34,35]. The adjusted hazard ratio (aHR) and the 95% confidence interval (95% CI) were reported for each explanatory variable; an aHR >1 indicated a higher probability of death. The largest test statistic (MAX statistic) among the 3 genetic models was chosen as the best-fitting model; the experiment-wise significance of the MAX statistic was estimated from its empirical distribution under the null hypothesis after performing 10 000 permutations of genotypes of the SNPs (P_perm), to correct for multiple comparisons [36,37]. Pairwise gene-gene interactions were evaluated by including the main effects, the interaction term (product) of the SNPs, and the covariates in the Cox regression models. In addition, we tested for the joint effect of all significant loci (under corresponding genetic models) on the risk of death by computing a genetic risk score (GRS) for each individual using the simple count method. Assuming a similar and independent effect between loci, the GRS was calculated by summing the score of the risk genotype for each SNP based on the most significant genetic model. The significance of the trend was tested by the Cox regression model, using the GRS as an independent variable with adjustment for clinical confounders [36,37]. The adjusted survival curve stratified according to the GRS was constructed for graphical presentation. Statistical analyses were performed using SPSS for Windows v.22 (SPSS, Chicago, IL).
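A minimal sketch of the simple-count GRS and its Cox regression follows, using the Python lifelines package in place of SPSS and entirely synthetic data; the column names and covariate set are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package

rng = np.random.default_rng(2)
n = 275
df = pd.DataFrame({
    # 1 if a risk genotype (e.g. IFITM3 CC, TLR3 CC) is present
    'ifitm3_cc': rng.integers(0, 2, n),
    'tlr3_cc': rng.integers(0, 2, n),
    'age': rng.normal(55, 15, n),
    'male': rng.integers(0, 2, n),
    'time': rng.exponential(20, n).clip(max=30),  # censored at 30 days
    'death': rng.integers(0, 2, n),
})
# Simple-count genetic risk score: one point per risk genotype carried.
df['grs'] = df['ifitm3_cc'] + df['tlr3_cc']

cph = CoxPHFitter()
cph.fit(df[['grs', 'age', 'male', 'time', 'death']],
        duration_col='time', event_col='death')
cph.print_summary()  # aHR for the GRS, adjusted for age and sex
```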
Associations With the Primary Clinical Outcome
We observed significantly higher proportions of the IFITM3 CC (54.5% vs 33.2%; P = .02) and TLR3 CC (93.3% vs 76.9%; P = .04) genotypes in fatal influenza infections (Figure 1A and 1B). Their allele frequencies were analyzed according to the primary outcome measure of death/survival, and tested under different genetic models (Table 2). Our results showed that the IFITM3 homozygous CC genotype was significantly associated with increased risk of death in the unadjusted recessive model, which remained significant after adjusting for major clinical confounders (aHR 2.78, 95% CI 1.29−6.02; P = .01). Although the TLR3 homozygous CC genotype was insignificant in the unadjusted models, it was found to be significantly associated with increased death risk in both the recessive (95% CI 1.11−21.06; P = .04) and additive (aHR 4.55, 95% CI 1.09−19.03; P = .04) genetic models in multivariate analyses. The MAX statistic indicated that the recessive model was the best-fitting genetic model for both SNPs (P_perm < 0.05). We did not find a significant association between the CD55 alleles and the primary outcome. Findings were consistent between the 2 influenza subtypes (Supplementary material 3).
Associations With Clinical Severity and Viral Load
Consistent with findings on the primary outcome, the IFITM3 CC genotype was shown to be independently associated with the development of acute respiratory failure in hospitalized patients (overall, 44.3% vs 25.5%; P = .01; adjusted odds ratio [OR] 2.10, 95% CI 1.10−4.01; P = .03) (Table 3b and Table 4). For the TLR3 CC genotype, an insignificant trend was observed among the H7N9 influenza patients (80.0% vs 61.5%). There was an association between the CD55 TT genotype and requirement for hospitalization (55.6% vs 38.9%; P = .04; adjusted OR 2.77, 95% CI 1.21−6.36; P = .02) (Table 4). There was a tendency toward higher viral load in patients harboring the risk genotypes, even when presenting late in the course of illness; however, the analysis results did not reach statistical significance (P > .1) (Supplementary material 4). (Table footnote: The HRs and 95% CI were reported for the "risk allele," as calculated using Cox regression models. Analyses were based on data from 275 patients, because no significant difference in allele distribution was found between the 2 influenza subtypes.)

DISCUSSION

We found significant associations between SNPs of the IFITM3 rs12252 and TLR3 rs5743313 genes and outcomes of avian
(H7N9) and pandemic (H1N1pdm09) influenza in our Chinese cohort. There was increased death risk with the respective homozygous CC genotypes of IFITM3 and TLR3, and the effects were cumulative. Our results provide evidence for genetic risks for severe influenza disease, which may have important public health and clinical implications. Everitt and colleagues [14] had first described over-representation of IFITM3 CC (5.7%), an uncommon genotype in populations of European ancestry (0.3%), among patients hospitalized for H1N1pdm09 influenza in the United Kingdom (n = 53). Several subsequent studies from Europe reported insignificant associations with severe H1N1pdm09 infection, but the numbers of cases detected with the genotype were very small (0.7%−2.4%) [17,25-27]. In Chinese and East Asian populations, however, the IFITM3 CC genotype is known to be much more prevalent (25%−44%); and a study from China had reported its over-representation (69%) among hospitalized patients with H1N1pdm09 pneumonia (n = 32) [15]. Our study adds that this genotype might actually predict a fatal outcome in influenza patients. Its frequency was about 21% higher among the fatal cases (54.5%, vs 33.2% in survivors); the death risk was found to be significantly increased by nearly 3-fold, even after adjustment for clinical confounders. The recessive model for the C-allele was confirmed to be the best-fitting genetic model [15,26]. A consistent result was also shown for clinical severity, as indicated by the development of acute respiratory failure. The mechanism of how this SNP affects the disease course is incompletely understood and likely complex. It has been suggested that C-allele homozygosity can impair IFITM3 function, leading to reduced viral clearance, which, in turn, aggravates host inflammatory responses [14,15]. Elevated viral load coupled with increased proinflammatory cytokines has been described in patients harboring the CC genotype with progressive H7N9 infections, but the study was small (n = 18) [16].
In this study, we first report a possible association between the TLR3 gene's SNP and influenza clinical outcomes. There was over-representation of the TLR3 homozygous CC genotype in fatal infections (93.3%, vs 76.9% among survivors; in European populations its prevalence is lower, at 50%−65%); and a significant increase in death risk was found in the recessive and additive models for the major C-allele after adjustment for confounders. To the best of our knowledge, there was only 1 small-scale study (n = 51) describing a univariate association of the TLR3 rs5743313 CT genotype with the risk of developing influenza pneumonia in 18 European children [20]. The apparent discrepant result may be attributable to ethnicity difference, as observed in other diseases linked to this SNP, and/or the patients' age and disease stage [38]. Intriguingly, TLR3 function had been shown to be detrimental to the survival of mice with influenza virus-induced pneumonia despite a lowered viral load, owing to the induction of exuberant inflammatory responses and excessive lung damage [39]. A mechanistic study on how the SNP could have affected TLR3 signaling function, and its impact on different phases of infection, is indicated [8,19,40]. Consistent with a recent report, we also noted a relationship between CD55 T-allele homozygosity (more common among Asians) and clinical severity (hospitalization), though there was no significant association with survival [24,25]. The TLR4 loci were shown to be nonpolymorphic in this cohort; emerging data have likewise suggested a lack of association between TLR4 SNPs and influenza severity, unlike in pediatric respiratory syncytial virus disease [20,23].
Although the risk associated with individual genetic variant may appear moderate, our data showed that the effects can be cumulative. Fatality of patients with TLR3 CC plus the IFITM3 CC genotype was 23%, whereas patients with either or none of these risk genotypes had fatality rates of 11% and 3%, respectively. The death risk was shown to be incremental in GRS analysis (aHR 3.5, per risk genotype); the presence of both genotypes conferred an almost 10-fold higher risk than those with neither. These results strongly suggest that a combination of host genetic factors ("genetic background") may influence the clinical course and outcomes of influenza, along with known clinical and virologic factors [7,8,9,41,42]. Our list of candidate genes/SNPs is not exhaustive; as experimental and genome-wide association studies continue to uncover these determinants, more variables/ combinations, including those related to adaptive immunity are expected to be considered in genetic risk evaluation in severe influenza [7,8,9,10,21,25,43,44].
Our data have important implications. Owing to their high frequencies in Chinese and East Asian populations, the proportion of disease burden attributable to these genetic variants could be substantial. An earlier study on 83 Chinese influenza patients had put the estimate of the population-attributable risk percentage (PAR%) of IFITM3 CC for severe nonfatal/fatal infections at 54% (in contrast to 5% in European populations) [15]. We calculated the PAR% of IFITM3 CC for influenza death in this mostly hospitalized Chinese cohort, which was around 36% (95% CI 10-70%); the combined risk with TLR3 CC was higher (Supplementary material 5). Precise PAR estimation would require a much larger sample size, and within/cross-population epidemiological studies are necessary to verify the plausibility. Interestingly, emerging data seem to suggest significant regional heterogeneity in influenza mortality across continents (Europe, Americas, Asia), even accounting for country income status and comorbidity [45]. The knowledge on genetic risk can better inform public health-care planning, pandemic preparedness, and the development of preventive strategies against avian/pandemic influenza in our region [15,16,19]. At the individual patient level, we indicate the potential of assessing genetic risk factors, together with virologic parameters in the same respiratory sample, to assist prognostication and treatment decisions (eg, regimen intensification), which warrants exploration [15,16,19,35]. As genetic/ethnic factors can confound outcomes, future clinical trials on influenza therapeutics should consider these variables; our study has provided useful information for their design/planning.
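As a worked example of the PAR% arithmetic, Levin's formula with the cohort's approximate genotype prevalence and the adjusted hazard ratio (used here as a rough stand-in for a relative risk) reproduces an estimate close to the quoted 36%; the inputs below are illustrative.

```python
def par_percent(prevalence, rr):
    # Levin's population-attributable risk percentage.
    return 100 * prevalence * (rr - 1) / (1 + prevalence * (rr - 1))

# ~33% IFITM3 CC prevalence among survivors, aHR ~2.78 -> ~37%
print(round(par_percent(0.33, 2.78), 1))
```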
The strengths of our study include a larger sample size (n = 275; >200 hospitalized and >150 influenza pneumonia cases), unbiased sampling (original diagnostic respiratory specimens from consecutive patients with fatal/nonfatal disease courses), data on pandemic and avian influenza, analyses of well-defined outcome measures under different genetic models, and adjustment for potential confounders. Notably, early NAI treatment was significantly associated with improved outcomes independent of genotype, age, and comorbidity (Table 3) [3,5,6,35]. Interaction and cumulative effects were examined. Unlike genome-wide association studies, however, our approach had limited the number of candidate genes/SNPs for study, and more detailed analyses of haplotypes and population substructure are infeasible with this design. The SNPs' effects on protein function, and how these might impact viral clearance and/or signaling of proinflammatory responses at different disease stages, would require further elucidation (eg, serial viral load and cytokine/chemokine measurements) [14,15,16,18,19,20,28]. D222G, H275Y, and R292K mutations should have a minimal impact on results because of their reported rarity [25,28,29,31,35]. The effects on host susceptibility to initial infection (compared with uninfected patients) [14,15,20,26] and vaccine response, and the reasons why these immune-related genetic variants (thus implicated in infective/noninfective inflammatory diseases) were selected and gained prevalence among the Asian populations, warrant investigation [15,19,37,38,42,43,45]. Studies on seasonal influenza, where preexisting immunity may exist, and on other potential genetic determinants of influenza severity are in progress.
In conclusion, our results suggest that host genetic factors may influence clinical outcomes of pandemic and avian influenza virus infections, and that the effects are cumulative. The impact on disease burden in populations where such risk genotypes are common deserves evaluation. Our findings may have important implications for public health-care planning, patient care, and future designs of clinical trials in the at-risk populations.
Supplementary Data
Supplementary materials are available at The Journal of Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author. | 2018-04-03T02:10:49.876Z | 2017-05-16T00:00:00.000 | {
"year": 2017,
"sha1": "69d39558af827978877ab3fe33ee0630a75ac0fe",
"oa_license": null,
"oa_url": "https://academic.oup.com/jid/article-pdf/216/1/97/18760113/jix235.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "69d39558af827978877ab3fe33ee0630a75ac0fe",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
233961572 | pes2o/s2orc | v3-fos-license | Comparative Evaluation of the Visual and Refractive Outcomes Following SMILE, FS-LASIK, and T-PRK Surgery: A Retrospective, Non-Blinded Clinical Study
Background: To comparatively evaluate the visual and refractive outcomes after small-incision lenticule extraction (SMILE), femtosecond laser-assisted in situ keratomileusis (FS-LASIK), and transepithelial photorefractive keratectomy (T-PRK) surgery. Methods: This was a retrospective, case-series, non-blinded clinical study. Consecutive eligible patients underwent SMILE, FS-LASIK, and T-PRK at the Department of Ophthalmology of Peking Union Medical College Hospital, a tertiary referral center. All myopic patients were treated with corneal refractive surgery (SMILE, FS-LASIK, and T-PRK) using the VisuMax (Carl Zeiss Meditec AG, Jena, Germany) 500-kHz femtosecond laser system and the Amaris 750S excimer laser platform (SCHWIND eye-tech solutions, Kleinostheim, Germany). Visual and topographic astigmatism changes at 6 months were the main outcome measure. Secondary outcomes were the efficacy index at 1, 3, and 6 months postoperatively. Results: We recruited 75 consecutive patients (mean age, 27.88 ± 5.76 years; 68% women; all Asian) with no significant differences between groups in terms of preoperative demographic data, except in preoperative spherical equivalent (SE) (-5.54 ± 1.86 D, -5.64 ± 1.66 D, and -3.78 ± 1.30 D, respectively; P<0.001), astigmatism (1.24 ± 1.62 D, 1.16 ± 0.75 D, and 0.72 ± 0.42 D, respectively; P=0.008), and residual bed thickness (313.08 ± 32.18 μm, 427.59 ± 30.69 μm, and 427.09 ± 41.07 μm, respectively; P<0.001). A superior efficacy index was shown for SMILE and FS-LASIK compared to T-PRK at 1 month after surgery. Conclusions: The results from this retrospective, non-blinded, case-series clinical study suggest that all of the corneal refractive surgery options are safe and effective. However, while SMILE and FS-LASIK procedures have equal visual outcomes, they have superior efficacy index values in the early postsurgical period.
Background
Photorefractive keratectomy (PRK) was first introduced for the surgical correction of myopia [1]; laser ablation refractive surgery then became widely applied in anterior segment operations. However, there are some complications after PRK, such as postoperative pain, discomfort, and a high grade of corneal haze [2]. With advances in techniques used for epithelium removal, femtosecond laser-assisted in situ keratomileusis (FS-LASIK) has emerged as a new approach in the field of refractive surgery. To reduce postoperative pain, corneal ectasia, and dry-eye symptoms [3][4][5], femtosecond lenticule extraction was developed as a one-step procedure to create a flap and a refractive lenticule. A modified procedure, small-incision lenticule extraction (SMILE), potentially offers biomechanical advantages over FS-LASIK surgery [6,7].
The SCHWIND excimer laser (SCHWIND eye-tech-solutions, Kleinostheim, Germany) is a laser platform that uses a six-dimensional tracking system to compensate for eye movements while ablating corneal tissue during corneal refractive surgery [8]. Recently, SmartSurf ACE (Smart Pulse Technology, SCHWIND eye-tech-solutions) touch-free transepithelial PRK (T-PRK) has become a common surgical option, with a one-step treatment system, rapid visual recovery, and functional binocular uncorrected distance visual acuity (UDVA) provided immediately after surgery [9][10][11][12][13]. However, the specific designed stromal ablation profiles were not reported [14,15].
The refractive outcomes and visual quality differ between surgical procedures, and straylight is an important assessment parameter related to visual quality. Xu et al. [16] found that surface ablation significantly increased forward light scattering after surgery, whereas stromal ablation increased it only slightly in the early postoperative stage. However, a network meta-analysis showed that there were no statistically significant differences in either visual outcomes or visual quality between procedures, and that FS-LASIK was more predictable than any other type of surgery [17].
All corneal refractive surgeries can be broadly divided into 3 categories: corneal surface ablation surgery, corneal stromal ablation surgery (involving the creation of a corneal flap), and refractive corneal lenticule extraction (a form of stromal ablation that does not require a flap). This retrospective study aimed to comparatively evaluate the visual and refractive outcomes after SMILE, FS-LASIK, and T-PRK. Patients included in the study received corneal refractive surgery to correct myopia and myopic compound astigmatism. All patients demonstrated at least 1 year of stable refraction before undergoing refractive surgery, and the patients were followed up for at least 6 months. Exclusion criteria included amblyopia, ocular pathology, retinal disorders, previous ocular surgery, or insufficient follow-up.
UDVA and corrected distance visual acuity (CDVA) were assessed using Snellen charts. The CDVA was always assessed using trial frames and not contact lenses. Central corneal thickness was measured by ultrasonic pachymetry (TOMEY, Aichi, Japan), in which each single measurement is the average of five consecutive measurements. Corneal topography was measured by TMS-4N (TOMEY, Erlangen, Germany). The value of residual bed thickness (RBT) was defined as shown in Table 1, together with the flap diameters. Ablations were performed using the AMARIS 750S excimer laser (SCHWIND eye-tech solutions, Kleinostheim, Germany). All corneal ablations were performed in Aberration-Free mode [8], and corneal topography was obtained by videokeratoscopy (Keratron Scout topographer, Optikon 2000 SpA, Rome, Italy) under photopic conditions (270 lux), similar to the conditions under the operating microscope [18]. Ablation was performed on a 6.0-mm to 6.8-mm optical zone. After surgery, a bandage contact lens (PureVision™, Bausch & Lomb, Rochester, NY, USA) was placed over the surgical site.
The VisuMax Femtosecond Laser System (Carl Zeiss Meditec AG, 500-kHz repetition rate) was used to perform SMILE. A small curved interface cone was used during each surgery. The anterior surface of the lenticule was cut in a spiral-out pattern and the posterior surface in a spiral-in pattern, followed by a side-cut of the cap. The energy and spot-distance settings for lenticule creation were 140 nJ and 4.5 µm, respectively. Parameters for the femtosecond laser were a 6.0-mm to 6.5-mm lenticule diameter, 110-µm cap thickness, a 4-mm hinge width at the 120-degree position for lenticule extraction, and a 7.5-mm to 7.6-mm cap diameter with a 90-degree side-cut angle. A spatula was inserted through the side-cut over the roof of the refractive lenticule to dissect this plane and then reach the bottom of the lenticule.
The lenticule was subsequently grasped with modified McPherson forceps (Geuder GmbH, Heidelberg, Germany) and removed.
After surgery, topical tobramycin-dexamethasone (Tobradex; Alcon, Fort Worth, TX, USA) was administered to the eyes 4 times daily for 1 week. Flumetholon (0.1% fluorometholone; Santen, Osaka, Japan) was used 4 times daily for the second week, after which the frequency was decreased by 1 administration per day each week for 1 month. Finally, an antibiotic (0.5% levofloxacin; Santen, Japan) was administered topically 4 times daily for 2 weeks.
Analysis of surgically induced astigmatism
Astigmatic polar value of net astigmatism (AKP) analysis methods [19] were used to analyze astigmatism changes after surgery. All keratometric values were converted to a plus-power net cylinder format (magnitude@axis), with the magnitude of keratometric astigmatism in diopters and the direction in degrees following the steepest keratometric meridian axis. To calculate the postoperative astigmatic polar values, the preoperative steepest keratometric meridian axis was consistently used as a reference, and the changes in polar values from preoperative to postoperative conditions were calculated and compared.
The preoperative and postoperative AKP have been defined by Naeser et al. [20]; for the preoperative net cylinder A@a and the postoperative net cylinder B@b, with the preoperative steep meridian a taken as the reference, the polar values of the surgically induced change are AKP(+0) = B·cos(2(b − a)) − A along the reference meridian and AKP(+45) = B·sin(2(b − a)) for the torque component 45 degrees away.
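A small numeric sketch of these polar value changes follows; the formulae are our reconstruction of Naeser's definitions and the keratometry values are illustrative only.

```python
import math

def akp_change(A, a_deg, B, b_deg):
    # Change in astigmatic polar values from preop A@a to postop B@b,
    # referenced to the preoperative steep meridian a (Naeser's method).
    d = math.radians(2 * (b_deg - a_deg))
    akp0 = B * math.cos(d) - A   # polar value along the preop meridian
    akp45 = B * math.sin(d)      # torque component, 45 degrees away
    return akp0, akp45

# Preop 1.25 D @ 90, postop 0.50 D @ 100 (made-up values)
print(akp_change(1.25, 90, 0.50, 100))
```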
Results

All standard visual and refractive outcomes in terms of efficacy, safety, and topographic astigmatism changes during the 6-month follow-up are reported (Figs. 1 and 2, and Tables 2 and 3). There were no statistically significant differences between groups in AKP(+0) and AKP(+45) during the 6 months following surgery (Table 2). In the current study, all procedures achieved superior refractive efficacy at 6 months, and SMILE and FS-LASIK achieved better efficacy outcomes in the early stage. The efficacy index at 1 month after surgery was 1.00 ± 0.16, 1.04 ± 0.26, and 0.93 ± 0.18 (P = 0.049) for SMILE, FS-LASIK, and T-PRK, respectively (Table 3); a postoperative UDVA of 20/40 or better was achieved in 94%, 98%, and 100% of eyes, respectively (SMILE, FS-LASIK, and T-PRK; Fig. 1), and 20/20 or better in 94%, 90%, and 94%, respectively (SMILE, FS-LASIK, and T-PRK; Fig. 1). In terms of the difference between postoperative UDVA and preoperative CDVA, 57% of the eyes showed no change (62% of eyes in SMILE, 50% in FS-LASIK, and 58% in T-PRK), 22% of eyes gained one or more lines (24% in SMILE, 24% in FS-LASIK, and 18% in T-PRK), and 18% of eyes lost one line (8% in SMILE, 24% in FS-LASIK, and 22% in T-PRK) after corneal refractive surgery (Fig. 2).
Discussion
The results from this retrospective, non-blinded, case-series clinical study demonstrated that all corneal refractive surgeries produced excellent visual and refractive outcomes in terms of refractive efficacy and safety. Our results further suggest that all corneal refractive procedures had similar visual outcomes after surgery. In the analysis of UDVA and CDVA changes, a preoperative CDVA of 20/20 or better was seen in 100%, 100%, and 90% of eyes in SMILE, T-PRK, and FS-LASIK, respectively. Moreover, in terms of the difference between postoperative UDVA and preoperative CDVA, 57% of the eyes showed no change, 22% of the eyes showed a gain of one or more lines, and 18% of eyes showed a loss of one line.
Previously, Tobaigy et al. [21] and Scerrati et al. [22] suggested that the visual and refractive outcomes were better with surface ablation than stromal ablation. However, Kim et al. [23] reported that corneal stromal ablation surgery was superior to corneal surface surgery for high myopia. A longitudinal follow-up study concluded that corneal surface and stromal ablation surgery had similar efficacies for moderate myopia within 2 years, with significantly superior efficacy for corneal surface ablation surgery from 4 years postoperatively; meanwhile, corneal stromal ablation surgery showed greater myopic regression 5 years postoperatively [24]. According to the current short-term follow-up results, the corneal stromal ablation surgery and corneal lenticule extraction procedures had superior early-stage outcomes compared with the corneal surface ablation technique. Moreover, there was no statistically significant difference in efficacy among the procedures.
For this retrospective clinical study, we used the AKP analysis method [20] to evaluate astigmatism changes after surgery and found that, according to this algorithm, corneal ablation was not significantly different among the procedures during the 6-month postoperative period. However, there was a statistically significant difference in AKP(+0) preoperatively. This means that corneal refractive surgery corrects the refractive error while changing the corneal biomechanical properties, by using the bitoric LASIK technique with an aspheric profile to create a smooth transitional zone between the treated and untreated cornea [25][26][27]. This ablation technique was achieved by balancing the negative and positive cylinder ablations, creating a more aspheric optical zone. Moreover, the optimized centration in the SMILE procedures between the corneal vertex and the optical zone center [28] was analyzed, and it was found that there was no significant difference in centration between SMILE and LASIK procedures [29].
There were also no major intraoperative or postoperative complications reported during the study period. Flumetholon was applied for the patients with minor postoperative symptoms such as visual fluctuation and dry eye, which were temporary (resolved within 3 months postoperatively) and did not differ significantly in their occurrence between all eyes included in this clinical study [30]. Of note, the efficacy, predictability, and safety outcomes of all procedures in the current case-series study at 6 months postoperatively were comparable with previously reported studies [17,31].
We recognize that this retrospective clinical study has some limitations. First, this was a short-term follow-up study, whereas a previous study with 10-year follow-up reported myopic regression after corneal refractive surgery [24]. Second, there was no evaluation of visual quality (which may include increased occurrence of symptoms such as halos, glare, and starbursts) within groups. However, we noted that patients reported more uncomfortable symptoms (such as fluctuation in vision) in SMILE- or T-PRK-treated eyes than in FS-LASIK-treated eyes at 1 and 3 months after surgery. These symptoms reportedly diminished, and there was no difference between the eyes by 6 months. These results are important when counseling patients before surgery and explaining what to expect after the procedure, factors that are sometimes more pertinent to the patient than scientific results. Finally, there were no statistically significant differences in refractive outcomes or efficacy between procedures in the early postsurgical period. By contrast, other reports found that postoperative outcomes were significantly better for the corneal stromal ablation than the corneal surface ablation technique [23], and superior refractive outcomes were obtained in SMILE procedures, which are a more surgeon-dependent surgical technique. Further understanding of the ablation algorithms of the femtosecond and excimer lasers, with more advanced clinical trial studies, is needed to improve postoperative visual and refractive outcomes.
Conclusions
In summary, our case-series, non-blinded, retrospective clinical study suggests that all of the corneal refractive surgeries are able to provide excellent visual outcomes for myopia and myopic compound astigmatism in terms of visual and refractive predictability, efficacy, and safety. Moreover, other groups have suggested that SMILE is a refractive technique that is more surgeon-dependent than other types of corneal refractive surgery [31]. There were no statistically significant differences in visual or refractive outcomes between the procedures in the current study. Regarding the flap-related healing process, the outcomes were superior in SMILE and FS-LASIK compared with T-PRK. However, the SMILE procedure needs a thicker cornea for lenticule extraction than FS-LASIK in equally myopic patients.
Abbreviations: SMILE: small-incision lenticule extraction; FS-LASIK: femtosecond laser-assisted in situ keratomileusis; T-PRK: transepithelial photorefractive keratectomy; UDVA: uncorrected distance visual acuity; CDVA: corrected distance visual acuity; AKP: astigmatic polar value of net astigmatism; CCT: central corneal thickness; RBT: residual bed thickness

Declarations

Ethics approval and consent to participate: This study was approved by the Ethics Committee of the Peking Union Medical College Hospital (China) and followed the tenets of the Declaration of Helsinki. Written informed consent was obtained from all participants.
Consent for publication: Not applicable.
Availability of data and materials: Available from the corresponding author on reasonable request. | 2021-05-08T00:02:51.906Z | 2021-02-25T00:00:00.000 | {
"year": 2021,
"sha1": "0078ab47723c20eccf623e738a38aaacc140af27",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-257979/v1.pdf?c=1614307869000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "f4e65893378734bb6ca888c664aa7bca35f9c555",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195788087 | pes2o/s2orc | v3-fos-license | Is enrolment in the national health insurance scheme in Ghana pro-poor? Evidence from the Ghana Living Standards Survey
Objectives This article examines equity in enrolment in the Ghana National Health Insurance Scheme (NHIS) to inform policy decisions on progress towards realisation of universal health coverage (UHC). Design Secondary analysis of data from the sixth round of the Ghana Living Standards Survey (GLSS 6). Setting Household based. Participants A total of 16 774 household heads participated in the GLSS 6 which was conducted between 18 October 2012 and 17 October 2013. Analysis Equity in enrolment was assessed using concentration curves and bivariate and multivariate analyses to determine associated factors. Main outcome measure Equity in NHIS enrolment. Results Survey participants had a mean age of 46 years and mean household size of four persons. About 71% of households interviewed had at least one person enrolled in the NHIS. Households in the poorest wealth quintile (73%) had enrolled significantly (p<0.001) more than those in the richest quintile (67%). The concentration curves further showed that enrolment was slightly disproportionally concentrated among poor households, particularly those headed by males. However, multivariate logistic analyses showed that the likelihood of NHIS enrolment increased from poorer to richest quintile, low to high level of education and young adults to older adults. Other factors including sex, household size, household setting and geographic region were significantly associated with enrolment. Conclusions From 2012 to 2013, enrolment in the NHIS was higher among poor households, particularly male-headed households, although multivariate analyses demonstrated that the likelihood of NHIS enrolment increased from poorer to richest quintile and from low to high level of education. Policy-makers need to ensure equity within and across gender as they strive to achieve UHC.
Strengths and limitations of this study
► Our study is the first to use data from the Ghana Living Standards Survey to examine equity in enrolment in the National Health Insurance Scheme (NHIS).
► We developed concentration curves and multivariate logistic regression models to produce new findings to inform decision-making.
► Unlike previous studies, this study found that enrolment in the NHIS is slightly concentrated among the poor; however, the odds of enrolling increase with wealth quintile, level of education and age.
► As a secondary analysis, the data used for the study lack a number of important factors including trust in scheme management, perceived quality of care, ease of enrolment, etc., which would be useful for better understanding NHIS enrolment.

Introduction

Many low-income and middle-income countries are increasingly implementing prepayment schemes to provide financial risk protection and equitable access to healthcare services for their populations, particularly the poor. [1][2][3] Prepayment schemes such as social health insurance, if implemented effectively, can reduce out-of-pocket payments (OOP) and associated catastrophic effects on households. 4 The quest to ensure equity in access to healthcare services and to achieve Universal Health Coverage (UHC) has become more imperative, following adoption of the Sustainable Development Goals (SDGs) by member countries of the United Nations. Equity in prepayment schemes is also recognised by WHO as one of the fundamental elements of UHC. 5 Ghana had a free healthcare system after independence in the 1950s, financed by general taxation. However, this system of healthcare changed when the economy started declining, and user fees were partially introduced in the 1970s and 1980s to offset the costs of healthcare services delivery. [6][7][8] Although the OOP somewhat helped public healthcare services providers to recover partial costs of essential medicines and other pharmaceutical products and to raise revenue, the system created inequity in access to healthcare and in some cases led to avoidable deaths. 6 8 9 This situation resulted in the introduction of a National Health Insurance Scheme (NHIS) in 2003 to replace OOP and ensure equity in healthcare access. 10 The NHIS is managed by the National Health Insurance Authority, a body mandated by law to regulate both public and private health insurance schemes in the country. 11 Membership in the NHIS is broadly categorised into exempt and non-exempt groups. 11 The exempt groups are members who are exempted from paying premiums to the scheme, and they include persons below 18 years of age, persons aged 70 years and above, pregnant women, indigents (extreme poor), formal sector workers who contribute to the Social Security and National Insurance Trust (SSNIT) and beneficiaries of the Livelihood Empowerment Against Poverty (LEAP) programme. The non-exempt group includes members who pay premiums and enrolment processing fees to the scheme, and these are workers in the informal sector of the economy. The NHIS is tax-funded through the National Health Insurance Fund, which is based on a 2.5% levy on selected goods and services. Other sources of funding are a 2.5% deduction from formal sector workers' SSNIT contributions, premiums from informal sector workers, funds allocated by parliament, interest from investments and donor funds and gifts. 11 The premium and enrolment processing fee from the non-exempt group is GHS30.00 (US$6.33) per year. However, the exempt group only pays a processing fee of GHS8.00 (US$1.69) for new enrolment and GHS5.00 (US$1.05) for renewal of membership per year. Relative to the per capita income of GHS8863 (US$2035), 12 the NHIS premium and processing fee represent 0.34%. Again, relative to the daily minimum wage of GHS10.65 (US$2.25) 13 or GHS2769.00 (US$584.18) per year, the NHIS premium and processing fee constitute 0.38%. Like many health systems around the globe, Ghana's health system is hierarchical, with the Ministry of Health (MoH) as the apex body mandated to formulate policies to improve the health of the population. 14 The MoH has about 12 agencies, comprising the public, quasi-government and private health facilities, as well as health education institutions. The biggest agency is the Ghana Health Service, which is charged with the responsibility of delivering healthcare to the population, as well as implementing policies of the MoH. The Ghana Health Service has a decentralised system of healthcare delivery, with a considerable number of healthcare facilities located across the country. The lowest level of the healthcare delivery system is the Community-based Health Planning and Services compound, and the highest is the tertiary or teaching hospitals at the national level. The numbers of healthcare facilities and professionals are unevenly distributed across the country, with the majority located in urban areas. 15 16 On the other hand, many of the private healthcare facilities, particularly the faith-based ones, are located in remote areas, where they provide about 40% of healthcare services to the population. 14 Evidence shows that the NHIS has made progress in population coverage and contributed to utilisation of healthcare services and to expansion of healthcare facilities in its short period of existence. 17 A report of the NHIS shows that the scheme has covered 36% (10.8 million) of the population as of December 2018. 18
It has 166 district offices and a network of over 4000 healthcare providers comprising both public and private healthcare facilities across the country. The benefits package reportedly covers 95% of the disease conditions afflicting the population. It broadly covers outpatient services, inpatient services, oral health, eye care services, maternity care and emergencies. 19 Preventive services (for example, immunisation) and services that have the potential to pose sustainability challenges are excluded from the benefit package. 9 11 There are few equity-oriented studies of the NHIS in Ghana. A mixed-method study that evaluated equity in NHIS enrolment in two regions (Central and Eastern) found that more males had registered in the scheme than females and that households in the richest quintile were significantly more likely to enrol than those in the poorest quintile. 1 The study also found that old age, higher education, female-headed households and perceived NHIS benefits were significantly associated with NHIS enrolment. Another mixed-method study examining why the NHIS is not reaching the poor used the same two regions and found fewer of the poor to be covered, owing to poverty and to policy-makers' and implementers' lack of commitment to pursue the NHIS's equity goal. 20 Kusi et al, 21 in examining the affordability of the NHIS contribution, used three districts from the southern, middle and northern ecological zones of Ghana and also found that significantly more of the rich were enrolled in the NHIS than the poor. These three studies were conducted in 2008 and 2011 and employed bivariate and logistic regression analyses to examine enrolment equity. Other studies that also examined equity in NHIS enrolment, using data from the 2008 Ghana Demographic Health Survey, employed concentration curves and logistic regression and found that coverage was highest among the educated, households in the richest quintile and urban residents. 22 23 This study examines equity in enrolment in Ghana's NHIS to inform policy decisions regarding attainment of UHC. It is necessary now to study equity in order to assess major NHIS policy reforms instituted in recent years to make the scheme more attractive to the general public. One such policy is the intersectoral collaboration with state-owned social protection institutions, for example, the Ministry of Gender and Social Protection, the Ministry of Education, the LEAP Secretariat and the Savannah Accelerated Development Authority, to increase the population of the poor and vulnerable in the NHIS and to improve equity. Findings from this study can inform policy-making on UHC attainment and contribute to the body of knowledge on equity in NHIS enrolment and progress towards achieving the SDGs.
Methods
Study design and setting
This study analyses secondary data from the sixth round of the Ghana Living Standards Survey conducted between 18 October 2012 and 17 October 2013. The survey covered a representative sample of 18 000 households in 1200 enumeration areas across the 10 administrative regions of the country. 24 Survey participants had an average age of 44 years for males and 48 years for females. In the 2010 Population and Housing Census, Ghana had a population of 24 658 823, with 51.2% being females. The majority of the population resided in the Ashanti (19.4%) and Greater Accra (16.3%) regions, the two most urbanised regions 25 of the country. These two regions also have the lowest poverty rates, while those in the northern savannah ecological zones (Northern, Upper East, Upper West, Brong-Ahafo, Volta) have the highest poverty rates. 26 Online supplementary appendices 1 and 2 provide more details on the population distribution and poverty profile of Ghana.
Data collection and analysis
Data were sourced from the Ghana Statistical Service (GSS) and had already been cleaned and managed, including creation of sampling weights and wealth quintiles. The GSS constructed the wealth quintiles using household expenditure as a proxy. 24 Household expenditure is composed of food and non-food items. The total number of households covered in the survey was divided into five groups by their total household consumption expenditure. The quintile ranking was then constructed using household members' total expenditure per capita. Bivariate analyses examined unadjusted relationships between socio-demographic factors and wealth quintiles. Equity in enrolment was assessed using concentration curves and indices, and multivariate logistic regression models were used to determine factors associated with enrolment. 1 22 27 28 While the concentration curve analyses equity in NHIS enrolment between the poor and the rich, the logistic regression model shows factors associated with enrolment in the scheme. The use of these two analytical techniques is therefore meant to produce reliable findings for informed policy decision-making.
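To make the analytical approach concrete, the sketch below computes a concentration curve and a concentration index for enrolment with households ranked by per-capita expenditure. It is a minimal illustration, not the authors' STATA workflow: the covariance formula for the index, the variable names and the simulated data are all assumptions introduced here.

```python
# Minimal sketch of a concentration curve and index for NHIS enrolment.
# Assumes household records of (per-capita expenditure, enrolled flag);
# the convenient-covariance formula and toy data are illustrative only.
import numpy as np

def concentration_index(expenditure, enrolled):
    """C = 2*cov(h, r)/mean(h), with r the fractional wealth rank.

    C < 0: enrolment concentrated among the poor; C > 0: among the rich.
    """
    order = np.argsort(expenditure)              # rank households poorest -> richest
    h = np.asarray(enrolled, dtype=float)[order]
    n = len(h)
    r = (np.arange(1, n + 1) - 0.5) / n          # fractional rank in [0, 1]
    return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

def concentration_curve(expenditure, enrolled):
    """Cumulative share of enrolment vs cumulative share of households."""
    order = np.argsort(expenditure)
    h = np.asarray(enrolled, dtype=float)[order]
    x = np.arange(1, len(h) + 1) / len(h)
    y = np.cumsum(h) / h.sum()
    return x, y

# Toy data: enrolment slightly concentrated among poorer households.
rng = np.random.default_rng(0)
exp_pc = rng.lognormal(mean=7, sigma=1, size=5000)
p_enrol = np.clip(0.8 - 0.05 * (exp_pc > np.median(exp_pc)), 0, 1)
enrol = rng.random(5000) < p_enrol
print(f"concentration index = {concentration_index(exp_pc, enrol):+.3f}")
```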
The unit of analysis was the household, and we examined the cumulative proportion of enrolment by wealth quintiles, decomposed by sex, within and across male-headed and female-headed households. A multivariate logistic regression model was employed to assess whether lower wealth groups were more likely to enrol in the NHIS than higher wealth groups, holding the other socio-demographic variables constant. The outcome or dependent variable 'NHIS enrolment status' was labelled 1 for active card-bearing members and 0 for inactive card-bearing members or those who had never enrolled in the scheme. The main independent variable was 'wealth quintile' and the others (control variables) were socio-demographic characteristics such as age of household head, sex of household head, household size, education level of household head, household head employment status, household setting and geographic region of residence. Age of household head was categorised based on the Medical Subject Headings age definition. 29 30
Microsoft Excel 2016 and STATA V.13 were used for all analyses.
Patient and public involvement
Patients were not involved in this study.
Results
Characteristics of study participants
A total of 16 772 household heads, with an average age of 46 years (SD=15.58) and an average household size of 4 persons (SD=2.78), responded to questions on the NHIS in the survey (table 1). The largest group of household heads (47%) was in the age bracket of 25-44 years. Of the total number of survey participants, 72% were females; 51% had no formal education; 90% were employed; 24% were in the richest quintile; 56% lived in urban areas; and 12% resided in the Ashanti region. About 71% of households had at least one person enrolled in the NHIS.
Equity in enrolment
Results of the concentration curve analyses demonstrate that enrolment was slightly more concentrated among poor households (figure 1). Decomposition by sex showed that enrolment was more concentrated among households headed by males than among those headed by females. The concentration indices further revealed that, among the study participants, equity was more pronounced among the insured than the uninsured and within male-headed households than female-headed households (table 2).
Relationship between household characteristics and wealth quintiles
There were significant differences in all household characteristics by wealth quintiles, except employment status (table 3). The poorest households (73%) enrolled in the NHIS more than the richest households (67%).
Interestingly, the richer households had the second highest enrolment (72.4%) in the scheme. The majority of the poorest households (80.1%) had no formal education, compared with about 25% of the richest households who had tertiary-level education. Similarly, most of the poorest household heads (91%) were employed, as were the richest (89%), and there were more females (79%) in the poorest quintile than in the richest quintile (67%). There were also significantly more household heads aged 45 years or more in the poorest quintile than in the richest quintile, and more households in the poorest quintile (86%) living in rural settings than households in the richest quintile (30%). Results of the multivariate logistic regression showed that the likelihood of enrolling in the NHIS increases from the poorer to the richest quintile, from low to high level of education and from young adults to older adults.
This contrasts with the unadjusted results for the variable of interest (wealth quintile), which showed a decreased likelihood of enrolling in the NHIS from poorer to richest.
Discussion
This study examined equity in NHIS enrolment employing data from the Ghana Living Standards Survey (round 6), which was conducted between October 2012 and October 2013. The findings show inequity in enrolment and significant associations between socio-demographic factors and NHIS enrolment. Among households surveyed, enrolment is disproportionately concentrated among poor households, especially those headed by males. A possible explanation relates to policy changes made over the last few years to increase enrolment in the scheme. One such policy is the deliberate attempt to increase the numbers of the poor and vulnerable in the scheme through enrolment of LEAP beneficiaries, students in secondary and tertiary institutions in Ghana, prisoners and individuals living in less developed geographic regions, particularly those in the northern savannah ecological zone, where there is a high prevalence of poverty. The disproportionate concentration of enrolment among poor households contradicts previous studies on the NHIS, 1 20-22 31 32 possibly because of the years in which those studies were conducted (2008 and 2011), as well as their limited regional scope (three administrative regions, except for the 2008 Demographic Health Survey, which covered the entire country). The present study employs a nationally representative survey.
Our study also shows that a number of socio-demographic factors are significantly associated with NHIS enrolment. Although unadjusted findings illustrate that enrolment is concentrated among poor households, multivariate findings illustrate that the odds of enrolling in the scheme increase with wealth quintile; that is, the rich are more likely to enrol than the poor. This may be attributed to evidence that the rich are better able to afford the cost of enrolling in the health insurance programme than the poor. 1 20 33 34 Besides, as explained earlier, the policy decision to deliberately enrol the poor might have contributed to their higher numbers in the NHIS, whereas for voluntary enrolment, factors other than being poor contribute to joining the scheme. Individuals with higher levels of education are more likely to enrol in the NHIS compared with those with no formal education; females are more likely to enrol than males; and older adults are more likely to enrol than young adults, consistent with previous studies. 1 22 32-35 The employed are less likely to enrol compared with the unemployed. A plausible explanation is that the employed may be able to afford OOP for healthcare services because they are more economically resourced than the unemployed. This result runs counter to earlier studies. 21 35 Findings from this study also reveal that individuals residing in rural settings are significantly less likely to enrol in the NHIS compared with those living in urban areas, consistent with previous studies 32 35 but contradicting a study by Jehu-Appiah et al. 1 One reason may be poverty; prior studies showed that the majority of rural dwellers are unable to afford the NHIS premium and processing or renewal fee. 20 31 34 36-38 This study's findings also show that the odds of enrolling in the NHIS increase with household size, consistent with other studies, 22 33 34 because larger households may be risk averse and would thus enrol in the NHIS to seek financial risk protection against their healthcare costs and to avoid catastrophic OOP. Our findings also reveal that individuals residing in less developed regions of the country are significantly more likely to enrol in the scheme compared with those in developed regions. Again, this may be attributed to policy reforms focused on enrolling individuals living in deprived regions, particularly those in the northern savannah ecological zones, comprising the Northern, Upper East, Upper West and some parts of the Brong-Ahafo and Volta regions, 24 consistent with some studies 22 23 and contradicting others. 35 Our study's primary limitation is that the data lacked several important factors (such as trust in scheme management, perceived quality of care and ease of enrolment) which would be useful for better understanding NHIS enrolment. Nonetheless, the variables used in the multivariate logistic regression modelling did not significantly affect model robustness.
Conclusion
The study reveals that from 2012 to 2013, enrolment in the NHIS was higher among poor households, particularly male-headed households, although the multivariate analyses demonstrated that the likelihood of NHIS enrolment increased from poorer to richest quintile, low to high level of education and young adults to older adults. While the NHIS strives to achieve its pro-poor goal of providing financial risk protection for the poor and vulnerable in society, equity must be addressed within and across the entire population. Adequate funds are also required to cover the anticipated increase in medical claims costs because as more poor and vulnerable groups enrol in the scheme, the claims cost is likely to escalate and threaten the scheme's sustainability. Thus, policy decisions to ensure equity in enrolment must also ensure commensurate funding to avoid financial uncertainty and collapse. Further research on equity in healthcare services utilisation, expenditures and accreditation of healthcare providers is needed to provide a fuller picture of equity assessment in the NHIS. | 2019-07-04T13:05:55.865Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "b657c20a0ef8d6dd63da91793e7edd6c32fabc65",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/9/7/e029419.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "b657c20a0ef8d6dd63da91793e7edd6c32fabc65",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
224819414 | pes2o/s2orc | v3-fos-license | All sheeps and sizes: a genetic investigation of mature body size across sheep breeds reveals a polygenic nature
Summary
Mature body size is genetically correlated with growth rate, an important economic trait in the sheep industry. Mature body size has been studied extensively in humans as well as in cattle and other domestic animal populations, but not in sheep. Six-hundred and sixteen ewes, across 22 breeds, were measured for 28 linear measurements representing various skeletal parts. PCA from these measures generated principal components 1 and 2, which represented 66 and 7% of the phenotypic variation, respectively. Two-hundred and twenty sheep were genotyped on the Illumina Ovine HD beadchip for a GWAS investigating mature body size and linear body measurements. Forty-six (Bonferroni P < 0.05) SNP associations across 14 chromosomes were identified utilizing principal component 1, representing overall body size, revealing mature body size to have fewer loci of large effect than in other domestic species such as dogs and horses. Genome-wide associations for individual linear measures identified major quantitative trait loci for withers height and ear length. Withers height was associated (Bonferroni P < 0.05) with 12 SNPs across six chromosomes, whereas ear length was associated with a single locus on chromosome 3, containing MSRB3. This analysis identified several loci known to be associated with mature body size in other species, such as NCAPG, LCORL and HMGA2. Mature body size is more polygenic in sheep than in other domesticated species, making the development of genomic selection for the trait the most efficient option for maintaining or reducing mature body size in sheep.
Introduction
Growth rate is an important economic trait for sheep producers in the USA because live weight at slaughter or carcass weight determines a producer's income from lamb sales. Taylor (1980) showed that mature size can be used as the only parameter in a model of growth curves. Therefore, as producers select lambs for increased growth rates, an increase in mature body size is likely to occur as an unintended consequence (Herd et al. 1993; Borg et al. 2009). Mature weight has been shown to be heritable, with estimates of 0.30 in the Chios breed, 0.38-0.53 in Targhee, 0.73-0.76 in South African Merino and 0.41-0.43 in Lleyn (Mavrogenis & Constantinou 1990; Borg et al. 2009; Ceyhan et al. 2015; Nemutandani et al. 2018). US sheep breeds vary in mature weight from a low of approximately 32 kg, seen in breeds such as the Shetland, to a high of 114 kg, seen in the Suffolk. Increases in mature body size may lead to increased energy demands, resulting in higher feed maintenance requirements for the ewe flock. There are also indirect impacts on handling facilities designed for smaller sheep, and physical handling may become more difficult during shearing and other management tasks. Genetic correlations between mature weight and lamb growth rates and weaning weights have been estimated to range from 0.31 to 0.84 across multiple breeds and studies (Mavrogenis & Constantinou 1990; Safari et al. 2007; Borg et al. 2009; Ceyhan et al. 2015; Nemutandani et al. 2018). Whereas improved growth rate results in more lamb being sold, the increase in ewe maintenance and other indirect costs negatively impacts a shepherd's bottom line, reducing the economic benefit.
There have been few studies on using linear measures to estimate mature body size within sheep (Mavule et al. 2013). A previous study estimated heritabilities for various linear body measurements ranging from 0.26 to 0.57 across three different sheep breeds (Janssens & Vandepitte 2004). Linear measurements could be used to estimate mature body size in sheep, similar to frame scoring in cattle (Dhuyvetter 1995). Studies on horses and dogs have used linear measures in PCA successfully to approximate mature body size (Chase et al. 2002;Brooks et al. 2010). Genomic studies on mature body size have been successful in determining the genetic nature of mature body size in cattle, horses and dogs (Sutter et al. 2007;Pryce et al. 2011;Makvandi-Nejad et al. 2012;Bouwman et al. 2018).
Previous GWASs in sheep have focused on mapping growth rates, weight, height and other carcass traits (Al-Mamun et al. 2015;Bolormaa et al. 2016;Kominakis et al. 2017;Zhang et al. 2019), but to date, no GWASs have been reported specifically for mature body size in sheep. These studies have associated weight and height with genes such as NCAPG and LCORL (Al-Mamun et al. 2015), which appear to be shared across mammals for influencing mature body size, whereas others have highlighted a single aspect of body size such as associating SMARCA5 and GAB1 with chest width in the Hulun Buir sheep (Zhang et al. 2019). The aim of this study was to identify genetic associations with mature body size across several sheep breeds present within the US.
Sample collection
All sampling followed Cornell University's Institutional Animal Care and Use Committee standards (Protocol no. 2014-0121) for animal handling, after obtaining owner consent from private commercial flocks. Whole blood was collected from the jugular vein into 10 ml vacutainers with K2EDTA anticoagulant. DNA was extracted from whole blood using the Qiagen Puregene protocol (Gentra Systems Inc.) and stored at −80°C until genotyping.
All measures were collected with a flexible tape measure pulled taut against the skin to ensure minimal variability from wool length differences. Measurements were taken within four weeks of shearing for wool sheep or after spring shedding and prior to winter wool growth in hair sheep, which prevented wool length from being an impediment to collecting body measurements and minimized variation owing to wool or hair growth. All ewes measured were at least one and a half years of age. Sheep were restrained, either tied by a halter or held by the head. All measures were collected by a single data collector to minimize potential bias. Maximum girth was excluded from subsequent analysis owing to varying pregnancy status among ewes measured. Ewes were sampled to represent a diversity of breeds in breed groups, economic uses and mature body sizes. Six-hundred and sixteen ewes, across 22 breeds, had a full set of the 28 measures for use in downstream analysis. This dataset includes breeds such as Suffolk and Hampshire that represent common large breeds used as terminal sires in the US sheep industry. At the other end of the size spectrum are Shetland, Jacob and Icelandic, representing smaller breeds primarily used for wool within the US. We were fortunate to sample nine breeds listed as heritage breeds by the Livestock Conservancy™, representing genetically unique populations with varying body sizes, which included the Romeldale, Jacob, Hog Island, Clun Forest and others. The number of ewes measured per breed can be found in Table 1.
Phenotypic analysis
The six-hundred and sixteen ewes and 28 measures were used in a correlation matrix PCA, performed using R statistical software (R Core Team 2018). PCA was used to reduce the dimension of the measurement data while retaining as much variance as possible.
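As an illustration of this step, the sketch below runs a correlation-matrix PCA (equivalently, PCA on z-scored columns) on a simulated 616 × 28 measurement matrix. The paper used R; this Python version, the latent size factor and the simulated data are assumptions introduced purely for illustration — only the matrix dimensions follow the study.

```python
# Sketch of a correlation-matrix PCA on linear body measurements.
# A correlation-matrix PCA equals PCA on z-scored columns; the array
# shape (616 ewes x 28 measures) mirrors the paper, the data do not.
import numpy as np

rng = np.random.default_rng(1)
size = rng.normal(0, 1, (616, 1))                 # latent "overall size" factor
X = size + 0.5 * rng.normal(0, 1, (616, 28))      # 28 correlated measures

Z = (X - X.mean(axis=0)) / X.std(axis=0)          # z-score -> correlation PCA
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)           # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

explained = eigvals / eigvals.sum()
print(f"PC1 explains {explained[0]:.1%}, PC2 {explained[1]:.1%}")

scores = Z @ eigvecs                              # PC scores per ewe
loadings = eigvecs * np.sqrt(eigvals)             # correlation-scale loadings
# PC1 loadings all sharing one sign (|loading| > 0.4) would be read,
# as in the paper, as an overall body-size axis.
print("PC1 loadings all same sign:",
      np.all(np.sign(loadings[:, 0]) == np.sign(loadings[0, 0])))
```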
Genome-wide associations
Two-hundred and twenty ewes representing 14 breeds were genotyped on the Illumina Ovine HD SNP chip (Illumina Inc.). Some individuals had already been genotyped for prior studies (Posbergh et al. 2019), so additional ewes were selected to represent the extreme values in body size as reflected in PC1 score. The number of ewes genotyped per breed can be found in Table 1. Quality control was applied, and SNPs were retained if they passed the following thresholds: SNP MAF greater than 0.01, SNP call rate greater than 0.9, individual call rate greater than 0.9, mapped to the autosomes and no more than two alleles per SNP. Following this quality control, 217 ewes and 506 939 SNPs were utilized for the associations. GWAs were performed using EMMAX, fitting the genomic relationship matrix as a random effect to adjust for potential population structure in the dataset (Kang et al. 2010). An additive model was used, and no additional covariates or fixed effects were added. Phenotypes utilized for GWA were principal components 1 and 2 and the individual body measures, using the full dataset of 217 ewes. A Bonferroni threshold of 0.05 was used to account for multiple testing. Genome coordinates are from the Rambouillet version 1.0 assembly. Quality control and genome-wide associations were performed using the SNP and Variation Suite (version 8.7.2 win64; Golden Helix, www.goldenhelix.com). Candidate genes were considered if they were within a 1 Mb window surrounding an associated marker.
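The sketch below mirrors the quality-control filters and the Bonferroni cutoff just described. The 0/1/2 genotype coding with NaN for missing calls and the simulated matrix are assumptions for illustration; this is not the SNP and Variation Suite or EMMAX workflow.

```python
# Sketch of the SNP/individual quality-control filters and the Bonferroni
# threshold described above, on simulated genotypes coded 0/1/2 with
# np.nan for missing calls. Thresholds follow the text; the data do not.
import numpy as np

rng = np.random.default_rng(2)
G = rng.choice([0, 1, 2, np.nan], p=[0.45, 0.3, 0.2, 0.05], size=(220, 1000))

def qc_filter(G, maf_min=0.01, snp_call=0.9, ind_call=0.9):
    called = ~np.isnan(G)
    snp_ok = called.mean(axis=0) > snp_call              # SNP call rate
    p = np.nanmean(G, axis=0) / 2.0                      # allele frequency
    maf_ok = np.minimum(p, 1 - p) > maf_min              # minor-allele frequency
    keep_snps = snp_ok & maf_ok
    keep_inds = called[:, keep_snps].mean(axis=1) > ind_call  # individual call rate
    return G[np.ix_(keep_inds, keep_snps)]

Gqc = qc_filter(G)
n_snps = Gqc.shape[1]
bonferroni_p = 0.05 / n_snps                             # multiple-testing cutoff
print(f"{Gqc.shape[0]} ewes, {n_snps} SNPs retained; "
      f"genome-wide significance at p < {bonferroni_p:.2e}")
```

With the paper's 506 939 post-QC SNPs, the same 0.05 Bonferroni correction corresponds to a per-SNP threshold of roughly 1e-7.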
Body measures and PCA
Principal component 1 (PC1) had all 28 factors significantly loading (>0.40) in the same direction and was interpreted as overall mature body size (Fig. 1). Figure 2 shows the distribution of PC1 scores sorted by the median value of each breed for breeds which had five or more individuals sampled. Principal component 2 (PC2) was predominantly influenced by jaw width and neck circumference loading negatively and fore cannon length loading positively (>0.40; Fig. 1). Despite the rest of the loadings being less than 0.40, nearly all of the widths and circumferences loaded in one direction whereas the lengths loaded in the opposite direction, leading us to interpret PC2 as overall thickness. The remaining 26 PCs explained little phenotypic variance (<3% individually) and the loadings became increasingly difficult to interpret so we chose to utilize only PC1 and PC2 for further study. Principal components 1 and 2 explained 66.3 and 7.85% of the phenotypic variance respectively. See Fig. 3 for a scatterplot of PC1 vs. PC2 across the 616 measured ewes.
Genome-wide associations
Principal components 1 and 2
Forty-six SNPs, across 14 chromosomes, were associated with PC1 in the across-breed analysis (Bonferroni corrected P-value < 0.05; Fig. 4). These associated markers are close (within a 1 Mb window) to genes such as APP, IGFBP2, IGFBP5, HMGA2, MSRB3, NCAPG/LCORL and the HOXA and HOXB clusters previously associated with body size in other species (Eckstein et al. 2002; Pearson et al. 2005; Sutter et al. 2007; Pryce et al. 2011; Makvandi-Nejad et al. 2012; Bouwman et al. 2018). A full listing of PC1-associated regions and nearby genes can be found in Table S1. The SNP with the highest −log10(P-value), located on chromosome 10 at 30 964 378 bp, explained 19.6% of the variance. The association with PC2 did not yield any SNP associations which passed a Bonferroni corrected P-value < 0.05 threshold.
Withers height
Twelve SNPs, on six chromosomes, were associated (Bonferroni corrected P-value < 0.05) with withers height in the across-breed analysis (Fig. 5). Seven SNPs are located on chromosome 3 in the same region as identified in the PC1 analysis containing HMGA2 and MSRB3. The others are located on chromosomes 1, 9, 11, 12 and 20. A full listing of withers height-associated regions and nearby genes can be found in Table S2.
Ear length
The association with ear length identified 12 SNPs (Bonferroni P-value < 0.05) located on chromosome 3 between 165 545 009 and 165 619 012 bp, which includes MSRB3. We also ran the association for ear length with PC1 included as a covariate to account for overall body size, as larger sheep are expected to have larger ears (phenotypic r² = 0.77). This resulted in five SNPs passing a Bonferroni threshold of 0.05, all of which are a subset of the 12 SNPs associated without the PC1 correction. Figure 6 shows Manhattan plots of the associations for ear length without and with PC1 as a covariate.
The remaining linear measurements did not yield SNPs which passed a Bonferroni multiple-testing-corrected P-value of 0.05.
Discussion
This study found mature body size to have more QTL in sheep, although with smaller effect sizes, than in other domesticated species, such as horses and dogs (Sutter et al. 2007; Makvandi-Nejad et al. 2012). The associated regions reinforce the conclusion from a cattle meta-analysis that there is a shared set of genes which regulate mammalian body size (Bouwman et al. 2018). We also identified POLR2A, EIF4A1, ATP1B2, ACADVL, FGF11, TNFSF12 and TNK1 in an associated region on chromosome 11 which overlapped with genes identified in the Frizarta breed by markers suggestively associated (P-value < 0.10) with body size (Kominakis et al. 2017). However, no other genes identified in Kominakis et al. overlapped with those identified in our PC1 associations. This difference is likely a result of looking within a single breed, which may be fixed for certain size-related loci owing to selection for uniformity. Specifically, across breeds, HMGA2 and MSRB3 on chromosome 3 and RXFP2 on chromosome 10 have been found to be under selection across the world's sheep breeds (Kijas et al. 2012). Similar selection signatures for size-related genes have been detected around the HOXA cluster, NCAPG/LCORL and LAP3 across Russian sheep breeds (Yurchenko et al. 2019). The present results validate that selection for body size has occurred across sheep breeds by utilizing a direct phenotype instead of a population-based approach. However, it is likely that this difference in mature body size is due to selection for production traits such as body growth, wool quality and/or milk production rather than strict selection for size, as seen in various breeds of horses and dogs.
One unique finding was the linked block of markers found on chromosome 3 within methionine sulfoxide reductase B3 (MSRB3) that were associated with ear length. This gene was recently reported by Paris et al. for its association with large and/or floppy ear type in sheep using a populationbased approach (Paris et al. 2020). MSRB3 has also been shown to regulate ear size in pigs (Zhang et al. 2015;Chen et al. 2018) and to be associated with ear shape in dogs in several studies (Boyko et al. 2010;Vaysse et al. 2011;Webster et al. 2015). In contrast, a study investigating ear area in Duolang sheep did not find associations with MSRB3 which is probably due to study design differences such as a single-breed, lower-density (~50K) SNP GWA with ear area as the phenotype in their study (Gao et al. 2018) vs. a multibreed approximately 600K SNP GWA with ear length as the phenotype in the current study. Ear shape and size are important characteristics for breed identification in sheep and could influence thermoregulation. This gene is approximately 350 kbp upstream of HMGA2, a known gene influencing size in horses and dogs (Sutter et al. 2007;Makvandi-Nejad et al. 2012). This region was also identified in the PC1 and withers height GWA, indicating the region is probably pleiotropic; further study is needed to identify the individual effects of each gene within the region on each of these size measures.
We attempted to work with flocks that collected mature and lamb weight records to perform direct associations between linear measurements, mature weight and growth rates. However, too few flocks had those data readily available. Future directions should involve collecting growth weight data, feed intake and mature size to identify the efficiency of animals rather than relying on single measures. For example, the fastest growing lamb may have the largest rate of gain because it consumed more feed and not because it is genetically more efficient, assuming that the lambs being compared are at the same stage of growth. Selecting for efficiency, rather than just growth, will probably optimize the ideal mature body size for a commercial ewe.
Currently, sheep selection indexes in the US do not place a significant negative emphasis on mature body size, focusing instead on increased weaning and post-weaning growth, lower fiber diameter and/or more lambs weaned. This singular focus on faster early growth is likely to contribute to US sheep increasing to an unsustainable mature body size, affecting management facilities, maintenance costs, processing facilities and ease of handling. Recording adult size in sheep flocks would provide a more precise estimate of mature weights and size across sheep breeds and flocks in the US. Developing automated phenotypic collection for mature body size and weight would probably encourage more frequent and accurate recording across flocks compared with individual measures taken with a tape measure. Genomic selection and/or marker-assisted selection should be utilized as possible tools to prevent or limit the consequences arising from increased mature body size, given its polygenic nature in sheep.
Supporting information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Table S1 List of associated (Bonferroni P-value <0.05) regions for PC1: regions and genes within a 1 Mb window of the associated SNPs.
Table S2 List of associated (Bonferroni P-value <0.05) regions for withers height and genes within a 1 Mb window of the associated SNP.
Appendix S1 Brief description of the 29 linear body measures collected on each ewe. | 2020-10-22T18:55:57.639Z | 2020-10-21T00:00:00.000 | {
"year": 2020,
"sha1": "0ecc39117154e24d338026a3de295f477f61643e",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/age.13016",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d4dec5ede1f870b322839fd0a51441bac545797",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
238038319 | pes2o/s2orc | v3-fos-license | Helicobacter Pylori Infection: A Hypothetical Balance between Improvement of Coronary Blood Flow and Permissive Pathophysiologic Promotion of Chronic Inflammatory Process of Atherosclerotic Plaque: A Case Report
Helicobacter pylori infection, in patients with classic risk factors for coronary heart disease, can present with typical clinical manifestations of ischemic heart disease without characteristic electrocardiographic changes or elevation of serum cardiac biomarkers. H. pylori does not participate in the initiation of the inflammatory process of the coronary atherosclerotic plaque. Instead, as a commensal bacterium, it increases coronary blood flow even in the presence of classic risk factors for coronary heart disease. Under certain circumstances, Helicobacter pylori may aggravate an already existing inflammatory atherosclerotic process of the coronary vasculature, with consequent rupture and development of ischemic heart disease. This is analogous to the physiologic role of platelets in the primary haemostatic plug versus their undesired role in the formation of vascular thrombosis. The factors that lead to the shift of Helicobacter pylori from a commensal promoter of coronary blood flow to a pathogenic organism activating the inflammatory response of the atherosclerotic plaque remain to be elucidated.
INTRODUCTION
Marshall and Warren [1] were the two Australian researchers who discovered the bacterium Helicobacter pylori (H. pylori) and deciphered its role in gastritis and peptic ulcer disease. H. pylori, a microaerophilic Gram-negative bacterium belonging to the genus Helicobacter, is found mainly in the gastrointestinal tract of human beings. It infects about 50% of the world population, of whom about 10% develop peptic ulcers and around 10% develop gastric cancer [2]. H. pylori plays an important role in the chronic inflammatory process underlying the pathogenesis of gastrointestinal diseases [3]. H. pylori can form a biofilm on gastric epithelial cells, which contributes to adapting to the changing environment of the gastric mucosa, helping it survive longer and withstand the immune system [4]. Numerous diagnostic methods exist to detect infection, including endoscopic and non-endoscopic approaches; the techniques used may be direct (culture, microscopic demonstration) or indirect (urease test, stool culture, PCR) [5]. Different treatment regimens are used, which include first-line therapy (concomitant therapy and hybrid therapy), second-line therapy (bismuth-containing quadruple therapy or levofloxacin-containing therapy) and third-line therapy (culture-guided therapy) [6]. Because of its proinflammatory nature, H. pylori infection has been associated with a wide range of extra-gastrointestinal diseases [7]. These include cardiovascular diseases; respiratory tract disorders like laryngeal and lung cancer; dermatological disorders like chronic urticaria; and hematological disorders like immune thrombocytopenic purpura, Henoch-Schonlein purpura, iron deficiency anemia and cobalamin deficiency anemia [8]. Coronary heart disease (CHD) is the term given to heart problems caused by narrowed coronary arteries that supply blood to the heart muscle. Although the narrowing can be caused by a blood clot or by constriction of the blood vessel, most often it is caused by buildup of atherosclerotic plaque [9]. It is the most common of the cardiovascular diseases [10]. Types of CHD include stable angina, unstable angina, myocardial infarction and sudden cardiac death [11]. A common symptom is chest pain or discomfort which may radiate into the shoulder, arm, back, neck or jaw. Occasionally the pain may be felt like heartburn. Shortness of breath may occur, and sometimes no symptoms are present [12].
CASE PRESENTATION
A 55-year-old man presented who was significantly overweight and had been diabetic and hypertensive for the last five years. He was on oral hypoglycemic and angiotensin-converting enzyme inhibitor agents. He had a 30-year history of smoking 2 packs of cigarettes per day.
On the day of his presentation, he was not feeling well, had a sensation of being about to faint on assuming a standing position, and complained of chest pain and heartburn. He awakened at 3:00 A.M. with shortness of breath, crushing pressure in his chest, and pain radiating down his left arm. He was nauseated and sweating profusely. He had no history of similar attacks. He had a positive family history of cardiovascular disease and hyperlipidemia.
In the emergency room, his pulse rate was 130 beats per minute; respiratory rate, 30 breaths per minute; core body temperature, 37.3°C; arterial blood pressure, 150/100 mm Hg. He had chest wheezing. His percentage saturation of hemoglobin with oxygen was 97%. His ejection fraction, measured with two-dimensional echocardiography, was 0.60. Sequential electrocardiograms did not show evidence of ischemic heart disease, and serum levels of cardiac enzymes were not elevated.
The 13C-urea breath test was positive for H. pylori infection. The patient was treated with a proton pump inhibitor simultaneously with two antibiotics: amoxicillin and clarithromycin.
DISCUSSION
Mendall and co-workers [13] showed for the first time that CHD patients have elevated levels of serum anti-H. pylori antibodies. Following this finding, some authors have confirmed and others have excluded the existence of this connection. There is still no universal agreement on the role of H. pylori in either the causation or the progression of CHD [14,15].
On the basis of the extra-gastrointestinal involvement of H. pylori [16,17], we propose that H. pylori produces some mediators, the nature of which is unknown, that reach the cardiopulmonary system. Among the organs acted upon by these mediators are the specialized conducting system of the heart, the ventricular muscle proper, the coronary vasculature and the pulmonary stretch receptors. The chronic long-term inflammatory process in the gastric epithelium [18,19] disseminates to these cardiopulmonary organs. The mediators induce a reduction in the threshold potential of cardiac nociceptive receptors [20] or potentiation of nociceptive pathways, which elicits pain typical of ischemic heart disease. The pain is often described as a discomfort that is neither sharp nor stabbing, and it does not vary significantly with inspiration [21]. H. pylori mediators interact with the endothelium of the pulmonary microcirculation [22]. We propose that this interaction induces changes in pulmonary capillary hemodynamics, which result in increased pulmonary capillary hydrostatic pressure, pulmonary congestion and pulmonary interstitial edema. Juxtacapillary receptors are pulmonary stretch receptors located in the alveolar walls close to pulmonary capillaries. These receptors are endings of non-myelinated C fibers, which carry afferent signals to medullary respiratory neurons.
Laboratory chemical stimulation of these receptors results in shallow, rapid breathing, or apnea if stimulation is intense. There is evidence that pulmonary congestion and interstitial edema stimulate these receptors [23]. This may explain the sensation of shortness of breath in this case. H. pylori has been isolated from tracheal secretions of intubated patients [24] and produces inflammatory cytokines [25]. These cytokines stimulate irritant receptors located between epithelial cells in the large airways. Impulses are carried by myelinated vagal fibers, and reflex effects include coughing, bronchoconstriction, mucus secretion and hyperpnea [23]. This explains the sensation of chest tightness and the wheezes in our case.
H. pylori mediators increase the work of the heart through stimulation of the sympathetic nervous system. This is probably achieved by two mechanisms: decreased sensitivity of baroreceptors [20] and increased activity of excitatory sympathetic afferents [26]. We propose that H. pylori mediators trigger both of these mechanisms. This results in increased heart rate (positive chronotropic effect), increased stroke volume (positive inotropic effect) and, consequently, increased cardiac output [27]. Sympathetic stimulation of the systemic veins leads to increased central venous pressure (venous return), with a further increase in the force of myocardial contraction in accordance with the Frank-Starling law of the heart [28]. Atrial stretch receptors are also stimulated by the increased central venous pressure and send impulses through the vagus nerve to excite the medullary inspiratory neurons. This helps to oxygenate the extra amount of blood reaching the lungs from the right ventricle (Harrison's reflex) [29].
This explains the normal percentage saturation of hemoglobin with oxygen in our case. Atrial stretch receptors, in addition, send impulses to the medullary cardiac centers, which results in a positive chronotropic effect. This helps to increase cardiac output and prevent accumulation of blood in the right side of the circulation (Bainbridge reflex) [30,31]. Cardiac muscle has a high oxygen extraction ratio under resting conditions (75%), which remains stable over a wide range of myocardial workloads [32]. Thus, during stress conditions (exercise, for example), the only way for the cardiac muscle to increase its oxygen supply is to increase its coronary blood flow (CBF). The increased cardiac work is associated with accumulation of metabolites. Metabolites have been suggested as mediators of coronary vasodilatation, with a consequent decrease in the oxygen demand to oxygen supply ratio [33]. This is an appropriate physiologic response of a normal coronary vasculature to the accumulation of cardiac metabolites. More than 90% of persons with myocardial ischemia have advanced coronary atherosclerosis [34]. An atherosclerotic plaque, with exposure to circulating elements of blood as well as endothelial dysfunction, is the major trigger of coronary thrombosis [35,36].
Multiple studies have demonstrated the involvement of H. pylori infection in the inflammatory process of the atherosclerotic plaque [37]. CHD occurs due to endothelial dysfunction within the vessels, accompanied by remodeling of the vascular wall, local inflammation, platelet aggregation and blood clotting. These disorders promote formation of an atheromatous plaque, which is often unstable and subsequently ruptures. This might impair the blood flow, leading to vascular blockage or myocardial infarction [37,38].
The ensuing deficiency of myocardial oxygen supply is translated into clinical ischemic manifestations of ischemic heart disease, typical electrocardiographic changes and elevation of serum cardiac biomarkers [39,40]. Typical electrocardiographic changes include prolongation of the Q wave (necrosis), elevation of the ST segment (injury) and T wave inversion (ischemia) [39]. Serum cardiac biomarkers include cardiac myocyte troponins, myoglobin and intracellular enzymes that can be released into the blood as a result of myocardial death [40]. CBF will be increased, and oxygen supply will be sufficient to meet the metabolic demands of the heart despite such an increase in cardiac work, as long as the coronary vasculature is intact (nonsclerotic, for example). Studies of the effect of eradication of H. pylori infection on the atherosclerotic plaque process in patients with CHD have revealed conflicting results. Some authors reported that eradication of H. pylori infection is associated with an attenuation of the inflammatory response of the atherosclerotic plaque [41,42], while others did not [43,44].
We propose that H. pylori does not participate in the initiation of the inflammatory process of the atherosclerotic plaque; it has a permissive effect on this process. That is why neither the electrocardiographic changes nor the cardiac biochemical markers could be demonstrated in this case. H. pylori may promote the persistence or the progression of the atherosclerotic plaque.
We propose that H. pylori increases CBF. This can be achieved by a direct effect of mediators produced by H. pylori on the coronary vasculature, or by the vasodilator effect of metabolites produced as a result of increased cardiac work. However, under certain circumstances, H. pylori may promote the inflammatory process of the atherosclerotic plaque, with consequent clinical manifestations of ischemia, characteristic ECG changes and elevation of serum cardiac biomarkers. The circumstances that lead to the shift of H. pylori from a commensal promoter of CBF to a pathogenic organism activating the inflammatory response of the atherosclerotic plaque remain to be elucidated. From the physiologic point of view, this proposal is analogous to the haemostatic role of platelets in providing a procoagulant function versus their undesired role in the induction of thrombosis.
When a normal blood vessel is injured, the endothelial surface becomes disrupted and the thrombogenic connective tissue is exposed. Formation of the primary haemostatic plug is the first line of defense against bleeding; it is the function of circulating platelets. While the primary haemostatic plug forms, the exposure of subendothelial tissue factors triggers the plasma coagulation cascade, initiating the process of secondary haemostasis, which ultimately forms a thrombus (fibrin clot) by the action of thrombin. This clot stabilizes and strengthens the primary platelet plug. The normal haemostatic system minimizes blood loss from injured vessels, but there is little difference between this physiologic response and the pathologic process of coronary thrombosis triggered by disruption of atherosclerotic plaques [45].
In conclusion, H. pylori promotes CBF even in the presence of classic risk factors for CHD. Under certain circumstances, H. pylori may aggravate an already existing inflammatory atherosclerotic process of the coronary vasculature, with consequent rupture and development of ischemic heart disease. Further studies are needed to address the role of H. pylori in the pathogenesis of haematological and cardiovascular diseases. | 2021-02-03T23:16:00.577Z | 2021-01-29T00:00:00.000 | {
"year": 2021,
"sha1": "64ea6634044e3273ba1433f5b9286ad3b1271f25",
"oa_license": null,
"oa_url": "https://doi.org/10.36348/sjbr.2021.v06i01.002",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "64ea6634044e3273ba1433f5b9286ad3b1271f25",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2864455 | pes2o/s2orc | v3-fos-license | Comparative Performance of Tabu Search and Simulated Annealing Heuristics for the Quadratic Assignment Problem
For almost two decades the question of whether tabu search (TS) or simulated annealing (SA) performs better for the quadratic assignment problem has been unresolved. To answer this question satisfactorily, we compare performance at various values of targeted solution quality, running each heuristic at its optimal number of iterations for each target. We find that for a number of varied problem instances, SA performs better for higher quality targets while TS performs better for lower quality targets.
Introduction
The quadratic assignment problem (QAP) is a combinatorial optimization problem first introduced by Koopmans and Beckman [12]. It is NP-hard and is considered to be one of the most difficult problems to solve optimally. The problem is defined in the following context: a set of N facilities are to be located at N locations. The distance between locations i and j is $D_{i,j}$ and the quantity of materials which flow between facilities i and j is $F_{i,j}$. The problem is to assign to each location a single facility so as to minimize the cost $$C = \sum_{i=1}^{N} \sum_{j=1}^{N} F_{i,j}\, D_{p(i),p(j)},$$ where p(i) represents the location to which facility i is assigned.
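For concreteness, the objective can be evaluated directly from the flow and distance matrices; the sketch below does exactly that for a toy 3 × 3 instance (the matrices are made up, not a QAPLIB instance).

```python
# Direct evaluation of the QAP objective defined above:
# C(p) = sum_{i,j} F[i][j] * D[p[i]][p[j]], where p[i] is the location
# assigned to facility i. Matrices here are small illustrative stand-ins.
import numpy as np

def qap_cost(F, D, p):
    """Total cost of assignment p (facility i -> location p[i])."""
    return float(np.sum(F * D[np.ix_(p, p)]))   # D[p[i], p[j]] for all i, j

F = np.array([[0, 3, 1],
              [3, 0, 2],
              [1, 2, 0]])                        # flows between facilities
D = np.array([[0, 5, 9],
              [5, 0, 4],
              [9, 4, 0]])                        # distances between locations
print(qap_cost(F, D, np.array([0, 1, 2])))       # identity assignment
print(qap_cost(F, D, np.array([2, 0, 1])))       # a permuted assignment
```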
There is an extensive literature which addresses the QAP and is reviewed in [19,7,2,11,13]. With the exception of specially constructed cases, optimal algorithms have solved only relatively small instances (N ≤ 36). Various heuristic approaches have been developed and applied to problems typically of size N ≈ 100 or less. Two of the most successful heuristics to date for the QAP are tabu search (TS) and simulated annealing (SA). They are basic heuristics which are used alone or as components in hybrid and iterative metaheuristics.
Comparisons of the performance of SA and TS for the QAP have been inconclusive. In this work, we are able to successfully characterize the relative performance of these heuristics by performing the comparisons for various values of solution quality and by setting the number of iterations for each heuristic to the optimal one for the target solution quality.
As is common practice, we define the quality Q of a solution as $$Q = \frac{C - C_{best}}{C_{best}},$$ where C is the value of the objective function for the solution and $C_{best}$ is the best known value of the objective function for the instance. The lower the value of Q, the higher the quality. We find that for each problem instance, there is a value of Q, $Q^*$, above which (lower quality) tabu search performs better (requires less time) than simulated annealing and below which (higher quality) simulated annealing performs better.
Background
The tabu search heuristic for the quadratic assignment problem consists of repeatedly swapping locations of two nodes. A single iteration of the heuristic consists of making the swap which most decreases the total cost. Under certain conditions, if a move which lowers the cost is not available, a move which raises the cost is made. To ensure that cycles of the same moves are avoided, the same move is forbidden (taboo) until a specified later iteration; we call this later iteration the eligible iteration for a given move. This eligible iteration is traditionally stored in a tabu list or tabu table. The process is repeated for a specified number of iterations.
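The sketch below illustrates the move loop just described: at each iteration the best-cost swap of two facilities is taken, reversing a swap is forbidden until its eligible iteration, and a tabu move is allowed only if it improves on the best cost found (a simple aspiration rule). It is a toy illustration with an O(n²) cost recomputation per candidate move, not Taillard's robust tabu search; the tenure randomization and all parameter values are assumptions.

```python
# Toy sketch of the tabu search move loop for the QAP described above.
import itertools
import numpy as np

def tabu_search(F, D, p0, n_iters=500, tenure=10, seed=3):
    rng = np.random.default_rng(seed)
    p = np.array(p0)
    n = len(p)
    cost = float(np.sum(F * D[np.ix_(p, p)]))
    best_p, best_cost = p.copy(), cost
    eligible = np.zeros((n, n))          # iteration when swap (i, j) is allowed again
    for it in range(n_iters):
        best_move, best_delta = None, float("inf")
        for i, j in itertools.combinations(range(n), 2):
            q = p.copy()
            q[i], q[j] = q[j], q[i]
            # O(n^2) recomputation for clarity; real codes keep O(1) delta updates
            delta = float(np.sum(F * D[np.ix_(q, q)])) - cost
            tabu = it < eligible[i, j]
            aspirated = cost + delta < best_cost   # tabu move allowed if it beats the best
            if (not tabu or aspirated) and delta < best_delta:
                best_move, best_delta = (i, j), delta
        if best_move is None:            # every move tabu and none aspirated
            continue
        i, j = best_move
        p[i], p[j] = p[j], p[i]
        cost += best_delta
        eligible[i, j] = it + tenure + rng.integers(0, 3)   # randomized tenure
        if cost < best_cost:
            best_p, best_cost = p.copy(), cost
    return best_p, best_cost
```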
The simulated annealing heuristic also consists of swapping locations of two facilities. In the simulated annealing approach used here [8], each possible swap is considered in turn and δ, the change in cost for the potential swap, is calculated. The swap is made if δ is negative or if $e^{-\delta/T} > r$, where T is an analog of temperature in physical systems that is slowly decreased according to a specified cooling schedule after each iteration and r is a uniformly distributed random variable between 0 and 1. Randomly making moves which increase the cost is done to help escape from local minima.
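A matching sketch of this acceptance rule follows: swaps with δ < 0 are always taken, and uphill swaps are taken with probability e^{−δ/T}. Candidate swaps here are drawn at random rather than swept systematically, and the geometric cooling schedule is an assumption; this is not the cooling schedule of the implementation in [8].

```python
# Toy sketch of simulated annealing for the QAP with the Metropolis rule.
import math
import random
import numpy as np

def simulated_annealing(F, D, p0, T0=100.0, alpha=0.999, n_iters=20000, seed=4):
    rng = random.Random(seed)
    p = np.array(p0)
    n = len(p)
    cost = float(np.sum(F * D[np.ix_(p, p)]))
    best_p, best_cost, T = p.copy(), cost, T0
    for _ in range(n_iters):
        i, j = rng.sample(range(n), 2)        # random candidate swap of two facilities
        q = p.copy()
        q[i], q[j] = q[j], q[i]
        delta = float(np.sum(F * D[np.ix_(q, q)])) - cost
        # Always accept improvements; accept uphill moves with prob exp(-delta/T)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            p, cost = q, cost + delta
            if cost < best_cost:
                best_p, best_cost = p.copy(), cost
        T *= alpha                             # illustrative geometric cooling
    return best_p, best_cost
```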
Pardalos [19] compared the performance of four algorithms including simulated annealing and tabu search and found that "all of these approaches have almost the same performance". Paulli [21] compared simulated annealing and tabu search and found that "when CPU time is taken into consideration, simulated annealing is clearly preferable to tabu search". On the other hand, [5] finds that "RTS (Reactive Tabu Search) needs less CPU time than SA to reach average results in the 1% [of the best known value] region". In 1998, summarizing the situation, Cela [7] commented that "There is no general agreement concerning the comparison of the performance of simulated annealing approaches with that of tabu search approaches for the QAP". We are not aware of any later work which has clarified the issue.
Approach
We address the question of whether tabu search or simulated annealing performs better for the quadratic assignment problem by recognizing that the answer depends on the desired solution quality and by:
• defining a performance metric that ensures a fair comparison of different heuristics,
• determining the optimal number of iterations for a given target quality for TS and SA for each problem instance; for a fair comparison of heuristics, it is critical to run each heuristic at its optimal number of iterations for a given target solution quality.
• measuring the performance of TS and SA at multiple target qualities.
Performance Metric
To fairly compare heuristics, solution quality and time must be taken into account. Simulated annealing and tabu search are multi-start heuristics; many runs of the heuristic are executed, each with a different random starting configuration. A commonly used performance metric for multi-start heuristics is the percentage of these runs which attain a specified value of the quality Q (typically 0.01). However, this metric does not take run time into account. Sometimes the run times for individual runs of the heuristics are constrained to be equal, but this is problematic because, as we show below, for a fair comparison each heuristic should be run at the optimal number of iterations for the quality goal Q. One method of characterizing the performance of multi-start heuristics with different run times employs run-time distributions of the times needed across multiple runs to achieve a certain quality goal (see e.g. [23,1]). Instead of using distributions, we define the performance metric $\bar{T}(Q, I)$ as the average time to attain a quality goal of Q during a set of runs, each run with I iterations: $$\bar{T}(Q, I) = \frac{\sum_i t_i}{N(Q, I)},$$ where $t_i$ is the CPU time for run i and N(Q, I) is the number of runs which attain a quality goal of Q or better.
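As a concrete reading of this definition, the helper below charges the CPU time of every run in a batch (successful or not) and divides by the number of runs that reached the goal Q; the run records are invented for illustration.

```python
# Sketch of the performance metric: total CPU time over a batch of runs
# divided by the number of runs that reached the quality goal Q. Run
# records here are (cpu_seconds, quality) tuples with made-up values.
def avg_time_to_quality(runs, Q):
    """T-bar(Q, I) for one batch of runs at a fixed iteration count I."""
    total_time = sum(t for t, _ in runs)
    n_success = sum(1 for _, q in runs if q <= Q)
    return total_time / n_success if n_success else float("inf")

runs = [(1.2, 0.004), (1.1, 0.013), (1.3, 0.008), (1.2, 0.021)]  # toy batch
print(avg_time_to_quality(runs, Q=0.01))   # 4.8 s total / 2 successes = 2.4 s
```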
Because one heuristic may perform better depending on the quality goal, we calculate this performance metric not just for a single quality goal (e.g. 0.01) but for a range of quality goals.
Numerical Results
We use C++ implementations of SA and TS in the public domain to perform our computational experiments. Both implementations are by Taillard and are available at http://mistic.heig-vd.ch/taillard/. The TS code implements the robust tabu search of [24]; the SA code implements the simulated annealing heuristic of [8]. Both implementations are straightforward and a few pages each in length. We run the TS heuristic with parameter settings as described in [24]: tabu list size between 0.9N and 1.1N and aspiration function parameter equal to $2N^2$; there are no settable parameters for the SA implementation.
Determination of Optimal Number of Iterations
Given a fixed time in which a heuristic can be executed, there is a tradeoff between the number of iterations per run and the number of runs which can be performed. The optimal number of iterations per run to reach a quality goal of Q, $I_{opt}(Q)$, is the value of I which minimizes $\bar{T}(Q, I)$. We determine $I_{opt}(Q)$ as follows: for various values of I, $I_i$, we run each heuristic multiple times and calculate $\bar{T}(Q, I_i)$. Then $$\bar{T}(Q) \equiv \bar{T}(Q, I_{opt}(Q)) = \min_{I_i} \bar{T}(Q, I_i).$$ Thus $\bar{T}(Q)$ is the value of the performance metric when the heuristic is run at $I_{opt}(Q)$ iterations. In Fig. 1(a), using the Tai100a problem instance from QAPLIB [6] as an example, we illustrate the process of finding the optimal number of simulated annealing iterations for Q = 0.02, 0.01, and 0.006. The optimal number of iterations, $I_{opt}$, for each value of Q is the well-defined minimum value of $\bar{T}$ for each plot. For a given value of Q, we note the large variation in $\bar{T}$. We also note the large variation in $I_{opt}$ for the different values of Q. Thus, choosing a non-optimal number of iterations (e.g. a single value for the number of iterations for different Q) will result in an unfair characterization of the performance of the heuristic. Similarly, Fig. 1(b) illustrates the process of finding the optimal number of tabu iterations for the instance Tai100a for Q = 0.02, 0.015, 0.01 and 0.009.
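The scan itself is then a minimization over a grid of iteration counts. In the sketch below, run_heuristic is a hypothetical hook standing in for one run of SA or TS at I iterations, returning a (CPU seconds, quality) pair; the toy stand-in and grid values are assumptions, not part of either reference implementation.

```python
# Sketch of the I_opt(Q) scan: evaluate T-bar(Q, I) over a grid of
# iteration counts and keep the minimizer.
import random

def find_optimal_iterations(run_heuristic, Q, iteration_grid, runs_per_setting=100):
    """Return (I_opt(Q), T-bar(Q)) over the given grid of iteration counts."""
    def t_bar(batch):
        n_success = sum(1 for _, q in batch if q <= Q)
        return sum(t for t, _ in batch) / n_success if n_success else float("inf")
    results = {I: t_bar([run_heuristic(I) for _ in range(runs_per_setting)])
               for I in iteration_grid}
    I_opt = min(results, key=results.get)
    return I_opt, results[I_opt]

def toy_run(I, _rng=random.Random(5)):
    # Hypothetical stand-in for one SA/TS run: time grows linearly with I,
    # and longer runs tend to reach better (lower) quality.
    return 1e-4 * I, max(0.0, _rng.gauss(0.02 - 1e-6 * I, 0.005))

I_opt, t_bar_opt = find_optimal_iterations(toy_run, Q=0.01,
                                           iteration_grid=[2000, 5000, 10000, 20000])
print(f"I_opt = {I_opt}, T-bar(Q=0.01) = {t_bar_opt:.3f} s")
```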
In Fig. 1(c), we plot $I_{opt}$ versus Q for SA and TS. For TS, $I_{opt}$ increases as Q decreases but does not increase below Q ≈ 0.01. We infer that for TS there is no benefit to increasing the number of iterations below this point; any improvements in quality are gained by running more random starting configurations. On the other hand, SA benefits from increasing the number of iterations as Q is decreased over the complete range of Q studied. The subject of the optimal number of iterations for the quality goal Q = 0 for simulated annealing is treated analytically in [4].
Performance Comparison of SA and TS
We perform computational experiments on the following problem instances from QAPLIB [6] representing a range of problem difficulty, type and size.
• Tai100a [24] is a totally unstructured instance consisting of random distance and flow matrices.
• lipa90a [14] is a generated problem instance with a known optimal solution.
• dre110 [9] is a structured instance consisting of a "grid" flow matrix with non-zero entries for nearest neighbors only. It is part of a series of instances that are specifically designed to be difficult for heuristics.
By plotting $\bar{T}(Q)$ for each heuristic we can compare the performance of the heuristics when they are run with the optimal number of iterations. Fig. 2 plots $\bar{T}(Q)$ versus Q for the instances studied. Despite differences in detail, they all share the characteristic that for each problem instance there is a value of Q, $Q^*$, above which (lower quality) tabu search performs better (requires less time) than simulated annealing and below which (higher quality) simulated annealing performs better. SA achieves the lowest known costs for all but the Tai100a instance. TS achieves the lowest known cost for three of the six instances.
In Table 1 we list the values of $Q^*$ for each of the instances studied. Note that if only the value Q = 0.01 were considered, the conclusion would simply be that SA is better for some instances and TS for others. This explains why earlier studies of relative performance were not able to draw clear conclusions.
Hardness of Problem Instances
To compare the relative hardness of the problem instances studied, in Fig. 3 we plot T̄ versus Q for all problem instances in a single panel. The relative hardness of the instances at a given solution quality is given by the relative value of T̄ at that quality. Comparing this figure with Table 1, note that Q* appears to be correlated with the hardness of the problem. With the exception of Tai100a, the harder the problem, the higher the value of Q* and thus the wider the range of Q in which SA performs better than TS.
Discussion
How do we explain our finding that, for each problem instance studied, there is a value of the quality Q, Q*, above which TS performs better than SA and below which SA performs better? A possible qualitative explanation is that TS essentially uses a steepest-descent method to quickly find an initial local minimum, while SA finds local minima in a more random way, sometimes making moves which increase the total cost even when cost-reducing moves are available. Hence for high Q, TS performs better. Once a local minimum is found, however, SA is better able to escape and find a lower minimum. In contrast to TS, to attain better solution quality with SA it is always better to perform fewer runs with a higher number of iterations each.
Areas for future research might address the following questions:
• Is similar behavior observed when comparing SA and TS applied to other combinatorially complex problems?
• When optimal numbers of iterations are used for SA and TS within such hybrid heuristics as hybrid genetic search, is the performance of the hybrid heuristic improved?
• How does the performance of other heuristics (e.g. hybrid, iterated, ANT) compare when taking solution quality into account?
• How are our findings changed if variants of TS are used? Can SA be modified to also outperform TS at high values of Q?
"year": 2010,
"sha1": "e0ae631fcf3badc815230a34df069c9b34b2ecea",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1010.0157",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e0ae631fcf3badc815230a34df069c9b34b2ecea",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Hadron-Hadron Interactions in Coulomb Gauge QCD
Introduction
The experimental observation of J/ψ suppression in ultrarelativistic heavy-ion collisions by NA38 [1] and more recently the anomalous J/ψ suppression in Pb+Pb collisions observed by NA50 [2] have attracted much attention as a possible signal for a quark-gluon plasma (QGP) [3].
Such a suppression can be described by phenomenological models either in a QGP [4] or in a hadronic scenario [5]. Several theoretical studies have been described [6] and the subject is still controversial. In this light, microscopic approaches that allow one to consistently treat hadron-hadron interactions in terms of the underlying quark-gluon structure would provide a useful tool for understanding this issue.
In a previous work [7] we described a field-theoretical method known as the Fock-Tani (FT) representation, used to derive an effective Hamiltonian involving explicit hadron degrees of freedom, and its application to the study of hadron interactions using a nonrelativistic microscopic quark model. In this paper we consider the extension of the method to a microscopic relativistic quark model formulated in the context of Coulomb gauge QCD, which consistently combines chiral symmetry and color confinement [8]-[10]. Our aim is to set up an effective calculational scheme to comprehensively investigate hadronic structure and interactions, such as charmonium suppression.
Coulomb Gauge QCD
The canonical QCD Hamiltonian in the Coulomb gauge, ∇·A = 0, can be written in the standard form of Refs. [11]-[13], where m_q is the current quark mass and D is the covariant derivative in the adjoint representation.
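The equations themselves were lost in extraction. As a hedged reconstruction, the canonical Coulomb-gauge Hamiltonian of the cited literature is usually written as below, with J = det(−∇·D) the Faddeev-Popov determinant; precise conventions (signs, factors of J) may differ from the original paper:

```latex
H = \int d^3x\, \psi^\dagger(\boldsymbol{x})\left(-i\,\boldsymbol{\alpha}\cdot\boldsymbol{\nabla} + \beta\, m_q\right)\psi(\boldsymbol{x})
  + \frac{1}{2}\int d^3x \left(\mathcal{J}^{-1}\boldsymbol{\Pi}^a\cdot\mathcal{J}\,\boldsymbol{\Pi}^a
  + \boldsymbol{B}^a\cdot\boldsymbol{B}^a\right) + H_C ,
\qquad
H_C = \frac{1}{2}\int d^3x\, d^3y\;
      \mathcal{J}^{-1}\rho^a(\boldsymbol{x})\,\mathcal{J}\,
      K^{ab}(\boldsymbol{x},\boldsymbol{y};\boldsymbol{A})\,\rho^b(\boldsymbol{y}),
\qquad
\boldsymbol{D}^{ab} = \delta^{ab}\,\boldsymbol{\nabla} - g f^{abc}\,\boldsymbol{A}^c .
```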
The term K^{ab} is the non-Abelian Coulomb kernel, and ρ^a is the full color charge density, containing quark and gluon contributions. Note that in the Abelian limit, D → ∇, the QED Coulomb interaction is recovered.
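Again the explicit expressions did not survive extraction; in the Coulomb-gauge literature cited here they conventionally read as follows (a hedged reconstruction, with T^a the color generators and f^{abc} the structure constants):

```latex
K^{ab}(\boldsymbol{x},\boldsymbol{y};\boldsymbol{A})
 = \Big\langle \boldsymbol{x},a \,\Big|\,
   \frac{g}{\boldsymbol{\nabla}\cdot\boldsymbol{D}}
   \left(-\boldsymbol{\nabla}^2\right)
   \frac{g}{\boldsymbol{\nabla}\cdot\boldsymbol{D}}
   \,\Big|\, \boldsymbol{y},b \Big\rangle ,
\qquad
\rho^a(\boldsymbol{x}) = \psi^\dagger(\boldsymbol{x})\,T^a\,\psi(\boldsymbol{x})
 + f^{abc}\,\boldsymbol{A}^b(\boldsymbol{x})\cdot\boldsymbol{\Pi}^c(\boldsymbol{x}) .
```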
The dynamical degrees of freedom are the transverse gauge fields A^a, the transverse conjugate gluon momenta Π^a and the quark field ψ.
The key features of the Coulomb gauge are [14]: a) The elimination of non-dynamical degrees of freedom creates a long-range instantaneous non-Abelian Coulomb interaction, which provides a confinement scenario: infrared divergences make colored states infinitely heavy, removing them from the physical spectrum; color-neutral states, on the other hand, remain physical. b) The absence of spurious degrees of freedom yields Fock states with positive normalizations. This is essential to build nonperturbative models for the QCD vacuum and a quasiparticle basis of constituent quarks and gluons.
Quark Model with Chiral Symmetry Breaking
The starting point of our model is an approximate QCD Hamiltonian in the Coulomb gauge, in which we use an effective Coulomb kernel K^{ab}(x, y; A) → V(x − y) as obtained in Ref. [14]. This is obtained by setting J → 1 and neglecting quarks in Eq. (3), i.e., ρ^a_q(x) = 0. The derivation is based on a self-consistent method to construct a gluonic quasiparticle basis. The kernel can be interpreted as an effective interaction between two heavy quarks, and the results agree remarkably well with lattice computations [15]. At long distances the numerical results for V(x − y) are almost identical to V(x − y) = σ|x − y|; the potential is therefore infrared singular and in general needs a careful regularization when dealing with numerical simulations. In the present paper, for simplicity of explaining the model and the methods employed to construct an effective hadron-hadron interaction, we use a simpler form for V(x − y) (see below). However, it should be clear that the methods developed here do not depend on the specific choice of the kernel.
With such a kernel, the general form of the model Hamiltonian in the fermionic sector is H = Ĥ0 + ĤI, where Ĥ0 is the Hamiltonian density of the free Dirac field operator ψ(x) and ĤI is an effective instantaneous interaction term, Eq. (7). The next step consists in constructing an approximate new vacuum state for the Hamiltonian in the form of a pairing ansatz [8,10]. Let us first define a "trivial" vacuum |0⟩ through b⁰_{fsc}|0⟩ = d⁰_{fsc}|0⟩ = 0, where b⁰ and d⁰ are the bare quark and antiquark annihilation operators, in terms of which the quark field operator is expanded, Eq. (8), with color and flavor indices suppressed. Then, a nontrivial vacuum |0̃⟩ can be defined through a Bogoliubov-Valatin transformation (BVT) such that b|0̃⟩ = d|0̃⟩ = 0, where the dressed annihilation operators b and d are related to the bare operators b⁰ and d⁰ by the BVT. In terms of the dressed quark operators, the quark field operator can be expanded as in Eq. (9), with the quasiparticle spinors u_s, v_s given in terms of the bare spinors u⁰_s and v⁰_s as in Eq. (10) [8,10], where ϕ(p) is sometimes called the chiral angle and is determined by a gap equation (see below).
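Eqs. (8)-(10) were lost in extraction; in chirally broken quark models of this type they take the standard form below (a hedged reconstruction; normalization conventions may differ from the original):

```latex
\psi(\boldsymbol{x}) = \int \frac{d^3p}{(2\pi)^3} \sum_{s}
  \left[ u^0_s(\boldsymbol{p})\, b^0_{\boldsymbol{p}s}
       + v^0_s(-\boldsymbol{p})\, d^{0\dagger}_{-\boldsymbol{p}s} \right]
  e^{i\boldsymbol{p}\cdot\boldsymbol{x}}
  = \int \frac{d^3p}{(2\pi)^3} \sum_{s}
  \left[ u_s(\boldsymbol{p})\, b_{\boldsymbol{p}s}
       + v_s(-\boldsymbol{p})\, d^{\dagger}_{-\boldsymbol{p}s} \right]
  e^{i\boldsymbol{p}\cdot\boldsymbol{x}},
```

```latex
u_s(\boldsymbol{p}) = \frac{1}{\sqrt{2}}
  \left[ \sqrt{1+\sin\varphi(p)} + \sqrt{1-\sin\varphi(p)}\;
         \boldsymbol{\alpha}\cdot\hat{\boldsymbol{p}} \right] u^0_s(\boldsymbol{p}),
\qquad
v_s(\boldsymbol{p}) = \frac{1}{\sqrt{2}}
  \left[ \sqrt{1+\sin\varphi(p)} - \sqrt{1-\sin\varphi(p)}\;
         \boldsymbol{\alpha}\cdot\hat{\boldsymbol{p}} \right] v^0_s(\boldsymbol{p}).
```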
Normal ordering the Hamiltonian with respect to the new vacuum gives a sum of terms in which H_0 is a constant giving the energy of the new vacuum and E(p) is the energy of a free quark; the interaction gives ten different terms that are combinations of four basic vertices (with the color indices made explicit). Among these, the term Ĥ_{A2} is the anomalous, nondiagonal Bogoliubov term. In order to bring the single-quark Hamiltonian into diagonal form, one has to require Ĥ_{A2} = 0, which leads to the gap equation. It is useful to introduce a running quasiparticle quark mass, M(p), through the relations sin ϕ(p) = M(p)/E(p) and cos ϕ(p) = p/E(p), with E(p) = √(p² + M²(p)). One can identify an effective constituent quark mass as M_q = max[M(p)] and extract it from the low-momentum behavior of the chiral angle [16].
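The gap equation itself was lost in extraction; for a quark model of this class with an instantaneous kernel V it conventionally reads as follows (a hedged reconstruction following the conventions of Refs. [8,10]; color factors and signs depend on conventions):

```latex
p \sin\varphi(p) - m_q \cos\varphi(p)
 = \frac{2}{3}\int \frac{d^3q}{(2\pi)^3}\, V(\boldsymbol{p}-\boldsymbol{q})
   \left[ \sin\varphi(q)\cos\varphi(p)
        - \hat{\boldsymbol{p}}\cdot\hat{\boldsymbol{q}}\,
          \cos\varphi(q)\sin\varphi(p) \right].
```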
Effective Hadron-Hadron Hamiltonian
Effective hadron-hadron potentials in quark potential models have been obtained within several early approaches, such as adiabatic methods [17], the resonating group method [18], variational techniques [19] and the QBD formalism [20]. In this work we use the Fock-Tani (FT) formalism, which was developed independently by Girardeau [21] and Vorob'ev and Khomkin [22] in the context of atomic physics and has recently been extended to hadronic physics [7]. The method shares some similarities with Weinberg's quasi-particle approach [23].
In the following, we present the main features of the Fock-Tani formalism for the derivation of an effective meson-meson interaction. We start by specifying the microscopic Hamiltonian in Fock space (F), Eq. (18), in which T is the kinetic energy and V_qq, V_q̄q̄ and V_qq̄ are respectively the quark-quark, antiquark-antiquark and quark-antiquark interactions. The indices µ, ν, ... represent the spatial, color, spin and flavor quantum numbers of the quarks and antiquarks, and a summation over repeated indices is implied. The quark and antiquark operators obey the standard anticommutation relations, Eq. (19). A generic meson state in F, composed of a quark-antiquark pair, is denoted by |α⟩, where α represents the meson quantum numbers (c.m. momentum, internal energy, spin and flavor). Such a state can be written as |α⟩ = M†_α|0⟩, where M†_α is the meson creation operator, Φ^{µν}_α is the meson wave function and |0⟩ is the vacuum state, defined by q_µ|0⟩ = q̄_ν|0⟩ = 0. Using the quark anticommutation relations of Eq. (19) and the orthonormalization condition for the Φ's, one can show that the meson operators satisfy noncanonical commutation relations in which an extra term, built from the quark operators, manifests the composite nature of the mesons.
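The dropped expressions have the standard Fock-Tani form (a hedged reconstruction following the conventions of Ref. [7]):

```latex
M^\dagger_\alpha = \Phi^{\mu\nu}_\alpha\, q^\dagger_\mu \bar q^\dagger_\nu ,
\qquad
[M_\alpha, M^\dagger_\beta] = \delta_{\alpha\beta} - \Delta_{\alpha\beta},
\qquad
\Delta_{\alpha\beta} = \Phi^{*\mu\nu}_\alpha \Phi^{\mu'\nu}_\beta\, q^\dagger_{\mu'} q_\mu
                     + \Phi^{*\mu\nu}_\alpha \Phi^{\mu\nu'}_\beta\, \bar q^\dagger_{\nu'} \bar q_\nu .
```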
The change to the FT representation is implemented by means of a unitary transformation U, such that a single composite-meson state |α⟩ is transformed into a single ideal-meson state m†_α|0⟩, where m†_α and m_α are the ideal-meson creation and annihilation operators that satisfy canonical commutation relations, [m_α, m†_β] = δ_{αβ}. By definition, the m† and m commute with the quark and antiquark operators. In this way, within the FT representation one recovers the possibility of using traditional field-theoretic techniques such as Wick's theorem, Feynman diagrams, etc.
The operator U is constructed as a power series in the bound-state wave functions Φ. Once the operator U is known, one proceeds by transforming the original quark-model operators, such as currents and the Hamiltonian. This is accomplished by transforming initially the quark and antiquark operators and substituting these into the expressions of the quark-model operators. The explicit form of U and the derivation of the transformed quark and antiquark operators are discussed in detail in Ref. [7].
The transformed Hamiltonian has the structure H = H_q + H_m + H_{mq}. The quark Hamiltonian, H_q, has a structure identical to that of the microscopic quark Hamiltonian of Eq. (18), except that the term corresponding to the quark-antiquark interaction is modified such that it does not produce the quark-antiquark bound states. H_{mq} describes quark-meson processes such as meson breakup into a quark-antiquark pair. The term involving only ideal meson operators, H_m, has a component that represents an effective meson-meson interaction, in which the effective meson-meson potential V_mm is a sum of several terms involving H(µν; µ'ν') and the product of four wave functions corresponding to the initial and final meson states. Note that the effective meson Hamiltonian is model independent, in the sense that it depends only on the general forms of the microscopic quark Hamiltonian and of the meson states.
Ongoing Calculations
In order to illustrate the application of the framework with a simple example, we have calculated the scattering cross section for charmonium dissociation by inelastic scattering on ρ mesons, using the effective meson-meson Hamiltonian derived in Section 4 and the quark model Hamiltonian with chiral symmetry breaking described in Section 3. Our final aim is to perform the calculation using the potential derived from the gauge sector of the Coulomb gauge QCD Hamiltonian, as in Ref. [14]. However, such an interaction exhibits a strong singularity at q → 0 that needs to be regulated when performing a numerical integration. We are still in the process of regulating this numerical singularity (there is no real singularity, since the integrands are finite at q = 0). Thus, here we show results obtained using a Gaussian interaction. The J/ψ mesons are composites of a heavy quark and a heavy antiquark, denoted by (Q̄Q), and the ρ mesons are composites of a light quark and a light antiquark, denoted by (q̄q). The final mesons D, D̄ are composites of a (q̄Q) or a (Q̄q) pair and can be either in the ground state D, D̄ (1¹S₀) or in the excited state D*, D̄* (1³S₁). The creation operator for a composite meson is written in terms of the color, spin and flavor Clebsch-Gordan coefficients C_C, χ_S and F_F. For the spatial meson wave function we employ a Gaussian ansatz in the relative momentum. The total cross section for the reaction is a function of the center-of-mass energy and is obtained by summing over all possible final channels, σ_tot(s) = Σ_f σ_f(s). In Fig. 1 we show the cross sections for the reaction as a function of the relative kinetic energy of the J/ψ and the ρ in the center-of-mass system.
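As a purely numerical illustration of the channel sum σ_tot(s) = Σ_f σ_f(s) (not the paper's actual calculation; the per-channel parametrization, thresholds and strengths below are placeholder values), one might tabulate the total cross section as:

```python
import numpy as np

def sigma_channel(s, threshold, strength, width):
    """Toy per-channel cross section: zero below threshold, peaked above.
    All parameters are illustrative placeholders, not values from the paper."""
    e = np.sqrt(s) - threshold
    return np.where(e > 0, strength * e * np.exp(-(e / width) ** 2), 0.0)

def sigma_total(s, channels):
    """sigma_tot(s) = sum over final channels f of sigma_f(s)."""
    return sum(sigma_channel(s, *params) for params in channels)

# Hypothetical final channels D Dbar, D* Dbar, D Dbar*, D* Dbar*
# with assumed thresholds (GeV) and shape parameters.
channels = [(3.73, 1.0, 0.3), (3.88, 0.8, 0.3), (3.88, 0.8, 0.3), (4.02, 0.5, 0.3)]
s_values = np.linspace(3.7, 4.6, 10) ** 2
print(sigma_total(s_values, channels))
```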
"year": 2004,
"sha1": "1ab1447201e0b95a61f907df381f9fa41fd9a524",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/bjp/a/3yK8j974YDPLK6fkZJzs5Sb/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8abf169f7337038c7cdfc956fc857f8a7a216716",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Desferrioxamine reduces ultrahigh-molecular-weight polyethylene-induced osteolysis by restraining inflammatory osteoclastogenesis via heme oxygenase-1
Wear particle-induced osteolysis remains the leading cause of early implant loosening in endoprosthetic surgery, and the promotion of osteoclastogenesis by wear particles has been confirmed to be responsible for osteolysis. Therapeutic agents targeting osteoclast formation are therefore considered for the treatment of wear particle-induced osteolysis. In the present study, we demonstrated for the first time that desferrioxamine (DFO), a powerful iron chelator, could significantly alleviate osteolysis in an ultrahigh-molecular-weight polyethylene (UHMWPE) particle-induced mouse calvaria osteolysis model. Furthermore, DFO attenuated calvaria osteolysis by restraining the enhanced inflammatory osteoclastogenesis induced by UHMWPE particles. Consistent with the in vivo results, we found DFO was also able to inhibit osteoclastogenesis in a dose-dependent manner in vitro, as evidenced by the reduction of osteoclast formation and the suppression of osteoclast-specific gene expression. In addition, DFO dampened osteoclast differentiation and formation at the early stage but not at the late stage. Mechanistically, the reduction of osteoclastogenesis by DFO was due to increased heme oxygenase-1 (HO-1) expression, as the decrease in osteoclast formation induced by DFO was significantly restored after HO-1 was silenced by siRNA, while treatment with the HO-1 agonist COPP enhanced the DFO-induced inhibition of osteoclastogenesis. In addition, blocking of the p38 mitogen-activated protein kinase (p38MAPK) signaling pathway promoted DFO-induced HO-1 expression, implicating the p38 signaling pathway in DFO-mediated HO-1 expression. Taken together, our results suggest that DFO inhibits UHMWPE particle-induced osteolysis by restraining inflammatory osteoclastogenesis through upregulation of HO-1 via the p38MAPK pathway. Thus, DFO might be used as an innovative and safe therapeutic alternative for treating wear particle-induced aseptic loosening.
RANKL binds to its receptor RANK, resulting in a cascade of intracellular events, such as the activation of the nuclear factor-κB (NF-κB) signaling pathway, the mitogen-activated protein kinase (MAPK) signaling pathways and the nuclear factor of activated T cells 1 (NFATc1) signaling pathway, which are essential for osteoclast formation. [15][16][17] Furthermore, wear particles also result in the production of reactive oxygen species (ROS), which induces oxidative stress and has a major role in regulating osteoclast function and bone resorption. 18 Heme oxygenase-1 (HO-1), the rate-limiting enzyme in heme catabolism, besides working as a negative regulator of inflammation and oxidative stress, has also been demonstrated to be an osteoclastogenesis suppressor. 19 Furthermore, HO-1 dampens the early differentiation of osteoclast precursors into osteoclasts, but does not act on mature osteoclasts. 19,20 In addition to cytokines and growth factors, it is reported that iron homeostasis may contribute to fine-tuning of RANKL-induced osteoclast development. 21,22 In general, inhibition of osteoclast formation and/or function by modulating microenvironmental cytokines, growth factors, HO-1 and/or iron homeostasis may be critical for preventing wear particle-induced osteolysis and pathological bone loss.
Desferrioxamine (DFO) is a trihydroxamate, natural siderophore capable of chelating iron, aluminum and other trivalent metallic ions to form stable chemical complexes. 23 DFO has been widely used as a therapeutic agent for treating iron overload diseases. 24 Growing evidence suggests that DFO can regulate osteoblast proliferation and differentiation and inhibit osteoclast formation. [25][26][27] Therefore, DFO may be used as a therapeutic agent for the treatment of metabolic bone disease, such as osteoporosis. However, it remains unclear whether DFO can prevent UHMWPE particle-induced osteolytic diseases in vivo.
In the present study, we demonstrated for the first time that DFO could significantly alleviate particle-induced osteolysis in an UHMWPE particle-induced mouse calvaria osteolysis model. Furthermore, UHMWPE wear particle-induced osteoclastogenesis on the eroded bone surface was significantly attenuated by DFO treatment, which suggests that DFO prevented UHMWPE particle-induced osteolysis by inhibiting osteoclast function and formation. Subsequently, we performed a series of biochemical and morphological studies to explore the effect of DFO on osteoclastogenesis. We found that DFO was able to inhibit osteoclastogenesis in a dose-dependent manner. Mechanistically, the reduction of osteoclastogenesis by DFO was due to increased HO-1 expression. In addition, blocking of the p38 signaling pathway promoted DFO-induced HO-1 expression, implicating the p38 signaling pathway in DFO-mediated HO-1 expression. Taken together, our results suggest that DFO could potentially serve as an alternative therapeutic option for UHMWPE particle-induced osteolysis.
Results
DFO alleviated UHMWPE particle-induced osteolysis in vivo. A murine calvaria osteolysis model was used to observe the effect of DFO on UHMWPE particle (Figure 1a)-induced osteolysis. Micro-CT analysis showed that extensive bone resorption was present in the UHMWPE particles group (Vehicle), which was significantly attenuated by DFO treatment in a dose-dependent manner (Figure 1b). Furthermore, BMD, BV/TV and the total volume of pore space in the region of interest (ROI) were also measured. The results showed that osteolysis was significantly increased in the vehicle group compared with the sham control, while daily DFO injection at 10 mg/kg (low) or 30 mg/kg (high) significantly prevented UHMWPE particle-induced osteolysis (Figures 1c-e).
Subsequently, histological assessment and histomorphometric analysis were performed to evaluate the effect of DFO on UHMWPE particle-induced osteolysis. Hematoxylin and eosin (H&E) staining showed much more pronounced inflammatory responses and osteolysis in the vehicle group compared with the sham group, while the DFO-treated groups exhibited reduced inflammatory responses and osteolysis (Figure 2a). Consistent with the histological results, the calvaria culture results also confirmed that DFO significantly dampened particle-induced inflammatory responses, as the increased IL-1β (Figure 2b), IL-6 (Figure 2c) and TNF-α (Figure 2d) expression in the particles group was markedly decreased after DFO treatment. Furthermore, TRAP staining showed that the number of osteoclasts lined along the eroded bone surface was significantly increased in the vehicle group compared with the sham group, but was obviously reduced in both the low (10 mg/kg) and high (30 mg/kg) concentration DFO-treated groups (Figures 2e and f). Taken together, these results suggested that DFO treatment could markedly protect against UHMWPE particle-induced osteolysis by dampening inflammatory osteoclastogenesis in vivo.
DFO inhibited osteoclastogenesis in vitro.
Having observed that DFO attenuated UHMWPE particle-induced osteolysis by suppression of osteoclastogenesis in vivo, we next examined the effect of DFO on osteoclast formation in vitro. Bone marrow-derived macrophages (BMMs) were induced with 30 ng/ml M-CSF and 50 ng/ml RANKL in the presence of different concentrations of DFO for 5 days. TRAP staining showed that the number of mature osteoclasts was significantly decreased by DFO in a dose-dependent manner. Approximately 50% fewer TRAP-positive osteoclasts were observed in cells treated with 12 μM DFO compared with the control group, and there were almost no mature osteoclasts after 50 μM DFO treatment (Figures 3a and b). Consistent with the TRAP staining results, DFO also inhibited the TRAP activity of osteoclasts in a dose-dependent manner (Figure 3c). To determine whether the DFO-inhibited osteoclastogenesis was due to cytotoxic effects of DFO, we performed a CCK-8 assay to examine the effect of DFO on cell viability. No significant cytotoxic effect was observed in BMMs treated with DFO, even at concentrations up to 50 μM (Figure 3d), suggesting that DFO could inhibit osteoclast formation without any cytotoxic effects. To further examine at which stage DFO inhibited osteoclastogenesis, 50 μM DFO was added to the culture medium on days 0-4 during osteoclastogenesis. The results showed that DFO significantly inhibited osteoclast formation when added at the early stage (days 0-3), whereas adding DFO to osteoclastic precursor cells at the late stage (day 4) did not affect osteoclast formation, indicating that DFO inhibits osteoclast differentiation at the early stage but not at the late stage (Figures 3e and f).
A set of genes has been found to be associated with osteoclast differentiation and formation, including TRAP, c-Fos, Cathepsin K, DC-STAMP, V-ATPase a3 and V-ATPase d2. 28 Therefore, to further examine the inhibitory effect of DFO on osteoclast formation, we examined the effects of DFO on the expression of these genes. Our results showed that the expression of these genes was markedly upregulated during RANKL-induced osteoclast formation, but was strongly suppressed by 50 μM DFO in a time-dependent manner (Figures 4a-f). Furthermore, DFO also inhibited the expression of these genes in a dose-dependent manner (Figures 4g-l).
Taken together, these results further strengthened our conclusion that DFO could decrease osteoclast formation.
DFO inhibited osteoclastic bone resorption and F-actin ring formation. Although DFO could impair osteoclast formation, it was unclear whether DFO could also inhibit osteoclast activity. Therefore, we performed a pit formation assay to estimate the effect of DFO on osteoclastic bone resorption. BMMs were cultured on bone slices and induced with M-CSF and RANKL in the presence of different concentrations of DFO for 10 days. We found significant pit formation in the control group, whereas the resorption area was markedly decreased in the DFO-treated groups. Furthermore, DFO inhibited osteoclastic bone resorption in a dose-dependent manner: the resorption area decreased by 60% after 12 μM DFO treatment, and almost no resorption was observed after 50 μM DFO treatment (Figures 5a and b). In addition, a well-polarized F-actin ring is required for efficient bone resorption. Therefore, we performed F-actin ring staining to further estimate the effect of DFO on osteoclastic bone resorption. Clear F-actin ring structures were observed in the untreated control group (Figures 5c and d). However, the F-actin ring structures were significantly disrupted when BMMs were incubated with 12, 25 or 50 μM DFO (Figures 5c and d). Taken together, these results demonstrated that DFO could inhibit osteoclastic bone resorption.

[Figure 2(e,f) caption: TRAP staining showed that the number of osteoclasts lined along the eroded bone surface was significantly increased in the UHMWPE particles group, which was obviously reduced in both the low and high concentration DFO-treated groups. Red arrows indicate TRAP-positive cells. Low and high represent 10 and 30 mg/kg DFO application, respectively. Scale bars, 300 μm. n = 6, **P<0.01.]

DFO mediated osteoclastogenesis by regulating HO-1 expression. Having observed the anti-osteoclastogenic function of DFO in vivo and in vitro, we next sought to explore the intrinsic mechanisms by which DFO mediates osteoclastogenesis. HO-1, the rate-limiting enzyme in heme catabolism, has been proved to be a negative regulator of osteoclastogenesis, so we hypothesized that DFO might regulate osteoclastogenesis by mediating HO-1 expression. To test this hypothesis, we first examined the effect of DFO on HO-1 expression. Osteoclastic precursor cells were treated with different concentrations of DFO in the presence of RANKL for 3 days; western blot and qRT-PCR analysis showed that DFO induced HO-1 protein and mRNA expression in a dose-dependent manner (Figures 6a and b). Furthermore, immunofluorescent analysis also demonstrated the stimulatory effect of DFO on HO-1 expression (Figure 6c). Taken together, these results demonstrated that DFO induced HO-1 expression during osteoclastogenesis.
Having observed that DFO increased HO-1 expression, we next examined whether HO-1 was essential for DFO-inhibited osteoclastogenesis. First, we performed a gain-of-function experiment, in which we incubated osteoclast precursors with the HO-1 inducer cobalt protoporphyrin (COPP). TRAP staining and TRAP activity assays showed that activation of HO-1 by 25 μM COPP significantly decreased osteoclast formation. Furthermore, the inhibitory effect of DFO on osteoclastogenesis was also enhanced by COPP (Figures 7a-c). In addition, qRT-PCR analysis showed that the expression of TRAP and c-Fos was significantly decreased by COPP, and was further inhibited by DFO together with COPP (Figures 7d and e). Second, we performed a loss-of-function experiment, in which we decreased the expression of HO-1 with si-HO-1. As evidenced by TRAP staining and TRAP activity assays, depletion of HO-1 alleviated the inhibitory effect of DFO on osteoclast formation, although the siRNA against HO-1 did not completely reverse the effects of DFO (Figures 7f-i). Furthermore, inhibition of HO-1 markedly attenuated the DFO-decreased TRAP and c-Fos expression (Figures 7j and k). Taken together, these results demonstrated that HO-1 was a mediator of DFO-inhibited osteoclastogenesis.

DFO increased HO-1 expression by dampening the p38MAPK pathway in osteoclasts. Having identified that HO-1 was required for DFO-inhibited osteoclastogenesis, we next sought to explore the molecular mechanisms involved in the induction of HO-1 by DFO. As it has been reported that the mitogen-activated protein kinases (MAPKs) and nuclear factor-κB (NF-κB) are the key downstream pathways of RANKL in osteoclastogenesis, 29-31 we first explored the effects of DFO on these RANKL-induced intracellular signaling pathways during osteoclast differentiation. BMMs were preincubated with 50 μM DFO, followed by stimulation with RANKL for the indicated time points. The results showed that phosphorylation of p65, p38, ERK and JNK was significantly activated by RANKL, whereas all of these were blocked by DFO in RANKL-stimulated osteoclasts (Figure 8a). To determine which signaling pathway was involved in DFO-induced HO-1 expression, we tested the effects of blocking these signaling pathways on DFO-induced HO-1 expression. The p38 inhibitor SB203580 significantly enhanced DFO-induced HO-1 expression, whereas the JNK inhibitor SP600125, the NF-κB inhibitor BAY 11-7082 and the mitogen-activated protein/extracellular signal-regulated kinase (MEK) inhibitor PD98059 did not promote DFO-induced HO-1 expression (Figures 8b and c). Furthermore, we found that SB203580 promoted DFO-induced HO-1 expression in a dose-dependent manner (Figures 8d and e). Taken together, these data suggest that inhibition of the p38MAPK signaling pathway was involved in the induction of HO-1 by DFO.
Discussion
Artificial joint replacement is widely used to treat severe joint degeneration. However, UHMWPE wear particle-induced osteolysis is a leading cause of early implant loosening in endoprosthetic surgery. Studies have shown that UHMWPE particle-induced osteolysis is due to enhanced osteoclast differentiation and activity. 32 Thus, therapeutic agents targeting osteoclast formation are considered for treating wear particle-induced osteolysis. In the present study, we demonstrated for the first time that DFO, a powerful iron chelator, could significantly alleviate osteolysis in a UHMWPE particle-induced mouse calvaria model by restraining inflammatory osteoclastogenesis. Furthermore, DFO was able to inhibit osteoclastogenesis in a dose-dependent manner. Mechanistically, DFO reduced osteoclast formation by increasing HO-1 expression via the p38MAPK signaling pathway. Taken together, we conclude that DFO may have great potential and value in treating wear particle-induced aseptic loosening.
With growing understanding of the pathogenesis of periprosthetic osteolysis, some effective preventative and nonsurgical interventions have been introduced. One large recent study indicates that early postoperative systemic administration of bisphosphonates can decrease the risk of aseptic loosening in total knee arthroplasty. 33,34 However, bisphosphonates have proven unsuccessful in inflammatory conditions. 35 Furthermore, it is reported that long-term administration of bisphosphonates can be associated with bone necrosis and atypical fractures in long bones. 36 Therefore, the current appraisal of bisphosphonates to prevent loosening still requires further study. Recently, TNF-α and IL-1 antagonists have variably demonstrated efficacy in alleviating aseptic loosening, but come with unwanted immunosuppression. 35 Denosumab (Amgen; Thousand Oaks, CA, USA), a monoclonal antibody against RANKL, has emerged as a potential therapeutic avenue for osteolysis, but clinical trials show that it impacts immunocompetence less than originally thought. 35 Thus, despite extensive research on drugs that target the inflammatory, osteoclastic and osteogenic responses to wear debris, further studies are still needed to identify more suitable treatments for wear particle-induced osteolysis.
DFO, an FDA-approved medication and a powerful iron chelator with 'hypoxia-mimetic' activity, has been widely used as a therapeutic agent for treating iron overload-related diseases. 37 Besides exerting an anti-osteolysis function, like bisphosphonates, IL-1 antagonists and denosumab, by inhibiting the process of osteoclastogenesis, 38 DFO has been shown to increase angiogenesis via the hypoxia-inducible factor (HIF) pathway. The HIF pathway activates angiogenesis as a regulator of the response to hypoxia, and its activation is also seen in skeletal repair. 39,40 In addition to promoting angiogenesis, DFO is also able to increase bone formation by enhancing osteoblast activity. 24,26 Therefore, DFO has emerged as a potential agent for bone regeneration and for treating osteoporosis. 41,42 In this study, a mouse calvaria osteolysis model was used to examine the effect of DFO on particle-induced aseptic loosening in vivo. Both micro-CT and histological assessments demonstrated that DFO significantly protected against UHMWPE particle-induced osteolysis. Meanwhile, DFO treatment could alleviate particle-induced bone destruction and osteolysis, which were confirmed to be associated with particle-promoted osteoclastogenesis. Our results demonstrated for the first time that DFO can be effectively used for the treatment of wear particle-induced osteolysis in vivo. Thus, DFO might be used as a therapeutic agent for treating wear particle-induced aseptic loosening.
In the present study, we confirmed that DFO markedly inhibited osteoclast formation at the early stage (days 0-3), whereas adding DFO to osteoclastic precursor cells at the late stage did not affect osteoclast formation. Indeed, Leger et al. 43 found that DFO did not decrease osteoclast numbers, which might be because DFO was added only for the last day of their human osteoclast assays. Furthermore, Philipp et al. 44 added DFO at the beginning of an osteoclastogenesis assay with cells of rodent origin, resulting in a significant suppression of osteoclast differentiation. Consistent with these findings, our studies further demonstrated that the inhibition of osteoclast formation by DFO was due to dampened differentiation of osteoclast progenitor cells. RANKL-induced osteoclast differentiation is associated with the upregulation of specific genes, including TRAP, c-Fos, Cathepsin K, DC-STAMP, V-ATPase a3 and V-ATPase d2. 28 Data from this study showed that the RANKL-induced expression of these specific genes was markedly attenuated by DFO in a time-dependent manner. Of note, c-Fos, a critical transcription factor for osteoclastogenesis, was markedly increased at the early stage (day 1), whereas the induction of c-Fos expression by RANKL declined from days 1 to 5, indicating that c-Fos might be an early marker gene for osteoclast formation. Indeed, c-Fos, as a major regulator of osteoclastogenesis, drives the expression of osteoclast-specific genes such as TRAP, Cathepsin K, DC-STAMP, V-ATPase a3 and V-ATPase d2. In the current study, the inhibition of the expression of these specific genes by DFO provided further evidence of DFO-inhibited osteoclast formation.
Previous studies suggest that overproduction or inadequate removal of ROS may be involved in the formation of fibrotic pseudocapsular tissues around revised total hip replacement components, 45 suggesting that ROS-induced oxidative stress has an important role in wear particle-induced osteolysis. HO-1 is an inducible enzyme involved in oxidative stress processes. In bone tissue, HO-1 mRNA is expressed in osteoblasts, osteocytes and osteoclasts. 46 Several studies have elucidated the role of HO-1 in osteoclastogenesis. Ke et al. 46 found that HO-1 deficiency synergized with RANKL signaling to increase the number and activity of osteoclasts. Induction of HO-1 could inhibit osteoclast differentiation via MAP kinase. 19 Furthermore, Eiko et al. 47 demonstrated that RANKL induced osteoclast differentiation by inhibiting HO-1 expression via activation of the p38 MAPK signaling pathway. In the present study, we confirmed for the first time that HO-1 was involved in DFO-inhibited osteoclast formation.
Many studies have confirmed that DFO can inhibit osteoclast differentiation and activity; 24 however, little is known about how DFO regulates osteoclastogenesis. In the process of osteoclast differentiation, RANKL binding to its receptor RANK leads to the activation of downstream signaling molecules, such as the MAPKs (ERK1/2, p38 and JNK1/2) and NF-κB. 48,49 Previous studies have shown that the formation of osteoclasts can be reduced by inhibition of JNK, ERK and p38, suggesting that these molecules are critical for RANKL-induced osteoclastogenesis. 50 Furthermore, RANKL stimulation triggers the induction of the NF-κB heterodimer p65 (RelA)/p50 (NF-κB1), which induces the expression of NFATc1, a transcription factor that regulates the terminal RANKL-induced differentiation of osteoclasts. 51 In our study, we found that DFO could downregulate ERK, JNK, p38 and p65 activation during osteoclast differentiation, as evidenced by little ERK, JNK, p38 and p65 phosphorylation after DFO treatment. Further studies found that inhibition of the p38 signaling pathway could promote DFO-induced HO-1 expression, indicating that p38 was involved in DFO-induced HO-1 expression. Our study delineates a previously unknown mechanism: DFO inhibits UHMWPE particle-induced osteolysis by restraining inflammatory osteoclastogenesis through upregulation of HO-1 via the p38MAPK pathway. However, as the results in Figure 7 show, HO-1 depletion by siRNA did not completely reverse the effects of DFO, suggesting that DFO may restrain inflammatory osteoclastogenesis through other pathways as well. Studies have revealed that the MAPKs (including p38MAPK, JNK and ERK) and NF-κB are critical for RANKL-induced osteoclastogenesis, 50 and our results demonstrate that DFO significantly dampens the RANKL-induced activation of the MAPKs and NF-κB. In addition, it has been reported that clioquinol, another iron chelator, impairs RANKL-driven AKT phosphorylation and NFATc1 activation in the process of osteoclastogenesis; 38 both AKT and NFATc1 are required for efficient osteoclastogenesis and osteoclast activation. [52][53][54] Besides p38MAPK, we therefore predict that DFO may also inhibit osteoclastogenesis by regulating RANKL-induced ERK, JNK, AKT or NFATc1 activation.
The mouse calvaria osteolysis model is widely used to explore the mechanisms of UHMWPE particle-induced osteolysis. However, some deficiencies exist in this model. First, mechanical loading may affect UHMWPE particle-induced osteolysis in patients after endoprosthetic surgery, which is not considered in the mouse calvaria model. Second, the size of the UHMWPE particles used to generate the mouse model was uniform, whereas UHMWPE particles from an artificial joint are not identical. Thus, future studies are needed to further explore the most suitable mouse model for UHMWPE particle-induced osteolysis. In conclusion, in the present study we demonstrated that UHMWPE particle-induced osteolysis could be alleviated by DFO via restraint of inflammatory osteoclast formation and activity. The inhibitory effects of DFO on osteoclastogenesis were achieved mainly through induction of HO-1 expression. Further study confirmed that DFO induced HO-1 expression via inhibition of the p38 signaling pathway, resulting in reduced osteoclast formation. Taken together, we conclude that DFO might be used as an innovative and safe therapeutic alternative for treating wear particle-induced aseptic loosening.
Methods
Preparation of UHMWPE particles: UHMWPE particles were provided by the manufacturer (Zimmer Inc., Warsaw, IN, USA). The characteristics of the particles' morphology have been published previously. 55 The mean diameter of these particles was 2.6 μm (range <0.7 to 21 μm). To avoid contamination with endotoxins, the particles were washed three times with 70% ethanol over 72 h to remove endotoxin, heat sterilized, and then dispersed in PBS at 2 × 10^8 particles per ml. Endotoxin levels of the particle suspension were determined by a Limulus assay according to the manufacturer's instructions.
UHMWPE-induced calvarial osteolysis model: A wear particle-induced mouse calvarial osteolysis model was generated as previously described. 1 Animal studies were performed in accordance with the principles and procedures approved by the Animal Care Committee of Shanghai Jiao Tong University. Briefly, 24 healthy male 8-week-old C57BL/6J mice were randomly divided into four groups: sham PBS control (Sham), UHMWPE particles with PBS (Vehicle), and UHMWPE particles with 10 mg/kg (low) or 30 mg/kg (high) DFO. The mice were anesthetized, and the cranial periosteum was separated from the calvarium by sharp dissection. Then, 100 μl of particle suspension was uniformly spread over the periosteum at the middle suture of the calvaria in the vehicle, low and high groups, but not in the sham group. Two days after implantation of UHMWPE particles, PBS or DFO was injected intraperitoneally every day for 14 days. The animals were housed 5 per cage and were maintained under a strict 12 h light:12 h darkness cycle at 22°C with standard mouse food pellets and free access to tap water. At the end of the experiment, the mice were sacrificed, and the calvaria were excised and fixed in 4% paraformaldehyde for micro-computed tomography (micro-CT) and histological analysis. No adverse events occurred during the generation of the mouse calvarial osteolysis model.
Bone resorption assay and F-actin ring formation assay: The bone resorption assay was conducted as previously described. 28 Briefly, BMM cells were plated onto bovine bone slices in 96-well plates at a density of 1 × 10^4 cells/well. The BMM cells were cultured in complete α-MEM medium supplemented with M-CSF (30 ng/ml), RANKL (50 ng/ml) and different concentrations of DFO. Cell culture media were replaced every 2 days until mature osteoclasts had formed. On day 10, the osteoclasts were removed from the bone slices by mechanical agitation and sonication. Resorption pits stained with toluidine blue were photographed under a high-quality microscope. Three view fields were randomly selected from each bone slice for further analysis. The percentage of resorbed bone surface area was calculated using ImageJ software. Experiments were repeated independently at least three times.
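For illustration, the area quantification described above amounts to a pixel-counting operation on a thresholded image. A minimal Python sketch (NumPy/scikit-image, with hypothetical file names; the authors used ImageJ, so this is an assumed equivalent, not their actual macro):

```python
import numpy as np
from skimage import io, filters

def percent_resorbed(image_path):
    """Estimate % resorbed bone surface from a toluidine blue pit image.
    Minimal sketch: Otsu threshold to segment stained pits, then a
    pixel-area ratio. Real analyses need per-image tuning and QC."""
    img = io.imread(image_path, as_gray=True)
    thresh = filters.threshold_otsu(img)
    pit_mask = img < thresh                 # stained pits assumed darker
    return 100.0 * pit_mask.sum() / pit_mask.size

# Average over three randomly selected fields per bone slice (as in the text)
fields = [percent_resorbed(f"slice1_field{i}.tif") for i in range(3)]
print(f"mean resorbed area: {np.mean(fields):.1f}%")
```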
To perform F-actin ring formation assay, osteoclasts treated with various concentrations of DFO were fixed with 4% paraformaldehyde for 15 min, permeabilized for 5 min with 0.1% Triton X-100, and incubated with rhodamineconjugated phalloidin (Invitrogen Life Technologies, Grand Island, NY, USA) for 30 min at room temperature and then washed extensively with PBS three times. The F-actin ring distribution was visualized using a fluorescence microscope (ZEISS, Jena, Germany), and the average number of F-actin ring was calculated.
Organ culture and cytokine detection: Murine calvaria culture was performed as previously reported. 56 The dissected calvarial tissue samples were weighed and cultured in serum-free medium (10 ml/g tissue weight; Dulbecco's Modified Eagle's Medium, Life Technologies, Gaithersburg, MD, USA) containing 1% penicillin/streptomycin for 72 h at 37°C with 5% CO2. The release of IL-1β, IL-6 and TNF-α from the dissected murine calvaria into the medium was measured with enzyme-linked immunosorbent assay (ELISA) kits specific for mouse IL-1β, IL-6 and TNF-α (Duoset, R&D Systems, Abingdon, UK).
Micro-CT imaging analysis:
The fixed calvarias were analyzed using a highresolution micro-CT scanner (Skyscan 1172; Skyscan; Aartselaar, Belgium). All calvarias were scanned according to the same parameters (pixel size, 9 μm; X-ray voltage, 50 kV; electric current, 500 μA; rotation step, 0.7°). After reconstruction, a spherical volume of interest (VOI) of 3 mm in diameter around the midline suture was selected for further qualitative and quantitative analysis. Bone mineral density (BMD), bone volume against tissue volume (BV/TV) and total volume of pore space of each sample were measured.
Histological analysis: After micro-CT scanning, the samples were decalcified in 10% EDTA for 3 weeks and then dehydrated, embedded in paraffin. Histological sections (5 μm thick) were prepared for H&E and TRAP staining. The specimens were then examined and photographed under a high-quality microscope. The numbers of TRAP-positive multinucleated osteoclasts were counted in each sample.
Cell viability assay: The cytotoxic effects of DFO on BMM viability were determined using a CCK-8 assay according to the manufacturer's instructions. The BMM cells were plated in 96-well plates at a density of 5 × 10^3 cells/well, cultured in complete α-MEM medium supplemented with 30 ng/ml M-CSF, and treated with different concentrations of DFO (0, 6.25, 12.5, 25, 50, 100, 200 and 400 μM) for 48 h. Next, the medium in each well was replaced with 10 μl CCK-8 in 100 μl α-MEM medium, and the plates were incubated at 37°C for an additional 1.5 h. The optical density (OD) was then measured at a wavelength of 450 nm with an ELX680 absorbance microplate reader (Bio-Tek, Winooski, USA).
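Viability is then typically expressed relative to untreated controls after background subtraction. A minimal sketch (hypothetical OD values; the paper does not spell out its normalization, so this common scheme is an assumption):

```python
import numpy as np

def percent_viability(od_treated, od_control, od_blank):
    """CCK-8 viability as % of untreated control, blank-subtracted.
    The normalization scheme is an assumption of common practice."""
    treated = np.asarray(od_treated, dtype=float)
    return 100.0 * (treated - od_blank) / (np.mean(od_control) - od_blank)

od_blank = 0.08                      # wells with medium + CCK-8 only (hypothetical)
od_control = [1.21, 1.18, 1.25]      # 0 uM DFO triplicate OD450 (hypothetical)
od_50uM = [1.19, 1.22, 1.15]         # 50 uM DFO triplicate OD450 (hypothetical)
print(percent_viability(od_50uM, od_control, od_blank))
```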
Bone marrow-derived macrophage isolation and osteoclast culture: Primary BMMs were isolated from the long bones of 8-week-old C57BL/6J mice. Cells were isolated from femur and tibia bone marrow and cultured in a 100 mm dish with complete α-MEM medium in the presence of 10 ng/ml M-CSF for 24 h. Non-adherent cells were harvested and cultured with fresh medium containing 50 ng/ml M-CSF. Three days later, the adherent cells were harvested as osteoclast precursors (pre-osteoclasts). These cells were then seeded and further cultured in complete α-MEM medium containing M-CSF (30 ng/ml) and RANKL (50 ng/ml) for 3-5 days with various concentrations of DFO (0, 12, 25, 50 μM). Cell culture media were replaced every two days until mature osteoclasts had formed. Next, cells were washed twice with PBS, fixed with 4% paraformaldehyde for 15 min and then stained for TRAP activity. TRAP-positive cells with three or more nuclei were counted under a microscope.
Immunofluorescence staining: BMM cells were seeded onto the sterile cover slips at a density of 5 × 10 4 cells/well in 24-well plates, and cultured with complete α-MEM medium supplemented with M-CSF (30 ng/ml), RANKL (50 ng/ml), and 50 μM DFO. After incubation, cells were fixed in 4% paraformaldehyde for 10 min, treated with 0.1% Triton X-100 for 15 min and then incubated in 3% bovine serum albumin (BSA)/ PBS for 30 min at room temperature. Next, cells were incubated with mouse anti-HO-1 antibody (1:100 dilution) at 4°C overnight. Cell nuclei were counterstained with Hoechst 33258 at room temperature for 15 min in the dark. Images were acquired using a fluorescence microscope (ZEISS Axio Imager A2, Carl Zeiss microscopy GmbH).
RNA interference:
The small interfering RNA (siRNA) oligonucleotide for HO-1 was designed and synthesized by GenePharma (Shanghai, China). The targeting sequences of the murine HO-1 siRNA (si-HO-1) were as follows: forward 5′-CCACACAGCACUAUGUAAATT-3′ and reverse 5′-UUUACAUAGUGCUGUGUGGTT-3′. BMM cells cultured with or without DFO in the presence of RANKL in antibiotic-free media were transfected with 100 nM si-HO-1 using Lipofectamine 3000 (Invitrogen) according to the manufacturer's instructions. The sequences of the negative control (NC) were as follows: forward 5′-UUCUCCGAACGUGUCACGUTT-3′ and reverse 5′-ACGUGACACGUUCGGAGAATT-3′. After incubation for 48 h, the cells were harvested to extract total RNA for RT-PCR. For TRAP staining, we incubated the cells for another 5 days.
RNA extraction and quantitative real-time PCR (qRT-PCR): To measure specific gene expression during osteoclast formation, we performed a quantitative PCR assay. Briefly, cells were seeded in six-well plates at a density of 1 × 10^5 cells per well and cultured in complete α-MEM medium supplemented with 30 ng/ml M-CSF and 50 ng/ml RANKL. After treatment with various concentrations of DFO, COPP or siRNA, total RNA was isolated from BMM cells using Trizol reagent (Invitrogen) according to the manufacturer's instructions. Next, cDNA was synthesized from 1 μg of total RNA using reverse transcriptase (TaKaRa, Shiga, Japan). qRT-PCR was performed to amplify the cDNA using the SYBR Premix Ex Taq kit (TaKaRa) and an ABI 7500 Sequence Detection System (Applied Biosystems, Foster City, CA, USA). The following cycling conditions were used: 40 cycles of denaturation at 95°C for 5 s and amplification at 60°C for 24 s. β-actin was used as the housekeeping gene, and all reactions were run in triplicate. The mouse primer sequences for TRAP (Accession Number NM_011611), c-Fos (NM_010234), Cathepsin K (NM_007802), DC-STAMP (NM_001289513), V-ATPase a3 (NM_016921), V-ATPase d2 (NM_175406), HO-1 (NM_010442) and β-actin (NM_007393) are listed in Supplementary Table 1.
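The paper does not state its quantification formula; assuming the common 2^(−ΔΔCt) method with β-actin as the reference gene (an assumption, since only the housekeeping gene is named), relative expression can be computed as:

```python
import numpy as np

def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Relative mRNA level by the 2^(-ddCt) method (an assumed, common
    choice; the paper only names beta-actin as the housekeeping gene)."""
    d_ct = np.mean(ct_target) - np.mean(ct_actin)                 # treated
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_actin_ctrl)  # control
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Hypothetical triplicate Ct values for TRAP with and without 50 uM DFO
print(relative_expression([26.1, 26.3, 26.0], [17.2, 17.1, 17.3],
                          [23.5, 23.6, 23.4], [17.0, 17.2, 17.1]))
```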
Western blot analysis: BMM cells were seeded in six-well plates at a density of 1 × 10^5 cells per well. After various treatments in the presence of M-CSF and RANKL, cells were washed with PBS and lysed in ice-cold lysis buffer (Cell Signaling Technology) supplemented with protease inhibitor cocktail for 30 min. Next, the lysates were centrifuged at 12,000 × g for 15 min, and the protein-containing supernatants were harvested. Protein concentrations were determined with a BCA protein assay kit (Pierce Biotechnology, Rockford, IL, USA). Equal amounts of protein lysate were resolved by SDS-PAGE on 10% gels and transferred to PVDF membranes (Millipore, Bedford, MA, USA). Afterwards, the membranes were blocked with 5% skimmed milk solution for 1 h and then incubated with primary antibodies diluted in 1% BSA in TBS-Tween (TBST) overnight at 4°C. The membranes were then washed three times with TBST and incubated with the appropriate secondary antibodies. Antibody reactivity was visualized using an enhanced chemiluminescence detection system as recommended by the manufacturer. Signal intensities were quantified using ImageJ software (NIH, Bethesda, MD, USA).
Statistical analysis: Data were collected from three or more independent experiments and are expressed as mean ± S.D. A two-sided Student's t-test was used to analyze differences between two groups, and one-way analysis of variance was performed for comparisons among multiple groups. P<0.05 was considered statistically significant.
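As an illustration of the stated analysis (hypothetical measurements; SciPy is assumed here and is not cited by the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical BV/TV values (%) for the four calvaria groups, n = 6 each
sham = [22.1, 21.5, 23.0, 22.4, 21.9, 22.7]
vehicle = [14.2, 13.8, 15.1, 14.6, 13.9, 14.4]
dfo_low = [17.9, 18.4, 17.2, 18.0, 17.6, 18.2]
dfo_high = [20.3, 19.8, 20.9, 20.1, 20.5, 19.9]

t, p_two = stats.ttest_ind(sham, vehicle)                      # two-sided t-test
f, p_anova = stats.f_oneway(sham, vehicle, dfo_low, dfo_high)  # one-way ANOVA
print(f"t-test p = {p_two:.4g}; ANOVA p = {p_anova:.4g}")      # significant if p < 0.05
```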
"year": 2016,
"sha1": "ddcc505b0a5286adb1776ccb08c50af9727e17c6",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/cddis2016339.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ddcc505b0a5286adb1776ccb08c50af9727e17c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Neurophysiological Correlates of Concussion: Deep Learning for Clinical Assessment
Concussion has been shown to leave the afflicted with significant cognitive and neurobehavioural deficits. The persistence of these deficits and their link to neurophysiological indices of cognition, as measured by event-related potentials (ERPs) using electroencephalography (EEG), remain restricted to population-level analyses that limit their utility in the clinical setting. In the present paper, a convolutional neural network is extended to capitalize on characteristics specific to EEG/ERP data in order to assess for post-concussive effects. An aggregated measure of single-trial performance was able to classify accurately (85%) between 26 acutely to post-acutely concussed participants and 28 healthy controls in a stratified 10-fold cross-validation design. Additionally, the model was evaluated in a longitudinal subsample of the concussed group, indicating a dissociation between the progression of EEG/ERP measures and that of self-reported inventories. Concordant with a number of previous studies, symptomatology was found to be uncorrelated with EEG/ERP results as assessed with the proposed models. Our results form a first step towards the clinical integration of neurophysiological results in concussion management and motivate a multi-site validation study for a concussion assessment tool in acute and post-acute cases.
Traumatic brain injury (TBI) impacts upwards of 2.8 million individuals annually in the United States alone 1 . Concussions (henceforth used synonymously with mild TBI; mTBI) form a considerable subset of that figure and are defined as closed-head injuries that leave the affected with functional and cognitive deficits 2,3 . The current understanding of the underlying mechanisms in concussion remains lacking, with echoing concerns both in the identification and in the management of the condition 4 . An expansive body of work has targeted the multiple facets of concussion, offering different means of elucidating the cognitive deficits caused by concussion and its co-morbid sequelae 5 . Electrophysiology is one tool with promising applications in concussion. Specifically, event-related potentials (ERPs) as recorded by electroencephalography (EEG) have shown persistent changes in concussed individuals in the post-acute stage and decades after insult [6][7][8][9][10] .
ERPs are non-invasively-recorded indices of cognitive function 11 . The P300, a positive-deflecting response peaking approximately 300 ms after stimulus onset, is a commonly studied component in neurophysiology that is associated with attentional resource allocation, orientation, and memory 12 . The P300 was found to be impacted by concussion immediately 13 after occurrence and decades post injury 6,[8][9][10] . P300 effects were observable when patients were symptomatic as well as after symptom resolution 14 and were affected cumulatively following a series of concussive blows to the head in comparison to a single hit 15 . The N2b is an ERP often linked to executive function manifesting as a fronto-central negative deflection 200 ms after stimulus onset 16 . Similar to the P300, the N2b was affected after sustaining hits to the head 7,10,15,17,18 . Research has demonstrated the versatility and sensitivity of both the P300 and N2b to concussion; however, a transition from controlled, group-level findings to individual assessment is required before clinical adoption is made feasible.
Machine learning (ML) has gained significant traction in the clinical field, offering a cost-efficient way of replicating expert judgements and decisions in a setting overloaded with data 19,20 . ML introduces a dynamic process that is able to ingest high-dimensional clinical data and learn complex patterns that might also be difficult to detect or visualize for a human expert 19,20 . Despite some scrutiny due to black-box solutions 21 and susceptibility to bias in misapplication 22 , machine learning remains a great tool for exploiting resources to improve clinical standards 19,[21][22][23] . EEG data are characterized by their rich high-dimensionality that requires certain degrees of aggregation to simplify for a human observer -quite possibly at the cost of losing critical information. That complexity has made ML a valuable method in EEG analysis [24][25][26][27][28][29][30][31][32] .
Although this study details the first EEG/ERP application of deep learning (DL) in mTBI, DL has been explored in various EEG applications 33 . Broadly, DL expands on traditional ML techniques by providing a multi-layer architecture that enables fitting complex and custom models that promote hierarchical feature extraction. In EEG, model complexity and layer stacking have been proposed as valuable tools in creating end-to-end solutions that integrate feature extraction and classification, as opposed to the more manual feature engineering of traditional ML 24 . Most DL applications on EEG to date have been on resting-state data, using sliding time windows as input to provide datasets of sufficient size for training such complex models 27,28,33 . Recently, there have also been studies of DL to classify targets (P300) vs. non-targets in a brain-computer interfacing setup 27,33 .
In the present study, we developed the TRauma ODdball Net (TRODNet), a deep learning network that uses convolutional layers to extract information from single-trial EEG/ERP data to identify signs of concussion. The network learns a set of topographical maps that characterize the different ERPs elicited in a multi-deviant oddball paradigm designed to evoke both the P300 and the N2b responses. The temporal activations of these maps form a set of automatically extracted features used to predict a single trial's label. TRODNet is trained and assessed using 10-fold class-stratified cross-validation on a dataset of 54 participants (28 controls). All concussed participants were clinically diagnosed and were symptomatic at the time of testing. Supplementary self-reports were collected to investigate concussive and depressive symptomatology as captured by the Post-Concussion Symptom Scale (PCSS) and the Children's Depression Inventory 2 (CDI), respectively. Nineteen of the 26 concussed subjects returned for a follow-up test (see Fig. 1A), nine of whom reported full symptom recovery (PCSS of 0) while the others developed post-concussion syndrome (PCS). Analyses on the longitudinal samples were run in parallel to assess whether symptom resolution was identifiable by the trained model (see Fig. 1C). Model interpretation is a critical factor for integrating machine learning into the clinical setting 21 . Thus, trained models were interpreted using the SHapley Additive exPlanations (SHAP) method, a recent introduction to the field with demonstrated success in clinical applications 23,24,34 .
The study was designed to investigate two primary hypotheses. First, the study examined whether single-trial classification can be aggregated for each subject to provide a viable tool for detecting concussion-related neurophysiological effects using minimal feature engineering. Second, the model's judgements on longitudinal datapoints were examined. It was postulated that performance would deteriorate after symptom resolution due to a normalization of the recovered subjects' neurophysiological responses, as opposed to consistent performance in those who retained their symptoms. Model interpretability was prioritized to ensure a transparent representation of learned information and to serve as a confirmatory step for the model's results.
Results
Concussion identification. As the model was trained (and tested) on single trials, the TRODNet outputs were aggregated to create a subject-level prediction (see Methods for more details): if more than 50% of a subject's trials were classified as concussed, the subject was predicted as belonging to the concussed group. The TRODNet model achieved a single-subject cross-validation accuracy of 85%. Specifically, four control subjects were misidentified as concussed while four concussed subjects were misclassified as controls, putting the model's sensitivity to concussive effects at 84.6% and its specificity at 85.7%. Single-trial cross-validation accuracy was 74.4%; however, this figure should be assessed with care, as discussed below. A detailed list of the model's single-trial accuracies; PCSS and CDI scores; demographics; and number of days since injury for each subject in the concussed group, including the longitudinal results, is reported in Table 1. Longitudinal factors. Assessing the model's single-trial accuracy for the concussed subgroup that participated in a second testing session (Fig. 2) showed a clear main effect of Testing Date that was not influenced by Recovery. Additionally, subjects who did not report symptom recovery had lower single-trial accuracies overall.
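As a minimal sketch of the majority-vote aggregation described above (function and variable names here are illustrative and not taken from the TRODNet codebase):

import numpy as np

def subject_prediction(trial_probs, threshold=0.5):
    # Aggregate per-trial P(concussed) into one subject-level label:
    # the subject is called "concussed" (1) if more than `threshold`
    # of trials receive a concussed hard label, else "control" (0).
    trial_labels = (trial_probs > 0.5).astype(int)
    return int(trial_labels.mean() > threshold)

# Illustrative use: 6 of 10 trials look concussed -> subject labelled 1.
probs = np.array([0.9, 0.2, 0.7, 0.8, 0.4, 0.6, 0.3, 0.9, 0.55, 0.1])
print(subject_prediction(probs))  # -> 1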
Injury acuteness and correlation analyses. The effect of days since injury on the observed results was inconclusive for the first day of assessment (see Fig. 3 and Table 1). For the second date, self-reported symptoms appeared to increase with days since injury in the no symptom resolution (NSR) group, an effect equally observable in the PCSS and CDI scores. Although the two measures are inherently confounded, this result suggests a subjective component, with perceived effects worsening the longer an individual's symptoms persist. Conversely, no clear effect of days since injury was noted on the EEG/ERP results when accounting for symptom resolution. Overall, the symptom resolution (SR) subgroup reported a lower PCSS score at the date of the first test compared with the NSR subgroup, concordant with reports of symptom severity being a consistent measure of clinical recovery 2 .
Insights from model explanations. Upon interpreting the model with SHAP, TRODNet highlighted areas of interest overlapping with previously demonstrated effects in the literature 10,35 . The mean absolute SHAP values, indicative of feature importance, were reshaped for display on a 64-channel EEG plot for each condition (see Fig. 4). The two deviants showed the most prominent features, with the most important ones forming a bimodal distribution in the posterior regions that morphed into a unimodal shape in the frontal areas. The first and second peaks correspond in time and topography to the P300 and N2b, respectively 12,16 . Features tended to be uniformly important bilaterally, with slightly higher importance on the right side. Responses to the standard condition showed smaller and more dispersed distributions of feature importance, an unexpected finding considering an earlier study on chronic effects of concussion that showed early discernible effects in responses to the standard tones 24 .
Discussion
Our results demonstrated the efficacy of an acute/post-acute automated system for concussion identification in individual subjects. In contrast to earlier work in concussion, the utilization of deep learning and convolutional networks enabled an end-to-end solution with minimal feature engineering 24,26,36,37 . Additionally, the hypothesis that single trials offer a more granular and effective method of assessing EEG/ERP data was supported.
Results relating symptomatology and neurophysiological effects were negative. Despite the misalignment between the present study's hypothesis and the data, symptomatology has been previously shown to have little correlation to EEG/ERP effects 6,10,35 , especially as neuropsychological measures completely return to baseline in most cases 38 . This disagreement extends to other assessment modalities such as quantitative EEG 36,39 . It is noteworthy that the model's performance drop may be attributable to the time elapsed since injury, a finding that agrees with a regression study conducted in parallel to the present one (under review). These results highlight the need to examine the multiple stages of concussion progression and their effects with care, as some may be observable strictly at a particular stage of injury and/or recovery. Moreover, in the longitudinal subset, the model predicted trials of subjects who exhibited symptom resolution as concussed more often than those of subjects with persisting symptoms. Interestingly, that difference was observed irrespective of Testing Date (1st vs. 2nd; Fig. 2). These results introduce the possibility that a subject's recovery trajectory may be inferred from their EEG/ERP results during the symptomatic stage; however, no strong evidence could be drawn given the constraints of the present dataset. Of note, performance in the longitudinal sample is difficult to interpret given that at no point was the model trained on a longitudinal sample from our data. We are unable to draw conclusions on whether the results are due to PCS-related neurophysiology or a broader neurophysiological persistence of the injury that remains beyond symptom resolution. In practical terms, we posit that a sufficiently large PCS group is required, in addition to a symptomatically resolved group, to train a model to effectively differentiate the two against a control group. Ideally, given sufficient data, a model should also have access to the time since injury to properly account for a dynamically changing manifold of injury-affected responses.
The present study is the first report of ML-based EEG/ERP analysis in acute/post-acute concussion assessment. We reported a higher accuracy than previous studies classifying mTBI using resting-state (RS) EEG 26,37 , and marginally higher than a previous study on injury detection decades after injury 24 . A quantitative comparison with clinical tools typically used in mTBI assessment is not straightforward, as some of the best-reported tools decline in utility as soon as 5 days after injury 2,4 , while our first day of data collection was on average 20.2 (13.6) days removed from injury. Clinical tools such as self-reported symptoms, postural control evaluation, and pen-and-paper assessment scored sensitivities of 68.0%, 61.9%, and 43.5%, respectively, when administered within 24 hours of injury 40 . Combining all these tools was reported to exceed 90% sensitivity, although it is critical to bear in mind that with these increments in sensitivity, the specificity of these methods deteriorates and, by definition, reduces accuracy. Overall, we argue that the implementation of single-subject EEG/ERP evaluation for acute/post-acute concussion is feasible, given group-level findings in the literature 17,35 as extended to single subjects by the methodology presented here. Clinical applicability beyond the acute stage, however, requires further investigations that would augment the training data as discussed above.
The interpretability layer on our neural network model confirmed that our results originate from neurophysiological signals commonly affected by concussion. This provides strong evidence that the model's predictive power is linked to the ERPs that the experimental paradigm was designed to elicit. Primarily, in the deviant conditions, TRODNet's most important features, as extracted by SHAP, corresponded to the 100-500 ms window, encompassing both the N2b and the P300 (see Fig. 4). Topographical examination of feature importance showed the effects to be predominantly central, with an earlier effect that is marginally lateralized to the right. Examination of the standard condition showed a small parieto-occipital effect in the 100-300 ms range, likely related to the N1-P2 complex. While this finding agrees with previous work on chronic neurophysiological effects of concussion observable in responses to the standard tones in an oddball paradigm, the features show low and dispersed importance compared to what was observed in the earlier study 24 . This is compatible with the hypothesis that alterations in earlier responses (the mismatch negativity 10 or the N1/P2 complex 24 ) may correspond to irreversible effects of concussion that are prominent strictly in chronic cases. Further, tracing the model's results provides additional empirical, data-driven support for mTBI's impact on facets of cognitive function linked to the P300 and N2b, such as attention and executive function 10,17,41 .
The study exhibits two primary limitations. First, the difference in age between the two groups could be argued to contribute to the model's ability to discern between the two experimental groups. Although there have been several reports of age-related differences in ERPs and resting-state EEG, the evidence supports little to no difference across the range of our two groups (mean ages 15.04 and 19.3 years) [42][43][44] . Thus, we argue that an effect pertaining to the present age range is minimal, if not unlikely. Second, as correlations between model output and symptomatology were conducted post hoc, further work is required to confirm the relationships between time elapsed since injury and ERP effects.
In sum, a strong case has been presented for the clinical utility of ERPs in the individual assessment of acute/post-acute concussion patients. The current findings improve upon those from resting-state and quantitative EEG 36,37 , establishing a modality able to capture the effects of concussion immediately after insult and years post-injury 24 . This research was not directed at the mechanisms of progression and symptom manifestation, which remain unclear. However, a major step in that direction has been achieved in the translation of a complex, multi-trial EEG signal into an accurate identification of concussion incidence on a single-subject basis. The proposed model, TRODNet, captured distinguishing features without the need for feature engineering, enabling further application to different population ages and pathologies in prospective work.
Data collection and EEG recordings.
Participants. Data were collected from 26 (7 male) adolescents (mean age = 15.04 years) with a recently sustained and clinically diagnosed concussion (mean days since insult = 20.15). A comparison group of 28 (5 male) participants (mean age = 19.3 years) acted as healthy controls, reporting no previous head injuries. All participants reported no neurological or auditory problems. The study was reviewed and approved by the Hamilton Integrated Research Ethics Board, Hamilton, Ontario, Canada. Prior to study participation, all participants provided informed consent in accordance with the ethical standards of the Declaration of Helsinki.
EEG stimuli and experimental conditions. ERPs were collected to a multi-deviant auditory oddball paradigm 10,45 . A 600-tone sequence was presented across two blocks of 300 each. Three deviant tones were presented pseudo-randomly in a continuous stream of standard tones. The standard tone was presented 492 times (82%) at 1000 Hz, 80 dB sound pressure level (SPL), and a duration of 50 ms. Each deviant was presented 36 times (6%) and differed from the standard tone in only one sound characteristic. The frequency deviant was 1200 Hz, the duration deviant was 100 ms, and the intensity deviant was 90 dB SPL. Participants were tasked to respond using one button to the standard and another button to all deviants. Due to technical issues, data from the intensity deviant were discarded during analysis. EEG recording and preprocessing. Continuous EEG was recorded from 64 Ag/AgCl active electrodes (Biosemi ActiveTwo system) placed according to the extended 10/20 system using an elastic cap. Data were passed through an online bandpass filter of 0.01-100 Hz and referenced to the driven right leg. Data were digitized and saved at 512 Hz. Five external electrodes were recorded with the same settings. Three were placed on the mastoid processes and on the tip of the nose. The last two were placed above and over the outer canthus of the left eye to record eye movements. Stimuli markers were recorded and saved synchronously with the EEG data.
Data were processed offline using a 60 Hz notch and a 0.1-30 Hz (24 dB/oct) bandpass filter before re-referencing to the averaged mastoids. Artifacts were rejected manually using visual inspection, followed by independent component analysis (ICA) decomposition. The two components found to correlate with horizontal eye movements and blinks were removed before recomputing sensor data. Trials with correct behavioural responses were segmented into 1200 ms intervals starting 200 ms before stimulus onset. Finally, segments were baseline corrected (−200 to 0 ms) and grouped into their respective experimental conditions before exporting the single trials. All EEG preprocessing was conducted using Brain Vision Analyzer (v2.01; Brain Products GmbH). A separate set was formed from the observations collected from concussed subjects on their second day of testing. We denote the main dataset tensor as X_main ∈ ℝ^(T×N×S), with T trials, N = 64 channels, and S time samples per trial. All EEG data manipulation was conducted using the Python MNE package 46 .
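Although preprocessing in the study itself was performed in Brain Vision Analyzer, with MNE used for import and data manipulation, a comparable pipeline can be approximated in MNE as a hedged sketch; the file path, event codes, mastoid channel names and ICA component indices below are assumptions for illustration, not the study's settings:

import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)  # Biosemi recording (path assumed)
raw.notch_filter(60.0)                                    # 60 Hz line-noise notch
raw.filter(0.1, 30.0)                                     # 0.1-30 Hz bandpass
raw.set_eeg_reference(ref_channels=["M1", "M2"])          # averaged mastoids (channel names assumed)

ica = ICA(n_components=20, random_state=97)
ica.fit(raw)
ica.exclude = [0, 1]   # components correlating with blinks/eye movements (indices assumed)
ica.apply(raw)

events = mne.find_events(raw)                             # assumes a stimulus/trigger channel
epochs = mne.Epochs(raw, events,
                    event_id={"standard": 1, "freq_dev": 2, "dur_dev": 3},  # codes assumed
                    tmin=-0.2, tmax=1.0, baseline=(-0.2, 0.0), preload=True)
X = epochs.get_data()  # trials x channels x samples, the single-trial input to the network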
Statistical analyses.
Training and validation. Stratified 10-fold cross-validation was applied to estimate the generalization accuracy of the trained models (see Fig. 1C). X was split into X_train and X_test before standardizing both sets based on X_train, removing the mean and scaling to unit variance for each feature. Observations from one subject were contained exclusively in either X_train or X_test to ensure no performance inflation due to subject-specific idiosyncrasies. The learner was batch-trained on X_train for 500 epochs, where each epoch passed a batch of B = 160 randomly picked observations from X_train. The resultant model predicted the labels of each observation in X_test to produce the trial accuracy, accuracy_t. A thresholded version of accuracy_t evaluated the subject accuracy, accuracy_s, over all trials from a single subject: if more than 50% of trials were correctly labelled, accuracy_s for subject i was tallied as correct. In instances where X_test contained one or more subjects who had undergone a second day of testing, the subjects' second set of trials was evaluated in parallel to assess their follow-up test's accuracy in the same manner as described above. This procedure ensured an identical training set for both testing dates and eliminated the possibility of within-subject bias. No training was conducted on data collected on the second day of testing.
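A hedged scikit-learn sketch of this validation scheme follows; GroupKFold enforces the subject-exclusive splits, although, unlike the study, it does not additionally stratify folds by class, and build_model is a placeholder for any scikit-learn-style estimator:

import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.preprocessing import StandardScaler

def cross_validate(X, y, subject_ids, build_model, n_folds=10):
    # X: trials x features, y: trial labels, subject_ids: trial -> subject.
    accs = []
    for train_idx, test_idx in GroupKFold(n_splits=n_folds).split(X, y, groups=subject_ids):
        scaler = StandardScaler().fit(X[train_idx])   # mean/variance from the training folds only
        X_tr = scaler.transform(X[train_idx])
        X_te = scaler.transform(X[test_idx])
        model = build_model()
        model.fit(X_tr, y[train_idx])
        accs.append(model.score(X_te, y[test_idx]))   # single-trial accuracy for this fold
    return float(np.mean(accs))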
Neural network architecture and hyperparameters. Following the notion that a multi-channel EEG signal is the evolution of certain topographies across time 25,29 , TRODNet utilized convolutional layers to learn commonly occurring topographical maps (see Fig. 1B) 27,28 . The present architecture, based on an EEG ConvNet 28 and EEGNet 27 , was expanded to account for multiple conditions in the same input observation. Compared to EEGNet, TRODNet did not contain a convolutional layer providing learned filtering, but split the depthwise convolution across the experimental conditions to extract the topographical maps that best distinguish each condition. TRODNet corresponded in architecture to the shallow ConvNet 28 , with the addition of the by-condition split and with the input limited to time-locked trials (see Fig. 1B). The network had four layers in total (in addition to the input).
• L_input: the input layer. The input tensor is of size B × N × S and is reshaped to B × N × S × 1 before passing to the next layer.
• L_1: the input tensor was split across three separate convolutional filters, each tasked with learning M = 5 maps specific to its condition. The kernel size was set to (64, 1). The output of each of the three sub-layers was of size B × 1 × S × M; the outputs were concatenated along the last dimension before passing to the next layer.
• L_2: a max-pooling layer with a pool size of (1, 10) and a stride of (1, 5).
• L_3: a dense feed-forward layer of size 100.
• L_output: the output layer acted as the label predictor, with softmax activation separating the concussed and control classes.
All layers except L_output used a rectified linear unit (ReLU) activation. L2 regularization was applied to all weights with λ = 0.25. The Adam optimizer was used during training with α = 5 × 10⁻⁴. Training for a single cross-validation iteration was stopped after 500 complete epochs. These hyperparameters were tuned on a separate dataset 10,24 collected using the same EEG/ERP protocol and were not modified throughout training. The code for the training, testing, and visualization procedures is made readily accessible (see Data Availability section).
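A hedged Keras sketch of a TRODNet-like network follows. It adopts one plausible reading of the input layout, namely one "image channel" per experimental condition, so the exact tensor bookkeeping and the trial length S are assumptions rather than the published implementation:

import tensorflow as tf
from tensorflow.keras import Model, layers, regularizers

N, S, M = 64, 512, 5            # channels, samples per trial (assumed), maps per condition
l2 = regularizers.l2(0.25)

inp = layers.Input(shape=(N, S, 3))            # condition-per-channel layout (assumed)
branches = []
for c in range(3):                             # separate spatial filters per condition
    cond = layers.Lambda(lambda t, c=c: t[..., c:c + 1])(inp)
    # kernel (N, 1): each map is a full-scalp topography applied at every time point
    branches.append(layers.Conv2D(M, kernel_size=(N, 1), activation="relu",
                                  kernel_regularizer=l2)(cond))
x = layers.Concatenate(axis=-1)(branches)      # -> (batch, 1, S, 3*M)
x = layers.MaxPooling2D(pool_size=(1, 10), strides=(1, 5))(x)
x = layers.Flatten()(x)
x = layers.Dense(100, activation="relu", kernel_regularizer=l2)(x)
out = layers.Dense(2, activation="softmax", kernel_regularizer=l2)(x)

model = Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])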
Model interpretation. The Deep Learning Important FeaTures (DeepLIFT) 47 implementation using Shapley values 34 was applied post hoc to a model trained on all data in order to explain its decisions on single-subject averages. An overall estimate of each feature's influence on classification was calculated as the mean of the absolute SHAP values over all single-subject averages. The values were overlaid across the head to produce a 64-channel plot as commonly used in EEG/ERP studies. For visual clarity, each experimental condition was plotted independently.
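A hedged sketch of this step using the shap package follows; it assumes the model and tensors from the sketches above, that output index 1 corresponds to the concussed class, and a shap/TensorFlow version pairing for which DeepExplainer supports the model:

import numpy as np
import shap

# Background sample for the DeepLIFT/SHAP expectation baseline.
background = X_train[np.random.choice(len(X_train), 100, replace=False)]
explainer = shap.DeepExplainer(model, background)

# SHAP values are returned per output class; index 1 taken as "concussed" (assumed).
shap_values = explainer.shap_values(X_subject_averages)
importance = np.abs(shap_values[1]).mean(axis=0)  # mean |SHAP| per channel/time feature

# `importance` can then be reshaped to (channels, samples) and rendered at
# selected latencies with mne.viz.plot_topomap, as in Fig. 4.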
Data availability
The input set was imported and formatted using Python MNE 46 package version 0.16.1 running on Python 3.5.2. Cross-validation and scaling were applied using scikit-learn 0.19.1 48 . Deep learning used Tensorflow 49 (v1.8.0). All code is made available at https://github.com/boshra/TRODNet. Statistical analysis was conducted using R statistical software (v3.5.3) and the ez package (v4.4-0). Result storage, correlational plots, and feature importance visualizations were conducted using the pandas (v0.24.1), seaborn (v0.9.0), and Python MNE packages, respectively. The single-trial data used to train the models of this study are available upon request from the corresponding authors (J.F.C. and R.B.). | 2019-11-22T16:21:15.200Z | 2019-11-06T00:00:00.000 | {
"year": 2019,
"sha1": "67973fdc1be2d5b477ebdf872865e13e0cea5874",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41598-019-53751-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67973fdc1be2d5b477ebdf872865e13e0cea5874",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
119088400 | pes2o/s2orc | v3-fos-license | Inflationary $\alpha-$attractors and $F(R)$ gravity
We consider a generic class of the so-called inflationary $\alpha$-attractor models and compute the cosmological observables in the Einstein frame and in the Jordan frame of the corresponding $F(R)$-gravity theory. We find that the two sets coincide (to within errors from the use of the slow-roll approximation) for moderate and large values of the number of e-foldings $N$, which is the novel result of this paper, generalizing previous results on the subject (see e.g. Ref. (\citen{oik1})). We briefly comment on possible generalizations of these results.
Introduction
Inflationary cosmology is the main theoretical description of the Early Universe, in the context of which it was possible to address and solve the main theoretical problems of the Standard Big Bang description of our Universe (see Refs. [1]- [3]). Other theoretical attempts to solve the Early Universe puzzles include the so-called bouncing cosmological models (see Refs. [4]- [10]). In what follows we will compare our results for the cosmological observables, namely the spectral index of primordial curvature perturbations n s and the tensor-to-scalar ratio r, with the Planck observational data. 11 Recently an interesting class of models was discovered in Ref. [12], called the α-attractor models, with the property that the cosmological observables are identical for all members of the α-class in the large-N limit, where N is the number of e-foldings. Subsequently these models were studied more extensively (see Refs. [13]- [23]). The present study follows and generalizes the recent study of Ref. [24].
The above α-attractor inflationary potentials have a large flat plateau for large values of the inflaton scalar field and, in the small-α limit, are asymptotically quite similar to the hybrid inflation scenarios. 27 Well-known inflationary models are special limiting cases of the α-attractor models, such as the Starobinsky model. 28 In this paper we compute the cosmological observables, namely the spectral index of primordial curvature perturbations n s and the tensor-to-scalar ratio r, for a more general class of inflationary potentials with the α-attractor property than those examined in Refs. [24], [12]. This potential was suggested in Ref. 14. More specifically, we compute these cosmological observables in the so-called Einstein frame, 24 where the form of the potential is explicitly given in the action functional. We then find the same cosmological observables 31 in the so-called Jordan frame, where the action functional is that of the corresponding F(R)-gravity theory (see Refs. [32]- [36]), and we explicitly compute the corresponding F(R) gravity theory.
We then compare these two sets of values for the cosmological observables in order to test 24 whether the two frames' descriptions are observationally equivalent. The equivalence of the descriptions in the two frames was explicitly shown in Ref. (25); see however Ref. (26) for an important exception. We find that these two sets of observables coincide (to within computational errors from the use of the so-called slow-roll approximation) for moderate and large values of the number of e-foldings N, a result that generalizes those of Ref. [24] in a novel way. This occurs for reasonable values of the set of parameters of the inflationary potential and for small to moderate α-values. For more references related to inflation in F(R) gravity theories see Refs. [37]- [42].
This paper is organized as follows: In section 2 we present the basic facts about inflationary α-attractor models and compute the corresponding F(R)-gravity theory. In section 3 we compute the cosmological observables in the Einstein frame, and in section 4 in the Jordan frame. Finally, in section 5 we present and discuss the results.
In this paper we assume that the metric of the cosmological spacetime is that of the flat FRW model, ds² = −dt² + a²(t)∑ᵢ(dxⁱ)², where a(t) is the scale factor and H = ȧ/a is the Hubble parameter. We also assume that the connection is a symmetric, metric-compatible and torsion-less affine connection, namely the Levi-Civita connection, for which the corresponding Ricci scalar curvature is R = 6(Ḣ + 2H²). Finally, we use units where κ² := 8πG = 1 and ℏ = c = 1.
Basics of inflationary α-attractors and F(R) gravity
In this section we present the basic discussion of the inflationary α-attractor models, which are classes of inflationary potentials with large flat plateaus, and their relation to the cosmological observables. We consider the F(R) gravity action in the so-called Jordan frame, Eq. (3), where ĝ_μν is the metric in the Jordan frame and R̂ the corresponding Ricci scalar curvature. Introducing the auxiliary scalar field A, one can write this action as Ŝ₁, Eq. (4). 24 By varying the action Ŝ₁ with respect to A we obtain A = R̂, thus verifying the equivalence of the actions (3) and (4). Making the conformal transformation and introducing the canonical transformation corresponding to Eq. (20) of Ref. 24, the action is transformed, Ŝ₁ → S₁, namely to the action in the Einstein frame, Eq. (7). Here the dependence of R̂ on Φ is found by solving the second of Eqs. (6) with respect to A = R̂; from this, and using Eq. (8), we finally obtain the potential in terms of F (see Eq. (24) of Ref. (24)). In our conventions and notation we have −∞ < Φ ≤ 0 for the scalar field. For example, for the Starobinsky model, 28 given in our notational conventions by Eq. (11), we obtain the corresponding F(R) gravity description of Eq. (12). 24 The action of Eq. (7) can also arise from an action in the so-called φ-Jordan frame, 24 with a non-canonically coupled scalar field, Eq. (13); making the transformation of Eq. (14), we again obtain the action of Eq. (7). In the present paper we consider the potential of Eq. (16), defined for −∞ < Φ ≤ 0. This is a generalization of the potential proposed in Fig. 2 of Ref. (12), with the parameter α introduced, which is inversely proportional to the curvature of the inflaton Kähler manifold. 14 The parameters m, n are not necessarily integers. As in the case of the Starobinsky model, the slow-roll regime corresponds to large negative Φ-values. The choice of this potential for our study is based partially on the fact that it is quite generic: it possesses a large horizontal plateau at large negative Φ-values, suitable for slow-roll inflation, and contains many well-known models as limiting cases, for example the Starobinsky model 28 or the Higgs inflationary model. 29 The potential of Eq. (16) is shown in Figs. (1)-(2).
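For concreteness, a standard textbook form of the Starobinsky correspondence invoked in Eqs. (11)-(12) can be sketched as follows; the normalization shown is the common one for κ² = 1 and may differ from the paper's conventions in signs and units:

\begin{align}
  F(R) &= R + \frac{R^{2}}{6M^{2}}, &
  V(\Phi) &= \frac{3M^{2}}{4}\left(1 - e^{\sqrt{2/3}\,\Phi}\right)^{2},
\end{align}

the two being related through $V = \dfrac{F'(A)\,A - F(A)}{2\,F'(A)^{2}}$ with $F'(A) = e^{-\sqrt{2/3}\,\Phi}$ and $\Phi \le 0$.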
Using expansions in z := e^{√(2/3α) Φ} ≪ 1, we obtain the slow-roll form of the potential; the resulting equation will be needed in the analysis that follows in section 5.
In the main part of the analysis that follows we will be concerned with the large-curvature limit (α → 0). Also, although the analysis can be generalized to arbitrary values of m, n, from now on we will focus on the specific subclass of models where m = 2n. This is an exceptional and interesting case that is tractable for analytical treatment. Then A = 0, B = −2n, and Eq. (20) follows.
Cosmological Parameters in the Einstein frame
For the potential of Eq. (16) with m = 2n one can calculate the cosmological observables, namely the spectral index of primordial curvature perturbations n_s and the tensor-to-scalar ratio r, as they occur in the Einstein frame of Eq. (7). These are given by the standard expressions of Eq. (21), n_s = 1 − 6ε + 2η and r = 16ε, with the slow-roll parameters of Eq. (22), ε = (1/2)(V′/V)² and η = V″/V, where primes denote derivatives with respect to Φ. The calculation is exact, following Eqs. (5.2)-(5.4) of Ref. 14: the slow-roll hypothesis is not invoked, only the assumption that inflation ends when ε ≃ 1. We define the number of e-foldings as N = ∫_{Φ_e}^{Φ_i} (V/V′) dΦ, where Φ_i, Φ_e are the initial and final values of the inflaton scalar field. Along with the definitions ξ := tanh(Φ/√(6α)) and g := √(3α)/(2n), we obtain the expression of Eq. (24). On the other hand, from the requirement that inflation ends when ε ≃ 1, the first of Eqs. (22) gives ξ_e = −(1 + g) + √(g(g + 2)) < 0, Eq. (25). Recalling that tanh⁻¹(x) = (1/2) ln|(1 + x)/(1 − x)| and observing that in the limit ξ_i → −1 + 0 the sixth term of Eq. (24) dominates with respect to the fourth and second terms, we obtain Eq. (26). Using Eq. (25) in Eq. (26) and introducing the definition of Eq. (27), we obtain after a lengthy and careful calculation the result of Eq. (28); the cosmological observables are then given by Eq. (21), using Eqs. (28).
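As a consistency benchmark, the universal large-N predictions of the simplest T-model α-attractors can be sketched as below; the generalized potential of Eq. (16) need not reproduce them exactly (indeed, the value r(N = 60) ≃ 7.2 × 10⁻⁴ quoted in section 5 for α = 0.0625, V₀ = 45 differs from this benchmark):

\begin{equation}
  n_{s} \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12\,\alpha}{N^{2}},
\end{equation}

so that, e.g., $N = 60$ gives $n_{s} \approx 0.967$ and, for $\alpha = 0.0625$, $r \approx 2.1 \times 10^{-4}$.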
Cosmological Parameters in the Jordan frame
It can easily be shown that an approximate solution to Eq. (20), valid in the slow-roll regime (F_R ≫ 1), is given by Eq. (29), where Λ is a positive cosmological constant. Varying the action of Eq. (3) we obtain the field equations, Eqs. (30). 32 (Here R is given by Eq. (2) with H = ȧ/a the Hubble parameter, and we drop hats for simplicity, since it is clear that we work in the Jordan frame of Eq. (3).) These reproduce Eq. (31) of Ref. (24) in the proper limit. In the slow-roll regime, where Ḣ ≪ H², we may approximate accordingly. Using Eqs. (29) in the first of Eqs. (30), we obtain after a slightly lengthy calculation the solution of Eq. (32), where H₀, t_k are arbitrary integration constants. We assume that the cosmological constant satisfies Λc₀ = 108H₀²H₁ > 0; then the third and the last three terms of Eq. (32) are equated to zero, which simplifies the solution. Specifically, t_k is taken to be the time of horizon crossing of the comoving wavenumber k = aH. If we assume that the slow-roll regime (and essentially inflation itself) ends when the slow-roll conditions are violated, and define the number of e-foldings of inflation accordingly, we end up with the corresponding expression for N. Thus, finally, the cosmological observables of Eq. (35), as they occur in the Jordan frame, are given by Eqs. (39). Although these equations are similar in form to Eq. (50) of Ref. (24), they differ essentially in that they depend additionally on the parameter α and on the freely specified parameter H₀. Hereafter we choose H₀ = 1. In all the numerical examples below we found H₁ ≤ 0.02 (cf. Eq. (34)) in practically all cases, so that the slow-roll approximation (H₁ ≪ H₀) is indeed satisfied in all of the relevant cases.
Discussion
In this section we compare the observational indices as they occur in the Einstein and Jordan frames, Eqs. (21) and Eqs. (39), respectively. In order to make our results compatible with the observational constraints, we consider Fig. 3. This was easily achieved by choosing α = 0.0625, as noted in the caption of Fig. 3; the target value is shown by the black horizontal line. We also observe that the two curves are practically asymptotic to this value for all values of N ≥ 50 (to within errors due to the use of the slow-roll approximation), namely the two sets of observables coincide. This is the main result of the present paper, generalizing the results of Ref. (24). The curves of Fig. 3, as noted, correspond to the case α = 0.0625, n = 1. In our view this is a novel result, unlike the case of Ref. (24), where the observational indices in the two frames coincide only in the large-N limit and in the small-α limit; in our case this happens for α ≤ 0.1 even for moderate N-values. In the same manner we obtain Fig. 4, where we find that r^(E)(N = 60) ≃ 0.00072038 < 0.10 for the parameter value V₀ = 45. When the value of the α-parameter increases, however (namely for α = 0.125), we obtain the curves of Figs. (5)-(6). Here the observational indices in the two frames do not asymptote to a common value unless the number of e-foldings is larger than N ≥ 300. Therefore the two sets of observational indices, corresponding to the two frames, coincide for practically all relevant values of the number of e-foldings N when α ≤ 0.1 and n ≃ 1. This is also evident from Figs. (7)-(8). It is quite possible, although difficult to ascertain analytically, that without invoking the condition m = 2n in the potential of Eq. (16), equivalence of the two frame descriptions would occur for an even wider range of parameter values.
Regarding the crucial issue of whether these attractors and the observational indices connected with them can be used to distinguish between the two frames (namely the Einstein and Jordan frames), the author considers this a very deep question to which a definite answer cannot easily be given. However, according to the author's viewpoint and relevant work on this subject (see Refs. 26,29,43), in the case considered in this paper, although the two frames are mathematically equivalent, as they are connected by a conformal transformation, there exists a physical non-equivalence of the two frames; using the results obtained here regarding the observational indices, one may be able to distinguish between the two frames.
Since it is quite difficult to obtain a definite answer, 24 for the most generic case, regarding the equivalence of the descriptions of the observational indices in the two frames (Einstein and Jordan frames), it would be interesting to try to check this postulate for more realistic and general inflationary potentials, as those for example suggested by appropriate limits of certain supergravity and/or string theory actions (see Ref. (2) and references therein). Work along these lines is in progress. | 2017-08-26T12:50:02.000Z | 2017-08-26T00:00:00.000 | {
"year": 2017,
"sha1": "884e87c9d6e8c160368e843a390aed1ab43f9c19",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1708.07963",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "884e87c9d6e8c160368e843a390aed1ab43f9c19",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
195805807 | pes2o/s2orc | v3-fos-license | Height and timing of growth spurt during puberty in young people living with vertically acquired HIV in Europe and Thailand
Objective: The aim of this study was to describe growth during puberty in young people with vertically acquired HIV. Design: Pooled data from 12 paediatric HIV cohorts in Europe and Thailand. Methods: One thousand and ninety-four children initiating a nonnucleoside reverse transcriptase inhibitor or boosted protease inhibitor based regimen aged 1–10 years were included. Super Imposition by Translation And Rotation (SITAR) models described growth from age 8 years using three parameters (average height, timing and shape of the growth spurt), dependent on age and height-for-age z-score (HAZ) (WHO references) at antiretroviral therapy (ART) initiation. Multivariate regression explored characteristics associated with these three parameters. Results: At ART initiation, median age and HAZ was 6.4 [interquartile range (IQR): 2.8, 9.0] years and −1.2 (IQR: −2.3 to −0.2), respectively. Median follow-up was 9.1 (IQR: 6.9, 11.4) years. In girls, older age and lower HAZ at ART initiation were independently associated with a growth spurt which occurred 0.41 (95% confidence interval 0.20–0.62) years later in children starting ART age 6 to 10 years compared with 1 to 2 years and 1.50 (1.21–1.78) years later in those starting with HAZ less than −3 compared with HAZ at least −1. Later growth spurts in girls resulted in continued height growth into later adolescence. In boys starting ART with HAZ less than −1, growth spurts were later in children starting ART in the oldest age group, but for HAZ at least −1, there was no association with age. Girls and boys who initiated ART with HAZ at least −1 maintained a similar height to the WHO reference mean. Conclusion: Stunting at ART initiation was associated with later growth spurts in girls. Children with HAZ at least −1 at ART initiation grew in height at the level expected in HIV negative children of a comparable age.
Introduction
Although young people living with HIV are at risk for poor height growth [1], treatment with antiretroviral therapy (ART) improves growth, with strongest gains in those treated at a young age [2]. Although initial catch-up growth on ART has been well described [2], there are fewer data on long-term growth, particularly during adolescence.
Delays in pubertal development have been reported in young people with HIV [3][4][5][6][7], with the onset of puberty [5] and sexual maturation [6] occurring 6 months later compared with HIV-exposed uninfected young people (HEU). Earlier puberty in the general population is associated with being taller and having higher BMI throughout childhood [8], and poor growth in children with HIV has been shown to account for much of the delay in reaching sexual maturity [6]. There is also evidence that children starting ARTwith low height-forage z-scores experience delays in the onset of puberty independently of age at ART initiation [3].
Poor growth during childhood can have implications for future health. Height velocity is associated with increased HIV replication [9] and progression to AIDS and death [10] with the association with death being independent of age, viral load and CD4 þ cell count [11]. The timing of puberty is also inversely associated with bone mass and density among HIV-negative adolescents [12] and delayed puberty may increase future risk of osteoporosis among young people with HIV, who themselves are at risk of poor bone health, either caused by HIV infection itself or prolonged exposure to ART [13]. Early growth failure has also been linked to poorer social and economic outcomes in later life in the general population [14].
In this study, statistical models that describe an individual's growth in terms of mean height throughout adolescence, and timing and shape of the adolescent growth spurt were applied to longitudinal height measurements. The overall aim of this study was to explore the association between characteristics at ART initiation, in particular age and height-for-age z-score, and growth during adolescence.
Materials and methods
Seventeen paediatric HIV cohorts from 15 countries contributed individual level data to the European Pregnancy and Paediatric HIV Cohort Collaboration (EPPICC) between September 2016 and March 2017 using a modified HICDEP protocol (www.hicdep.org). Pseudo-anonymized data on all children at participating clinics were included. All cohorts received approval from local and/or national ethical committees. Five cohorts from three countries (Italy, Ukraine and three from Russia) where height data were not routinely collected (each with <20% of children having a height measurement at ART initiation) were excluded. Children from the remaining 12 cohorts were eligible provided they initiated ART with at least two nucleoside reverse transcriptase inhibitors (NRTIs) along with a nonnucleoside reverse transcriptase inhibitor (NNRTI) or boosted protease inhibitor (bPI); were 1-10 years old at ART initiation; not known to have horizontally acquired HIV; and aged at least 8 years at the end of follow-up. We excluded children initiating ART after age 11 years. For those initiating ART at an older age, it would be difficult to distinguish between changes in growth occurring as a result of a pubertal growth spurt and as a result of initiating ART. Children with no height recorded at ART initiation and/or after 8 years of age were excluded.
Height measurements were censored at the earliest of 19th birthday, transfer to adult care, death or loss to follow-up. Height and BMI were converted to height-for-age z-scores (HAZ) and BMI-for-age z-scores (zBMI), using the WHO Growth Standard for measurements when children were aged under 5 years [15] and the WHO 2007 growth reference when aged 5-18 years [15,16]. Data checks were carried out to detect implausible changes in height and/or HAZ. HAZ was categorized according to WHO definitions as less than −3 SD (severe stunting); −3 to less than −2 SD (stunting); −2 to less than −1 SD; and at least −1 SD. zBMI was categorized as less than −2 SD (underweight); −2 to 1 SD (normal); more than 1 to 2 SD (overweight); and more than 2 SD (obese). HAZ and zBMI nearest to ART initiation (closest within 6 months before to 1 month after) were considered baseline measurements.
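The stated cut-offs translate directly into small helper functions; a minimal sketch (function names illustrative):

def haz_category(haz):
    # WHO height-for-age z-score categories used in this analysis.
    if haz < -3:
        return "severe stunting"
    elif haz < -2:
        return "stunting"
    elif haz < -1:
        return "-2 to < -1 SD"
    return ">= -1 SD"

def zbmi_category(zbmi):
    # WHO BMI-for-age z-score categories.
    if zbmi < -2:
        return "underweight"
    elif zbmi <= 1:
        return "normal"
    elif zbmi <= 2:
        return "overweight"
    return "obese"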
Statistical analysis
Characteristics at ART initiation were summarized by HAZ category. Mean height at age 16 years was summarized by age and HAZ at ART initiation and compared with the WHO reference height to quantify differences in height following the growth spurt. It was not possible to assess differences in final height, as many adolescents transfer to adult care from age 16 years, ending follow-up in EPPICC.
Height was modelled using Super Imposition by Translation And Rotation (SITAR) models [18]. SITAR was developed to model growth during childhood and adolescence and quantifies differences in growth via three parameters representing the timing and shape of the adolescent growth spurt, as well as average height. The models can explain up to 99% of the variation between individuals' growth [18] and can be summarized as y_it = a_i + h((t − b_i)/exp(−c_i)), where the outcome y_it is the height of individual i at age t and h(·) is a natural cubic spline of height over age. The parameters a_i, b_i and c_i are participant-specific random effects. a_i represents average height throughout adolescence; negative values indicate shorter height overall. b_i represents timing of the pubertal growth spurt; negative values indicate earlier puberty. c_i represents growth velocity, or the shape of the growth spurt; positive values indicate shorter growth spurts and a steeper growth velocity curve, while negative values indicate the growth spurt occurs over a longer duration. Corresponding growth velocity curves can also be estimated as the first derivative of the modelled growth (height) curve.
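A toy numerical illustration of how the three random effects transform a shared curve is sketched below; the spline knots are invented for illustration and are not the fitted EPPICC curve:

import numpy as np
from scipy.interpolate import CubicSpline

# Population mean height curve h(t) through illustrative (age, height) knots.
knots_t = np.array([8, 10, 12, 13, 14, 16, 18])
knots_h = np.array([128.0, 138.0, 150.0, 157.0, 163.0, 168.0, 170.0])
h = CubicSpline(knots_t, knots_h)

def sitar_curve(t, a, b, c):
    # a shifts height (size); b shifts age (tempo: b > 0 means a later spurt);
    # c rescales age (velocity: c > 0 compresses and steepens the spurt).
    return a + h((t - b) * np.exp(c))

t = np.linspace(8, 18, 101)
shorter_later = sitar_curve(t, a=-4.0, b=1.5, c=0.0)  # 4 cm shorter, spurt 1.5 y later
velocity_mean = h(t, 1)  # first derivative of the mean curve = mean height velocity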
Age at peak height velocity (APHV) is correlated with timing of puberty and often used as a proxy for timing of maturation. It commonly occurs in girls in Tanner stage 2 or 3 and in Tanner stage 3 or 4 for boys [19,20], though there is variation in timing across Tanner stages [19]. Differences in the timing of the growth spurt estimated using SITAR models have been shown to be highly correlated with APHV [18].
All height measurements (in cm) from age 8 years (or from the start of ART if after the 8th birthday) to 18 years were included. Age and HAZ at ART initiation were added to the SITAR model as fixed effects that could influence the means of a, b and c. Thus, the estimated random effects a_i, b_i and c_i represent the individual differences in average height, timing and shape of the growth spurt not associated with differences in age or height at ART initiation. Models were fitted separately to boys and girls using a spline with 6 degrees of freedom. Log transformations of both age and height [18] were considered, but the untransformed data provided the best fit. Interactions between baseline height and age were added where appropriate [model comparison was carried out using the Bayesian Information Criterion (BIC)].
To explore other factors (sex, country, initial ART regimen, WHO immunological classification, zBMI at ART initiation) associated with growth after allowing for differences in baseline age and height, the estimated a i , b i and c i random effects from the SITAR model were analysed using multivariable linear regression. Interactions between each of the factors and sex and between immunological classification and HAZ and age at ART initiation were considered. A second model was fitted including zBMI at age 8 years instead of at ART initiation.
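As a hedged sketch of this second-stage regression (the data frame and column names below are synthetic stand-ins; the study's own analysis was run in R):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({                      # synthetic per-child table (illustrative only)
    "b": rng.normal(size=n),             # SITAR timing random effect b_i
    "sex": rng.choice(["girl", "boy"], n),
    "country": rng.choice(["Thailand", "UK/Ireland", "Spain"], n),
    "regimen": rng.choice(["NNRTI", "bPI"], n),
    "who_immuno": rng.choice(["none", "mild", "advanced", "severe"], n),
    "zbmi_baseline": rng.normal(size=n),
})

# Multivariable linear regression of the timing effect on the covariates;
# interactions can be screened analogously, e.g. "b ~ C(sex)*C(country) + ...".
model_b = smf.ols("b ~ C(sex) + C(country) + C(regimen) + C(who_immuno) + zbmi_baseline",
                  data=df).fit()
print(model_b.summary())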
Modelling was repeated in countries where more than 5% of children were born abroad and more than 5% born in the country (UK and Ireland, Spain and Netherlands) to explore differences between those born abroad and those born in the cohort country. Three sensitivity analyses were carried out: in the first separate models were fitted for children from Thailand and elsewhere; in the second Thai-specific growth reference data were used for Thai children [21]; and in the third children starting ART after their eighth birthday were excluded.
Patient characteristics
In total, 1943 young people with HIV initiated ART on an eligible regimen aged 1-10 years and were at least 8 years old at the end of follow-up (Fig. 1). After excluding those with missing baseline height (n = 721) and/or height after age 8 years (n = 202), we included 1094 children in the analysis. Children excluded due to missing height data were more likely to be from countries other than Thailand or UK/Ireland, be born abroad and be younger at ART initiation than those who were included (Supplementary Table 1, http://links.lww.com/QAD/B501). At ART initiation, median HAZ was −1.2 (−2.3, −0.2) and age was 6.4 (2.8, 9.0) years. Characteristics of children at ART initiation, stratified by baseline HAZ, are described in Table 1. More severe stunting was associated with residence in Thailand, not being born abroad, initiating on an NNRTI-based regimen, earlier calendar year of ART initiation, higher viral load, more severe immunodeficiency and lower zBMI at ART initiation.
At the end of the study, 493 (45%) children had reached their 16th birthday while still in paediatric care (Fig. 1), of whom 463 (94%) had their height recorded within 6 months of their birthday. Children who survived to age 16 years but were no longer in follow-up in paediatric care were more likely to reside in Thailand and start ART at a younger age. At age 16 years, the mean (standard deviation) heights of boys and girls were 166 (8.7) cm and 158 (6.9) cm, respectively, significantly shorter than the WHO reference mean height of 173 (7.8) cm for boys and 163 (6.8) cm for girls (both P < 0.001) (Supplementary Table 2, http://links.lww.com/QAD/B501).
Associations between age and height-for-age z-score at antiretroviral therapy initiation and growth from age 8 years. Results from the SITAR models are available in Supplementary Table 3, http://links.lww.com/QAD/B501. Estimated mean height and corresponding growth velocity curves stratified by HAZ and by age are summarized in Fig. 2a and b, respectively, for girls and Fig. 3a and b for boys.
In girls, across each of the baseline HAZ groups (Fig. 2ai-iv), children starting ART in the oldest age group had growth spurts on average 0.41 [95% confidence interval (95% CI) 0.20-0.62] years later than those starting ART in the youngest age group. Across the baseline age groups (Fig. 2bi-iii), girls starting ART with low HAZ had later growth spurts; there was a 1.50 (1.21-1.78) year delay in those with baseline HAZ less than −3 compared with baseline HAZ at least −1. The effect of this delay on overall height can be seen in Fig. 2biv-vi; the differences in height are smaller from age 16 onwards (after the growth spurt) than at age 8 years.
In boys, the association between baseline age and the timing of the growth spurt differed by baseline HAZ (Fig. 3ai-iv); there was no significant difference by age in boys who started ART with HAZ at least −1 (Fig. 3ai). In boys with baseline HAZ of −2 to less than −1 (Fig. 3aii), the growth spurt was 0.96 (0.19-1.72) years later in those starting ART in the oldest compared with the youngest age group. Similarly, for a baseline HAZ of −3 to less than −2 (Fig. 3aiii), the corresponding delay in those starting ART in the oldest age group was 0.92 (0.17-1.66) years, and for baseline HAZ less than −3, it was 0.42 (−0.32 to 1.16) years (Fig. 3aiv). The timing of the growth spurt in boys did not differ significantly by baseline HAZ (Fig. 3bi-iii).
Girls (Fig. 2bv) and boys (Fig. 3bv) who started treatment with a baseline HAZ at least −1 maintained a similar mean height to the WHO reference, regardless of baseline age.
Other factors associated with growth from age 8 years. Characteristics associated with variations in growth that remained after adjusting for differences in baseline HAZ and age are summarized in Table 2. Young people from Thailand were smaller throughout adolescence than those from other countries but did not differ in the timing of the growth spurt. The shape of the growth spurt differed by country and was shorter in children from the UK and Ireland than elsewhere. Lower zBMI at ART initiation was significantly associated with a later growth spurt [a one SD decrease was associated with a 0.07 (0.02-0.11) year delay in the growth spurt]. In a second model (data not shown), a one SD decrease in zBMI at age 8 years was associated with a 0.16 (0.09-0.22) year delay in the timing of the growth spurt, while other parameters did not change substantially.
There was no evidence of any interactions.
In subgroup analysis (n = 545), there was a significant interaction between sex and being born abroad on the timing of the growth spurt (P = 0.038). Girls born abroad experienced a growth spurt 0.24 (0.02-0.46) years earlier than those born in the cohort country, although there was no association in boys. However, after adjusting for zBMI at age 8 years, the association was no longer significant [the growth spurt for girls born abroad was 0.18 (−0.05 to 0.42) years earlier].
In the three sensitivity analyses wherein models were fitted separately to children from Thailand and elsewhere, Thai-specific reference data were used for Thai children and children starting ART age at least 8 years were excluded, overall conclusions were unchanged (data not shown).
Table 1. Characteristics of 1094 young people living with HIV at antiretroviral therapy initiation, stratified by height-for-age z-scores.
Discussion
In this study, we described growth throughout adolescence in a large cohort of young people with vertically acquired HIV in Europe and Thailand. Although all adolescents in the study initiated ART before age 11 years, growth deficits remained throughout adolescence. Only children with HAZ at least −1 when starting ART were able to achieve a similar height to the WHO reference at age 16 years, suggesting that for others, the catch-up growth associated with long-term ART was not sufficient to restore height to what would be expected in an HIV-negative population.
We observed an association between older age at ART initiation and later growth spurts in boys (with HAZ less than −1 at ART initiation) and girls, in line with findings from the Antiretroviral Research for Watoto (ARROW) trial, wherein attainment of each Tanner stage and the onset of menarche were delayed in those starting ART at older ages [3]. We also observed an association between stunting and later growth spurts, but only in girls. The potential role of anthropometric parameters in early childhood on growth during puberty was highlighted in a study of 2539 young people with vertically acquired HIV and HIV-exposed uninfected (HEU) young people from the USA [6]. Young people living with HIV reached sexual maturity on average 6 months later than the HEU group, but differences in HAZ prior to puberty accounted for up to 98% of the delay in boys and (together with zBMI) 74% in girls, suggesting much of the delay may be attributable to earlier poor growth [6]. Low HAZ at ART initiation was also associated with delayed attainment of all Tanner stages in boys and girls, and of menarche in girls, independently of age at ART initiation in the ARROW trial [3]. However, in boys, the delay was reduced in those who had the greatest initial gains in CD4+ cell count after starting ART, but there was no similar association in girls. Undernutrition early in life was also found to have a stronger association with adult height in women than in men in the Netherlands [23]. Although this suggests that girls may be more sensitive to impairments early in life and prior to ART, the mechanism underlying potential sex differences remains to be explained.
After accounting for HAZ and age at ART initiation, we found no association between WHO immunological status or viral load at ART initiation and growth. Similarly, the ARROW trial found that immune suppression prior to ART was not associated with delayed puberty or menarche [3]. Other studies have also reported a lack of association between clinical status at the start of puberty and age at onset [4,7]. However, in young people in the USA, low CD4+ cell count and high viral load at first pubertal assessment were associated with later pubertal onset. Among boys, prior CDC class C, low nadir CD4% or high peak viral load were also associated with later puberty [5]. However, many of these young people initiated ART on mono- or dual therapy and are likely to have had substantially different treatment histories compared with our study.
We found zBMI at ART initiation and at age 8 years to be associated with the timing of the growth spurt, with no evidence of a difference between boys and girls. We also observed that girls born abroad experienced an earlier pubertal growth spurt than girls born in the cohort country, but the difference in the timing of the growth spurt reduced after adjusting for zBMI at age 8 years. In girls, a relationship between low BMI and delayed puberty has been found in multiple studies [8], and rapid weight gain prior to puberty has also been linked to early onset [24]. Differences between young people born abroad and those born in the country may therefore be explained by periods of more rapid weight gain in children arriving from abroad, the majority from Africa, compared with those born in the country.
This study had several limitations; as with all observational studies, our findings on the association between age and HAZ at ART initiation and growth should not be over-interpreted or assumed to be causal. At ART initiation, stunting was strongly correlated with immunosuppression, viral load and zBMI, and may be a marker for poor immunological status and other impairments. Children starting ART at older ages represent a group who have survived without treatment, possibly with limited access to care, and so may be subject to a survivor bias. Had ART initiation been delayed in those who started at a young age, the observed delay in the growth spurt associated with starting ART at an older age might have been smaller in this group, who would also have been more likely to have access to healthcare and regular monitoring. Nonetheless, the findings provide insight into growth patterns among children presenting to care and starting ART at different ages.
The inclusion criteria applied also introduce the potential for selection bias. We excluded children with missing height data. Multiple imputation was not possible, as other data, such as immunological and virological status at ART initiation, likely to be strong predictors of baseline height, were missing in more than half of the children with missing heights. We excluded young people from Russia, Ukraine and Italy, where height data were not routinely recorded. Further, the cohorts included in EPPICC range from national coverage to single city hospitals, leading to potential bias where children treated in large city hospitals are not representative of others in the country. Our analyses were restricted to children aged 1-10 years at ART initiation. The number of infants initiating ART under age 1 year was small, with high rates of missing baseline data. A further limitation is the lack of quantitative measures of pubertal status such as Tanner stage and date of onset of menarche, which are not routinely collected by the majority of participating cohorts. However, differences in timing of the growth spurt are likely to be indicative of differences in the timing of onset of puberty.
Finally, we used the WHO growth standard [15] and growth reference [16] to derive z-scores at ART initiation. Although the WHO growth standards were developed to assess growth globally, children from Thailand were significantly shorter than those residing in Europe and the WHO reference may overestimate stunting as compared to Thailand's own national growth reference [25]. However, in sensitivity analyses, using Thai reference data, we did not find any difference in the associations between baseline HAZ and growth during adolescence.
Despite these limitations, the study has several strengths. The collaborative nature of the study provides a rich source of longitudinal height measurements from a large sample of young people living with HIV followed during childhood and adolescence, and the use of SITAR models provides insight into growth during puberty in the absence of quantitative measures of pubertal status.
In summary, we have shown that children who initiate ART at younger ages are taller. Children who initiated ART with a 'normal' height-for-age z-score (HAZ ≥ −1) remained at a 'normal' height throughout adolescence. Those who initiated ART stunted or severely stunted were less likely to achieve 'normal' height. We also demonstrated that in girls, regardless of age at ART initiation, stunting at the time of initiation was associated with a later pubertal growth spurt, and this continued growth into later adolescence may allow those most severely stunted to catch up somewhat. However, longer-term follow-up is required to understand the potential implications of delayed pubertal growth on outcomes in later life.

Table 2. Association between characteristics at antiretroviral therapy initiation and average height, timing and shape of growth spurt after adjustment for baseline age and height-for-age z-score in 918 young people living with HIV. Individual size, tempo and velocity parameters were estimated using the SITAR model described in the results and table S1, http://links.lww.com/QAD/B501, and represent the differences in size, tempo and velocity unexplained by age and HAZ at ART initiation. The model included data from 918 of the 1094 children included in the SITAR model for which data on the explanatory variables were complete. CI, confidence interval; NNRTI, nonnucleoside reverse transcriptase inhibitor; PI, protease inhibitor; zBMI, BMI-for-age z-scores. | 2019-07-06T13:05:09.235Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "2bd287e6cb40339b9934a18ca23d72997e132c4b",
"oa_license": "CCBY",
"oa_url": "https://journals.lww.com/aidsonline/Fulltext/2019/10010/Height_and_timing_of_growth_spurt_during_puberty.10.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "64787b511bf172be38f5f562c5b01b32f8fe8284",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248583199 | pes2o/s2orc | v3-fos-license | Introducing a Pole Concept for Nodule Growth in the Thyroid Gland: Taller-than-Wide Shape, Frequency, Location and Risk of Malignancy of Thyroid Nodules in an Area with Iodine Deficiency
Purpose: (i) To examine the criterion taller-than-wide (TTW) for the sonographic assessment of thyroid nodules in areas of iodine deficiency in terms of frequency, anatomical distribution within the thyroid gland and risk of malignancy. (ii) To develop a model for nodule growth in the thyroid gland. Methods: German multicenter study consisting of two parts. In the prospective part, thyroid nodules were sonographically measured in all three dimensions, and their location within the thyroid gland and contact with a protrusion-like formation (horn) in the dorsal part of the thyroid gland were noted. In addition, further sonographic features such as the composition, echogenicity, margins and calcifications were investigated. All nodules from the prospective part were assessed for malignancy as part of clinical routine at the decision of the treating physician adhering to institutionally based algorithms. In the retrospective part, only nodules with fine needle aspiration and/or histology were included. The risk of malignancy in TTW nodules was determined by correlating them with cytological and histological results. Results: Prospective part: out of 441 consecutively evaluated thyroid nodules, 6 were found to be malignant (1.4%, 95% CI 0.6–2.7%). Among the 74 TTW nodules (17%), 1 was malignant (1%, 95% CI 0–4%). TTW nodules were more often located in the dorsal half of the thyroid than non-TTW nodules (factor 2.3, p = 0.01, 95% CI 2.1–2.5) and more often located in close proximity to a horn than non-TTW nodules (factor 3.0, p = 0.01, 95% CI 2.4–3.8). Retrospective part: out of 1315 histologically and/or cytologically confirmed thyroid nodules, 163 TTW nodules were retrieved and retrospectively analyzed. A TTW nodule was 1.7 times more often benign when it was dorsal (95% CI 1.1–2.5) and 2.5 times more often benign when it was associated with a horn (95% CI 1.2–5.3). The overall probability of malignancy for TTW nodules was 38% (95% CI 30–46%) in this highly preselected patient group. Conclusion: TTW nodules are common in iodine deficient areas. They are often located in the dorsal half of the thyroid gland and are frequently associated with a dorsal protrusion-like formation (horn) of the thyroid. Obviously, the shape of benign nodules follows distinct anatomical preconditions within the thyroid gland. The frequency of TTW nodules and their predominant benignity can be explained by a pole concept of goiter growth. The difference between the low malignancy risk of TTW nodules found on a prospective basis and the high risk found retrospectively may be the result of a positive preselection in the latter.
Introduction
Iodine deficiency is an important risk factor in the development of nodular thyroid disease [1]. More than 30% of the German population suffer from mild to moderate iodine deficiency. Although substantial progress has been made in recent decades in eliminating iodine deficiency, functional thyroid disorders and goiter are still prevalent [2]. The prevalence of thyroid nodules ranges from 12.5% in young men to over 80% in older women [3]. The clinical challenge is to reliably detect malignant nodules while avoiding unnecessary interventions for benign lesions [4].
High-resolution ultrasound (US) is the most important imaging modality for the characterization of thyroid nodules. Different research groups developed US-based tools for stratifying the risk of malignancy of thyroid nodules using a combination of suspicious ultrasound features. In 2009, the terminology of the "Thyroid Imaging Reporting and Data System" (TIRADS) was introduced [5], based on the "Breast Imaging Reporting and Data System" (BIRADS), which has been established for breast tumors for many years. Whereas some US criteria for malignancy in thyroid nodules were already known to the community, such as hypoechogenicity and irregular margins, others were newly introduced, such as a taller-than-wide (TTW) shape [5][6][7][8][9][10][11].
Although not synonymous, TTW shape is also referred to as non-parallel orientation. This term reflects the concept of TTW growth as a sign of malignancy. It implies that malignancies within the thyroid gland tend to grow against the given orientation of the thyroid gland axis, which runs along the long axis of the gland. In contrast, benign nodules appear to respect this orientation by growing parallel with respect to the long axis [20].
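In practice, this criterion is derived from the measured nodule diameters. The short Python sketch below shows one way to encode the classification, assuming the common convention that a nodule is TTW when its anteroposterior (depth) diameter exceeds its transverse (width) diameter; the class and field names are ours, not taken from the study.

```python
# Minimal sketch of taller-than-wide (TTW) classification from the three
# measured diameters. Convention assumed: TTW when the anteroposterior
# (depth) diameter exceeds the transverse (width) diameter.
from dataclasses import dataclass

@dataclass
class Nodule:
    width_mm: float    # transverse diameter
    depth_mm: float    # anteroposterior (sagittal) diameter
    length_mm: float   # craniocaudal diameter

    def is_ttw(self) -> bool:
        return self.depth_mm > self.width_mm

    def aspect_ratio(self) -> float:
        """Depth/width ratio; values > 1 indicate non-parallel orientation."""
        return self.depth_mm / self.width_mm

nodule = Nodule(width_mm=11.0, depth_mm=14.0, length_mm=12.0)
print(nodule.is_ttw(), round(nodule.aspect_ratio(), 2))  # True 1.27
```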
In our study we extended this concept of parallelism to include a known protrusion of the thyroid gland towards the back, named Zuckerkandl's tubercle. Throughout this article, Zuckerkandl's tubercle is referred to as the posterior horn, as in its first descriptions [21,22]. Along with the development of goiter, the posterior horn is known to be involved in overall thyroid growth and to harbor thyroid nodules [23]. Our aim was to investigate whether this protrusion (which by its nature is TTW) alters the growth pattern of benign nodules in this region, such that benign nodules assume a TTW shape. A TTW shape of benign nodules at such a location, however, would still be parallel with regard to the protrusion.
On US, the posterior horn is hard to see, as it is often hidden behind the trachea [24,25]. On the other hand, particularly in goitrous growth, a dorsocaudal protrusion of each thyroid lobe may be seen. This extension has no proper name in the literature. It is indirectly referenced as the "cleft sign" in autoimmune thyroid disease, meaning the fibrous duplicature of the thyroid capsule developing between this dorsocaudal extension and the back surface of the lower thyroid pole [26]. It is unclear to the authors if such a dorsocaudal protrusion may be considered a variant of the posterior horn. To make a distinction, this dorsocaudal protrusion is herewith introduced as the posteroinferior horn.
Throughout this paper, any extremity of the thyroid gland, i.e., the upper and lower end of each thyroid lobe, the thyroid isthmus and any form of horn, is referred to as a pole. Such poles are the basis for a concept for nodule growth to be developed in Section 4 based on the results of the study. As a hypothesis, we assume the configuration of thyroid poles to channel the shape of nodules. In this context, in an area of iodine deficiency, we investigated the following questions regarding TTW configuration:

1. How frequent are TTW nodules?
2. Where are TTW nodules located within the thyroid gland?
3. What is the risk for malignancy in TTW nodules?
4. Finally, a pole concept for nodule growth is developed.
Materials and Methods
The multicentric data collection was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the Medical Faculty of the University Hospital of Duisburg-Essen, Germany (protocol code: 16-7022-BO, 04-AUG-2016, date of approval at 4 August 2016).
This study is divided into two parts. The first part is prospective in nature; 441 consecutive thyroid nodules with a minimum diameter of ≥7 mm in each direction were examined for TTW status. Only initial presentations (first US examinations) were considered. The patients were recruited from six centers distributed across Germany, which all belong to the "German TIRADS Study Group" (GTSG, www.tirads.de, accessed on 2 April 2022). Experienced examiners measured the nodules in all three spatial dimensions using US.
The location of each TTW nodule within the thyroid gland was described in three dimensions (Figure 1a-c).
Using a uniform Excel file, investigators were instructed to assign each nodule to a particular location according to the center of the nodule in the respective dimension. The position in which the largest part of the nodule was located was decisive for specifying the nodule location. In cases of doubt, double assignment in each dimension was allowed, e.g., cranio-central. When a nodule was assigned to the thyroid isthmus, no further assignments in the craniocaudal and ventrodorsal dimension were recorded.
In addition, it was noted if the thyroid nodule had contact to a horn at the posterior margin of the thyroid gland. We assumed a horn if the back surface line of a thyroid lobe appeared to be interrupted by an antiparallel protrusion of the thyroid tissue backwards or backwards and downwards. The backward protrusion was named a posterior horn according to literature. In analogy, a protrusion backward and downward was named a posteroinferior horn (Figures 2 and 3). For further analysis and to simplify, both forms were often summarized as horn. We defined a nodule to have contact to a horn, if (i) such protrusion was recognizable on the images and (ii) the nodule extended into such horn, i.e., the dorsal contour of the nodule protruded from the back part of the thyroid gland.
The second part of the study was retrospective. It was conducted in order to assess the risk of malignancy in TTW nodules depending on their location within the thyroid gland. In order to achieve this aim, TTW nodules were retrieved from databases of six cooperating diagnostic and therapeutic thyroid centers (GTSG). These databases are continuously maintained (updated), contain US image data as well as information on the histological or cytological diagnosis, and have been previously used for the evaluation of TIRADS criteria in Germany [3,27,28]. The databases are preselected in that only nodules with cytological and/or histological confirmation were included and autonomously functioning thyroid nodules were excluded. The data already assessed (among which width, depth and length of each nodule in mm) were re-used, and the corresponding images stored in the image archival system (PACS, picture archiving and computing system) were re-evaluated regarding the location of nodules within the thyroid gland.
Statistical Analysis
All statistical tests were performed using the χ² test and the Mann-Whitney U test. Confidence intervals were calculated using standard formulae. Results were considered significant at p < 0.05. The statistical software used was WinStat2012.1 for Excel 2019.
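As a worked example of the χ² testing used here, the laterality of the posterior horn reported below in the Anatomy section (74 right vs 40 left lobes) can be checked against an even split with a goodness-of-fit test; that this exact variant of the χ² test was used is our assumption.

```python
# Sketch: chi-squared goodness-of-fit test for posterior horn laterality
# (74 right vs 40 left lobes; see Anatomy in the Results), against the
# null hypothesis of an even 50:50 split.
from scipy.stats import chisquare

stat, p = chisquare([74, 40])  # expected counts default to a uniform split
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # chi2 ~ 10.1, p ~ 0.001 (< 0.01)
```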
Patients
Prospective part: We examined 316 patients with newly diagnosed thyroid nodules, i.e., 632 thyroid lobes, and 441 nodules ≥ 7 mm were found (1.4 nodules per patient). In 40 nodules (9.1%), fine-needle aspiration cytology (FNAC) was performed as part of clinical routine; cytology was benign in 33 patients and unclear (Bethesda 3 or 4) in 7 patients. Five of these seven patients underwent thyroid surgery, which revealed two differentiated thyroid carcinomas. One patient with benign FNAC also underwent surgery, revealing benign histology. In addition, 25 patients without FNAC were operated on, revealing four more thyroid carcinomas. Overall, 6 carcinomas were found among 441 nodules (1.4%, 95% CI 0.6-2.7%): 2 microcarcinomas, 1 papillary thyroid carcinoma, 1 follicular thyroid carcinoma and 2 medullary thyroid carcinomas, the latter both in the same patient.
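The reported interval for the 1.4% malignancy rate depends on which "standard formula" was applied. The sketch below computes two common binomial intervals for 6 events in 441 nodules; both land close to, though not exactly on, the published 0.6-2.7%.

```python
# Sketch: 95% confidence intervals for the prospective malignancy rate
# (6 carcinomas in 441 nodules). Different interval methods give
# slightly different bounds, all close to the published 0.6-2.7%.
from statsmodels.stats.proportion import proportion_confint

k, n = 6, 441
for method in ("wilson", "beta"):  # "beta" = exact Clopper-Pearson
    lo, hi = proportion_confint(k, n, alpha=0.05, method=method)
    print(f"{method:>6}: {100*k/n:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")
```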
Retrospective part: From a retrospective data set with 1315 histologically and/or cytologically clarified thyroid nodules, 163 TTW nodules were retrieved and assessed with regard to the position within the thyroid gland and correlated with their risk of malignancy. In total, 62 TTW nodules were malignant; the malignancy rate was 38% (95% CI 30-46%).
Anatomy
In the 632 thyroid lobes examined prospectively, 197 horns were found, including 114 posterior horns and 83 posteroinferior horns. In 31 thyroid glands, there was at least one posterior horn and one posteroinferior horn. A posterior horn was located in 74 cases in the right lobe compared to 40 cases in the left lobe (p < 0.01) and a posteroinferior horn was located in 43 cases in the right lobe and 39 cases in the left lobe (p = 0.68).
Location of Thyroid Nodules in Thyroid Gland
TTW nodules were more often located dorsally than non-dorsally compared to non-TTW nodules (relative ratio 2.3 with a 95% CI 2.1 to 2.5; Table 1). Additionally, TTW nodules were more often associated with a horn than they were outside a horn compared to non-TTW nodules (relative ratio 3.0 with a 95% CI 2.4 to 3.8; Table 2, Figure 5).
Figure 5.
Huge TTW nodule extending into a prominent posterior horn (arrowheads). Note the thyroid parenchyma extending along the cranial portion of the nodule (arrowheads) but not along the caudal portion (arrow) arguing for a pre-existing posterior horn. A pre-existing posterior horn may have channeled the way for nodule growth causing its taller than wide shape. The nodule was benign at cytology.
The following figure (Figure 6) gives the frequency and configuration of a typical benign nodule (excluding the six carcinomas) from the prospective part of the study with regard to its location within the thyroid gland.
Table 2. TTW nodules and non-TTW nodules in association with a horn.
Figure 6 (a,b). The green circles represent a typical benign nodule in the respective part of the thyroid gland. The size of each circle represents the frequency of nodules in the respective part (all parts together sum up to 100%, apart from rounding differences). The shape represents the typical relation of the sagittal diameter (tall) to the horizontal diameter (wide) of a typical nodule. Note that nodules at or in a horn are typically round (tall = wide), whereas nodules in other locations are elliptic (tall < wide), in particular in the thyroid isthmus (tall << wide). * Frequencies at the cranial, central, and caudal portions were 13%, 41%, and 25%, respectively. Configuration did not differ between these three locations, for which reason they are given as one circle. $ including nodules at or in a horn.
Location and Risk of Malignancy for TTW Nodules (Part 2-Retrospective Part)
Retrospective part: From the retrospective data set, 163 TTW nodules were retrieved and assessed with regard to the position within the thyroid gland and correlated with their risk of malignancy. The results showed that a TTW nodule is 1.7 times more often benign when it is dorsal (95% CI 1.1-2.5; Table 4) and 2.5 times more often benign when associated with a horn (95% CI 1.2-5.3; Table 5). In this highly selected patient group (only histologically/cytologically clarified nodules), the overall probability of malignancy for a TTW nodule was 38% (95% CI 30-46%). Malignant TTW nodules more often showed sonographic risk factors in terms of composition, margin, and calcification (but not echogenicity) than benign TTW nodules (Table 6).

Table 4. Association between location and malignancy rate of TTW nodules.
Table 5. Relation between association to a horn and malignancy rate of TTW nodules.
Discussion
In an iodine-deficient area endemic for goiter, such as Germany, in a prospective approach, TTW nodules are a frequent finding at first presentation of a patient, with a rate of 17%. In such a "real-life" setting, TTW nodules were associated with a low risk of malignancy of around 1%. We observed that TTW nodules were more frequently located dorsally in the thyroid gland, and more often had contact to or were found in a protrusion (which we called the posterior horn or posteroinferior horn), than non-TTW nodules. Retrospectively, our data demonstrated that TTW nodules were 1.7 times more often benign when located dorsally than non-dorsally and 2.5 times more often benign when growing at or in a horn than without a horn. The overall probability of malignancy of TTW nodules in that retrospectively analyzed and highly preselected patient group was as high as 38%, which is in sharp contrast to the low number of malignant TTW nodules found in the prospective part of our study.
The presence of a horn at the back face of a lobe, either strictly posterior or posteroinferior, was noted in roughly one-third of thyroid lobes. A posterior horn was more often found on the right side than on the left side, in good accordance with literature results on Zuckerkandl's tubercle [24,25]. Most studies described the incidence of Zuckerkandl's tubercle as ranging from 59 to 87% [29][30][31][32][33], but there is one report of a very low incidence of 7% [34]. Most investigators have detected Zuckerkandl's tubercle more frequently in the right thyroid lobe [29][30][31][32][33]. Won et al. also observed Zuckerkandl's tubercle more frequently (nearly twice as often) on the right lobe compared with the left lobe [33]. The posteroinferior form, however, is not known from previously published data. Notably, in this study, it was found as often on the right side as on the left side. This may indicate a somewhat different provenance than Zuckerkandl's tubercle and argues against the hypothesis of both forms being variants of the same origin.
In our prospective analysis, TTW nodules had a malignancy risk of 1% (CI 0-4%, see above), which does not justify FNAC based on this single US feature. However, other studies suggest a malignancy probability as high as 71% for TTW nodules [5,7]. Of note, in those studies, performed in countries without iodine deficiency and in specialized centers, the a priori malignancy rate of thyroid nodules was fairly high, about 15% (Kwak et al.: 17%; Horvath et al.: 14%), which is substantially higher than in a primary care setting. In Germany, in a primary care setting, thyroid nodule malignancy rates are as low as 0.1 to 1.0 per cent [35,36]. The striking differences in the malignancy risk of TTW nodules can therefore be attributed to a positive preselection in those studies, i.e., the statistical dependence of the positive predictive value on the prevalence (theorem of Bayes). The absolute malignancy risks reported for TTW nodules in those studies clearly cannot be translated into primary care. In the retrospective part of our study on histologically and/or cytologically confirmed nodules, the high overall malignancy rate for TTW nodules (38%) can also be attributed to a positive preselection.
In the prospective part of this study, the distribution of sonographic features in terms of structure, echogenicity, margins and the presence of calcifications did not differ between TTW nodules and non-TTW nodules. This is in accordance with the above-mentioned finding that TTW is not a major risk factor in areas with a high prevalence of nodular goiter. When comparing malignant and benign TTW nodules in the retrospective part, sonographic risk factors such as solidity, irregular margins, and microcalcifications were more frequent in malignant nodules than in benign nodules, as was to be expected. Surprisingly, this was not true for hypoechogenicity. Most likely, this observation can be attributed to a positive preselection using hypoechogenicity as a criterion for FNAC and/or thyroid surgery.
Data on the location of a thyroid nodule and a correlation with the risk of malignancy are sparse. Studies have shown that location could be an independent risk factor in predicting the risk of thyroid cancer. One study showed a significantly higher frequency of malignancy in thyroid nodules located at the upper pole (22.2%) compared with the lower pole (4.7%) and the middle section of the thyroid (15.4%) [37]. Comparable to those results, Ramundo et al. reported that an upper pole location had a marginally significant association with malignancy using ACR-TIRADS (OR 6.92; 95% CI 1.02-46.90; p = 0.047) [38]. Duman et al. demonstrated a higher risk for malignancy in the lower and, similarly, the upper thyroid poles [39]. Another group analyzed a total of 3241 nodules, 335 (10.3%) of which were malignant. They found a nodule location in the thyroid isthmus to carry the highest risk of cancer diagnosis and lower lobe nodules the lowest risk [40]. So far, our results, showing TTW nodules located dorsally and associated with a horn to be more often benign than at other locations, are in good accordance with these reports. However, none of these reports took the configuration of nodules into consideration.
The common dorsal position of benign TTW nodules and their association with a horn may modify the common assumption that benign nodules grow parallel with the long axis of the thyroid and malignant ones grow non-parallel. Our observation of benign nodules in the thyroid isthmus having an elliptic shape, in contrast to a rounder shape at the back of the thyroid, prompts a pole model for nodule growth. The model (stylized longitudinal section through a thyroid lobe, Figure 7) assumes that the growth direction of benign nodules is predetermined by the poles of the gland. Poles in this model, besides the lower extremity, upper extremity and thyroid isthmus, are the posterior horn and the posteroinferior horn. The model assumes that the growth of benign nodules follows the configuration of the poles. In a horn, this means that a benign nodule grows into depth rather than into width. In the center of the thyroid gland, nodules follow the longitudinal oval form of the lobe. Since nodules are frequently found dorsally in the thyroid gland, i.e., near any form of a visible or non-visible horn, the pole model not only explains a TTW growth of nodules with contact to a horn but also explains the rather high frequency of TTW nodules in general.
Conclusions
TTW nodules are common in an endemic area for goiter and appear to have a low risk for malignancy, at least on a primary care level. Often located in the posterior part of the thyroid gland, they frequently have contact to a dorsally situated posterior or posteroinferior horn. The frequency of TTW nodules and their predominant benignity may be explained by a pole concept for nodule growth.
The striking difference between the low malignancy risk of TTW nodules found at a primary care level and the high risk reported in published studies on TIRADS can be explained by a positive preselection in the latter. On a primary care level, in the absence of other US features, TTW nodules should not a priori be considered as suspicious for malignancy, at least when seen at dorsocaudal locations or when associated with a horn.
Informed Consent Statement: Informed consent for clinical investigation was obtained from all subjects involved in the study.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2022-05-10T16:47:51.763Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "28a403a1dd67b0c82573affafc194358b19706a4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/9/2549/pdf?version=1651400306",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05805efaab42731db1cbee9a03e53ca55e1dd729",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233731870 | pes2o/s2orc | v3-fos-license | Closed incision negative pressure wound therapy versus standard dressings in obese women undergoing caesarean section: multicentre parallel group randomised controlled trial
Abstract

Objective To determine the effectiveness of closed incision negative pressure wound therapy (NPWT) compared with standard dressings in preventing surgical site infection (SSI) in obese women undergoing caesarean section.

Design Multicentre, pragmatic, randomised, controlled, parallel group, superiority trial.

Setting Four Australian tertiary hospitals between October 2015 and November 2019.

Participants Eligible women had a pre-pregnancy body mass index of 30 or greater and gave birth by elective or semi-urgent caesarean section.

Intervention 2035 consenting women were randomised before the caesarean procedure to closed incision NPWT (n=1017) or standard dressing (n=1018). Allocation was concealed until skin closure.

Main outcome measures The primary outcome was cumulative incidence of SSI. Secondary outcomes included depth of SSI (superficial, deep, or organ/body space), rates of wound complications (dehiscence, haematoma, seroma, bleeding, bruising), length of stay in hospital, and rates of dressing related adverse events. Women and clinicians were not masked, but the outcome assessors and statistician were blinded to treatment allocation. The pre-specified primary intention to treat analysis was based on a conservative assumption of no SSI for a minority of women (n=28) with missing outcome data. Post hoc sensitivity analyses included best case analysis and complete case analysis.

Results In the primary intention to treat analysis, SSI occurred in 75 (7.4%) women treated with closed incision NPWT and in 99 (9.7%) women with a standard dressing (risk ratio 0.76, 95% confidence interval 0.57 to 1.01; P=0.06). Post hoc sensitivity analyses to explore the effect of missing data found the same direction of effect (closed incision NPWT reducing SSI), with statistical significance. Blistering occurred in 40/996 (4.0%) women who received closed incision NPWT and in 23/983 (2.3%) who received the standard dressing (risk ratio 1.72, 1.04 to 2.85; P=0.03).

Conclusion Prophylactic closed incision NPWT for obese women after caesarean section resulted in a 24% reduction in the risk of SSI (3% reduction in absolute risk) compared with standard dressings. This difference was close to statistical significance, but it likely underestimates the effectiveness of closed incision NPWT in this population. The results of the conservative primary analysis, multivariable adjusted model, and post hoc sensitivity analysis need to be considered alongside the growing body of evidence of the benefit of closed incision NPWT and given the number of obese women undergoing caesarean section globally. The decision to use closed incision NPWT must also be weighed against the increases in skin blistering and economic considerations and should be based on shared decision making with patients.

Trial registration ANZCTR identifier 12615000286549.
Introduction
The use of caesarean section in birthing women varies widely, with Nordic countries reporting low rates and other Western countries such as Australia, Canada, the UK, and the US reporting higher rates (15-17% v 25-32%). 1 Compared with vaginal birth, caesarean section is associated with increased morbidity and mortality. 2 The World Health Organization defines people as obese if their body mass index is greater than or equal to 30.0. 1 Obesity in pregnancy is increasingly common; in Australia, more than 50% of women are overweight or obese on entering pregnancy. 1 Postoperative wound complications such as surgical site infection (SSI), dehiscence (splitting open of a surgically closed wound), and formation of haematoma and seroma are common complications of surgical procedures, 3 particularly among women with obesity, diabetes, or both. 4 SSI is an important global concern that can contribute to re-intervention and treatment, increased length of stay in hospital, delayed wound healing, and, in some cases, death. 5 6 Maternal obesity increases the woman's risk of developing SSI and other wound complications threefold, which delays recovery, increases discomfort, and reduces quality of life. 4 7 Over the past decade, the use of single use closed incision negative pressure wound therapy (NPWT) dressings in high risk surgical incisions has been increasing, with the aim of reducing the risk of SSI and other associated wound complications. 8 Closed incision NPWT is a sealed non-invasive system that applies suction (negative pressure) on the wound site that has been closed, for example, by sutures, staples, or glue. The surgical incision is covered with semiocclusive adhesive dressing connected by tubing to a suction pump. 9 The suction pump exerts negative pressure to the closed incision and removes wound fluid with recommended pressures usually between -50 mm Hg and -125 mm Hg, 10 depending on the manufacturer's instructions. 11 The mechanism of action is unclear but is purported to include reduced bacterial entry into the wound while removing blood and exudate and stimulating granulation.
In 2010-11, two simplified NPWT devices became commercially available (Prevena (KCI) and PICO (Smith & Nephew)). A Cochrane review published before we started this trial and its subsequent update found only low quality evidence in any population, with most studies sponsored by industry. 8 12 Meta-analytic results of the updated Cochrane review reported inconclusive evidence of the effectiveness of closed incision NPWT specifically for obese women undergoing caesarean section (seven studies). 8 At the time we began our research, all other trials in this population were small, single site, and industry funded. In this study, we aimed to compare the effectiveness and safety of prophylactic closed incision NPWT and standard surgical dressings on the cumulative incidence of SSI in obese women undergoing elective and semi-urgent caesarean section.
Study design and participants
We conducted a pragmatic, randomised, controlled, parallel group, superiority trial in four large public hospitals in southeast Queensland, Australia. We made no changes to the methods after the start of the trial. We identified potentially eligible women at their routine 36 week antenatal visit. Research nurses at each site screened women in antenatal clinics, antenatal wards, and birthing suites. Women were eligible if they were booked for elective (category 4) or semi-urgent (categories 2-3) caesarean section, 13 recorded a pre-pregnancy body mass index of 30 or higher, and were able to provide written informed consent. We excluded women who needed an urgent caesarean section (category 1), had an infection in hospital including during labour or immediately before caesarean section, had participated in the trial in a previous pregnancy, or were unable to speak or understand English with no interpreter present. Written informed consent was obtained from all participants. The protocol has been published. 14

Randomisation and masking

We used a web based central randomisation service to randomly assign eligible, consenting women (1:1) just before the caesarean procedure to receive either a closed incision NPWT dressing or the standard hospital dressing. To ensure that equal numbers of participants were assigned to each group, we used random block sizes of four, six, and eight, stratified by hospital. Allocation was concealed until after skin closure. The nature of the intervention meant that women, clinical staff, and research staff were not blinded to treatment after allocation. Data were reviewed by two independent, blinded outcome assessors to determine primary and secondary wound endpoints, and discrepancies were adjudicated by a third blinded assessor. Principal investigators, including the trial statistician, were also blinded to group allocation. The clinical trial coordinator trained and supervised research nurses and audited the quality of data and compliance of randomisation.
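The allocation scheme described above (1:1 allocation in randomly chosen permuted blocks of four, six, or eight, stratified by hospital) can be sketched as follows; a real trial would generate and conceal the sequence through a central service, and the site names here are hypothetical.

```python
# Illustrative sketch of stratified permuted-block randomisation with
# random block sizes of 4, 6, and 8, as described in the methods.
import random

def block_sequence(n: int, seed: int = 0) -> list[str]:
    """Allocation sequence for one stratum (one hospital)."""
    rng = random.Random(seed)
    seq: list[str] = []
    while len(seq) < n:
        size = rng.choice([4, 6, 8])
        block = ["NPWT", "standard"] * (size // 2)  # equal arms per block
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

allocations = {site: block_sequence(20, seed=i)
               for i, site in enumerate(["site_A", "site_B", "site_C", "site_D"])}
print(allocations["site_A"][:8])
```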
Procedures
All women received standard care, according to local hospital and national health department guidelines. 15 Before the skin incision, the woman's abdomen was prepared with either alcoholic or aqueous chlorhexidine or betadine. All women received a lower transverse suprapubic skin incision, and two obstetricians, usually a trainee registrar supervised by a consultant, carried out the operation. The method of skin closure (suture or staples) was based on the obstetrician's preference. The operating obstetrician (or delegate) applied the closed incision NPWT and standard dressings under sterile conditions in the operating room immediately after skin closure. Women assigned to the closed incision NPWT group received a PICO dressing (Smith & Nephew, Hull, UK), which was left intact for approximately five to seven days as recommended by the manufacturer. This particular NPWT product was used in two earlier pilot studies. 16 17 The PICO product (size 10×30 cm or 10×40 cm) has a small discrete pump powered by two AA lithium batteries with an absorbent polyurethane foam dressing that holds wound exudate away from the skin. A tube is inserted into the foam, and a continuous negative pressure of 80 mm Hg is applied after application of the dressing. The PICO dressing was reinforced around each of the four edges with four pieces of adhesive tape included in the dressing kit, as per the manufacturer's instructions. All clinical staff providing care received ongoing training and support in the correct application and use of the PICO dressings, as well as monitoring dressing changes and completing documentation daily for assessment of protocol fidelity.
The control group comprised women allocated to the standard hospital dressing. The choice of standard dressings was based on the treating obstetrician's usual choice of dressing (for example, hydrocolloid or transparent), applied according to the manufacturer's recommendations after skin closure in the operating room. Across all hospital sites, the standard dressing was left intact for five to seven days.
We collected clinical data from several sources, including electronic records, direct observation, and self-reporting by women during hospital admission and after discharge. Demographic data (pre-pregnancy body mass index, parity/gravidity, comorbidities, measurement of health status (Health Related Quality of Life Short Form Survey SF-12 v-2) were obtained on enrolment; surgical data (American Society of Anaesthesiologists category, type of anaesthetic, antibiotic administration, hair removal method, surgical approach, wound closure layers, suture materials, length of operation) were obtained on the day of the caesarean section. Research nurses visited women on postoperative day 2 and collected vital signs, SSI related data using a structured tool based on the Centres of Healthcare Related Infection Surveillance and Prevention guidelines identifying signs and symptoms of SSI (that is, redness, swelling, pain/tenderness, watery or purulent discharge), 18 pain associated with the dressing, and women's satisfaction with the NPWT dressing. After discharge from hospital, research nurses conducted telephone interviews with all women weekly (from the day of their surgery) until 28 days after discharge. They asked women a series of questions about SSI symptoms, SF-12 v2, and related resource use including health professional visits. On day 30, research nurses audited all participants' hospital electronic health records to check for documented evidence of SSI and wound complications (chart data documented wound complications, reoperations and hospital readmission due to wound complications, use of antibiotics for wound complications, type of SSI, signs and symptoms of SSI).
Outcome assessors were blinded to group allocation, the intervention and its comparator, and study hypotheses. These assessors were experienced registered nurses and performed outcome assessment of primary (SSI) and secondary wound related outcomes (SSI type, wound complications) for all women enrolled in the study. Each outcome assessor independently ascertained wound outcomes, and regular inter-rater consistency checks were undertaken throughout the trial. Where discrepancies in assessment of signs and symptoms existed, a third outcome assessor (nurse practitioner in wound care) adjudicated decisions. We defined loss to follow-up as lacking both 30 day medical record data and follow-up phone interview data over the four weekly time points on the primary outcome (SSI). Thus, a woman might be missing up to three interviews but would not be considered lost to follow-up unless her 30 day chart was also missing. Each week, nurses attempted to contact women or their contact person up to three times. Therefore, for all women who were not lost to follow-up and did not withdraw their participation after randomisation, we had data on primary outcome, SSI type (where SSI occurred), and wound complications.
All data were entered directly into secure portable tablets using a purpose built research data capture (REDCap) database and form based interface. Research nurses had access to the data at their hospital site only, and clinical staff did not have access to research data. The clinical trial coordinator audited the quality and completeness of data and adherence to the protocol, as well as visiting sites for training and monitoring.
Outcomes
The primary outcome was the cumulative incidence of SSI at 30 days after surgery, as defined by Centers for Disease Control and Prevention (CDC) guidelines. Secondary clinical outcomes included type of SSI (superficial, deep, or organ/body space), 18 any type of wound complication (dehiscence, haematoma, seroma, bleeding), type/number of individual wound complications, length of stay in hospital, and number of wound related hospital readmissions in the 30 days after surgery. Definitions and measures used for primary and secondary outcomes are included in supplementary table A.
Other secondary outcomes, including dressing related adverse events, such as rash, itchiness, and blistering, were assessed by research nurses. Serious adverse events (maternal death, admission to intensive care unit, life threatening condition) were monitored and reported to the human research ethics committee at each site. An independent Data Safety Monitoring Committee was established to assess the safety of the intervention. This committee, comprising an obstetrician, a statistician, and an infection control nurse specialist, oversaw the trial and reviewed interim analyses, undertaken twice during the life of the trial. The trial would not be stopped unless the committee deemed that significant safety problems were present during safety monitoring of the trial intervention.
Statistical analysis
We calculated the sample size on the basis of the proportion of women who developed an SSI within 30 days of caesarean section. On the basis of previous work in this area, 19 we conservatively estimated that 15% of women in the control group were likely to develop an SSI. Following discussions with infectious disease experts and obstetricians, we determined that an absolute reduction in the rate of SSI of 5 percentage points would be clinically important. The sample size needed to detect a reduction in the cumulative incidence of SSI at 30 days from 15% to 10% was 950 per group (90% power and 5% significance level; Power Analysis & Sample Size system (PASS, V.12), NCSS). We inflated the sample size by 10% to allow for loss to follow-up (n=1045 per group; total sample size 2090 women).
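PASS was used for this calculation; as a cross-check, the textbook normal-approximation formula for two proportions gives a figure in the same range, with a continuity-corrected variant landing near the reported 950 per group. Which exact method PASS applied is our assumption.

```python
# Sketch: sample size per group to detect 15% vs 10% SSI with two sided
# alpha = 0.05 and 90% power, via the normal-approximation formula and
# the Fleiss continuity-corrected variant.
from math import sqrt
from scipy.stats import norm

p1, p2, alpha, power = 0.15, 0.10, 0.05, 0.90
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
p_bar = (p1 + p2) / 2

n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
     + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2  # continuity corrected

print(round(n), round(n_cc))  # ~917 uncorrected, ~957 corrected; reported: 950
```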
We summarised baseline characteristics comprising binary data by using counts and proportions and continuous data as mean and standard deviation or median and interquartile range, depending on the distribution. We used Cohen's κ to calculate inter-rater consistency between outcome assessors.
The pre-specified primary outcome analysis was by intention to treat. For women lost to follow-up or withdrawn from the study post-randomisation who were missing the primary outcome, we conservatively assumed that they did not develop an SSI (worst case analysis); this assumption favoured the standard treatment group, which had higher levels of missing data. As per the protocol, we explored differences in prognostic variables between groups. The prognostic factors assessed were identified in the literature 20 21 and based on expert opinion (body mass index, age, diabetes, smoking, rupture of membranes, parity, caesarean section elective/semi-urgent, and length of procedure). We found differences between groups in body mass index. Thus, following the protocol, we analysed the primary outcome by using a logistic regression model including group allocation and adjusting for body mass index.
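A minimal sketch of that adjusted model is shown below, with SSI regressed on group allocation and body mass index; the data frame is synthetic and the column names are ours.

```python
# Sketch of the adjusted primary analysis: logistic regression of SSI on
# treatment group and body mass index. All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
group = rng.choice(["NPWT", "standard"], size=n)
bmi = rng.normal(37, 5, size=n).clip(30, 60)
logit_p = -2.3 + 0.05 * (bmi - 37) - 0.3 * (group == "NPWT")  # made-up effects
ssi = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"ssi": ssi, "group": group, "bmi": bmi})

model = smf.logit("ssi ~ C(group, Treatment('standard')) + bmi", data=df).fit(disp=0)
print(np.exp(model.params))  # adjusted odds ratios (group and BMI)
```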
We used a planned per protocol analysis of treatment for device related and serious adverse events. We compared binary outcomes (that is, SSI, wound complications, adverse/serious adverse events) by using a χ 2 test or Fisher's exact test and risk ratios with 95% confidence intervals. We reported continuous variables with non-normal distribution (that is, length of surgery, length of stay in hospital) by using medians and interquartile ranges and compared them by using a Mann-Whitney U test. For all inferential tests, we considered a P value below 0.05 to be statistically significant.
Post hoc analyses
To check for the robustness of conclusions to the effect of assumptions around missing primary outcome data, we repeated the intention to treat analysis as described above assuming that all women missing the primary outcome did have an SSI (favouring closed incision NPWT; that is, best case analysis) and excluding women missing the primary outcome (complete case analysis). Additionally, we did a per protocol analysis excluding women lost to follow-up, women withdrawn after randomisation, and women treated against their randomised allocation (for example, treated with closed incision NPWT when in the standard dressing arm).
Secondary outcomes (type of SSI, wound complications, length of stay in hospital, readmissions, pain, reoperations) were analysed by complete case analysis (excluding women without primary outcome) and by per protocol analysis (as with the primary outcome: excluding women lost to follow-up, women withdrawn post-randomisation, and women treated against their randomised allocation).
Patient and public involvement
Patients were not involved in defining the research question or outcome measures or in the interpretation or writing up of results of this study. This study was conceived in 2013, when the patients as co-researchers movement had not been widely adopted in Australia.
results
Between 26 October 2015 and 1 November 2019, 8558 of the 12 077 women screened were excluded, leaving 3519 women who were eligible. However, 338 (9.6%) could not be recruited as their caesarean section occurred after hours, and 1072 women were not enrolled for various reasons, including refusals, vaginal delivery, or delivery at another facility; 2109 (60%) women were enrolled. We randomly assigned 2035 women to receive NPWT (n=1017) or standard surgical wound dressings (n=1018) (fig 1). Follow-up concluded on 1 December 2019. Intention to treat analysis of primary and secondary outcomes (SSI, type of SSI, wound complications, length of stay in hospital, readmissions, pain, reoperations) included the 2035 women randomly assigned to the intervention and control groups.
Baseline demographic and obstetric characteristics were similar between groups (table 1). The average age of participants was 31 (SD 5.5; range 16-54) years. Half of all women (1012; 50%) had a pre-pregnancy body mass index of 35 or higher (range 30-72), and most (1472; 72%) had an elective caesarean section. One third of women (657; 32%) across the sample had either gestational diabetes or diabetes mellitus. At the time of caesarean section, most women (1729; 85%) had intact membranes. Most women (1942; 95%) had subcutaneous layer closure in addition to subcuticular (skin) closure; staples were rarely used (27; 1%). Interrater reliability between outcome assessors for the primary outcome SSI and the secondary outcome type of SSI yielded κ=0.764 (95% confidence interval 0.72 to 0.81), and κ=0.712 (0.66 to 0.76), respectively.
In the primary analysis, our "worst case" intention to treat analysis assumed that women whose primary outcome was missing did not develop SSI (table 2). The SSI rate across the entire sample was 8.6% (n=174). We observed a 3 percentage point reduction in the absolute risk of SSI in women treated with NPWT compared with standard dressings; this difference was not statistically significant (7.4% v 9.7%; risk ratio 0.76, 95% confidence interval 0.57 to 1.01; P=0.06) (table 2). In terms of SSI type, only 1 (<1%) woman in the NPWT group developed an organ/space SSI (table 2). The rates of all types of wound complications in the intervention and control groups were comparable. Wound dehiscence was the most common complication in both groups.
We did a planned per protocol analysis for dressing related adverse events and serious adverse events (table 3). Dressing related adverse events reported included skin blistering, itchiness, and rash. We observed a 2 percentage point increase in the absolute risk of skin blistering among women in the closed incision NPWT group, which was statistically significant (4.0% (40) v 2.3% (23); risk ratio 1.72, 1.04 to 2.85; P=0.03). Overall, 17 serious adverse events occurred, including three neonatal deaths. Rates of serious adverse events were low and did not differ between intervention and control groups (intensive care unit admission, life threatening condition: 1.2% v 0.5%; risk ratio 2.57, 0.92 to 7.17; P=0.06). Most of the admissions to intensive care related to the lack of available high dependency unit beds. One woman developed a pulmonary embolism. All serious adverse events were reported to the ethics board, and none was deemed related to the intervention.
Post hoc sensitivity analyses
Post hoc sensitivity analyses of the cumulative incidence of all types of SSI favoured closed incision NPWT therapy compared with our main crude analysis (reported above). The "best case" intention to treat analysis assumed that women with missing outcome data developed SSI (supplementary table C). The SSI incidence across the entire sample was 9.9% (n=202). We observed a 4 percentage point reduction in the absolute risk of SSI in women treated with closed incision NPWT. We also did a per protocol analysis of primary and secondary outcomes based on 1979 women (supplementary table E). In this analysis, we excluded 56 women: 29 (1.4%) did not receive the allocated treatment, 16 (<1%) withdrew consent after randomisation (this included one woman who did not receive the allocated treatment), and 12 (<1%) were lost to follow-up. The exclusion of these 56 (2.7%) women in the per protocol analysis yielded results consistent with the intention to treat analysis for the SSI incidence (7.4% (74) v 10% (98); risk ratio 0.75, 0.56 to 1.0; P=0.05).
discussion
On balance, the results of the four analytic scenarios suggest that closed incision NPWT may be effective in reducing SSI in obese women undergoing caesarean section. Our pre-specified primary analysis indicated that 9% of women in this trial developed an SSI of any type: 7% in the closed incision NPWT group and 10% in the control group. This difference was close to statistical significance. The results of the best case, complete case, per protocol sensitivity, and multivariable analyses were consistent, favouring the closed incision NPWT intervention. The primary analysis was based on a conservative assumption that women lost to follow-up did not develop an SSI; this result showed a significant relative reduction of 29% in the cumulative incidence of SSI in the closed incision NPWT group. It is therefore possible that our primary analysis underestimates the effectiveness of closed incision NPWT in this population.
comparison with other studies
Our results across all analytic scenarios were consistent, showing no significant differences in the incidence of superficial and deep SSI by trial arm. Other studies using closed incision NPWT in this population have yielded mixed results. [22][23][24] Variations in SSI rates as reported in other studies in this population are likely related to the different definitions used to classify and detect SSI, 20 smaller samples, 16 25 and use of pilot and cohort designs, 16 22 26 which carry a high risk of bias and uncertainty in the results. The results of several smaller trials in this population, some of which were non-blinded and industry funded, showed significant reductions of up to 50% in superficial SSI rates. 23 26 27 A recently updated Cochrane review of use of closed incision NPWT in primary wounds included a subgroup analysis of seven studies involving 1886 obese women undergoing caesarean section. 8 The results of that subgroup analysis indicated a 27% reduction, albeit non-significant, in superficial SSI incidence. The results of our trial, the largest in this field, suggest that closed incision NPWT may reduce superficial SSI incidence in this patient population.
Given that approximately 29.7 million births occur through caesarean section globally, 2 this result is clinically important. However, the decision to use closed incision NPWT in this population needs to be considered alongside any economic benefit. We found no statistically significant differences in organ/space SSI. Notably, this study was not powered to detect potential differences. Our results are similar to previous research in this population. 23 26 28 We also found no significant group differences in wound complications in relation to bleeding, dehiscence, haematoma, or seroma.
implications of findings
The finding of a 72% relative increase in blistering associated with closed incision NPWT may have implications for healthcare decision making. The recently updated Cochrane review highlighted very low certainty evidence around blistering when comparing closed incision NPWT and standard dressings. 8 Whether blistering (under the adhesive dressing and tape) occurred because of the dressing itself or the adhesive tape that was applied (per manufacturer's instructions) around the dressing to reinforce the dressing and help to maintain suction is not clear. Results of several previous trials in this population reported adverse skin reactions including blistering, erythema, and bruising. 16 22 28 The occurrence of a minor treatable adverse event such as blistering that we found in this trial needs to be balanced with probable reductions in the incidence of SSI. Thus, informing women about the potential risks of closed incision NPWT, and providing targeted training to clinicians in its application, may reduce the potential for blistering. Importantly, patients should be partners in the decision to use closed incision NPWT as an alternative wound management therapy.
The generalisability of our results needs to be considered relative to the inclusion criteria applied and the low rates of SSI in our study. We excluded women undergoing emergency caesarean section because they are a different population and their risk factors for SSI are not similar to those of women undergoing elective and semi-urgent caesarean section. 21 Also, emergency caesarean section as a surgical procedure is much less "standardised" than other more "routine" caesarean procedures. Given the greater heterogeneity of women undergoing emergency caesarean section and of emergency caesarean section procedures, and wanting to increase internal validity to more precisely detect the potential impact of closed incision NPWT, we had to control for potential confounding variables as much as possible. Therefore, excluding these women meant that the caesarean procedure was more consistent in its technique and associated processes such as skin preparation and antibiotic use. In terms of SSI event rates, the baseline infection rate we found was much lower than we had assumed in our sample size calculation. Our trial was consequently underpowered given the low event rate, and the results may not be generalisable to other clinical settings. The women in this trial probably received a high standard of clinical care, based on clinical practice guidelines. However, the "true" rate of SSI is often underestimated using routinely collected surveillance data. 29 With the body of evidence for the effectiveness of closed incision NPWT growing, our findings may be useful for physicians' and women's decision making regarding dressing type irrespective of the centre's SSI rates.
strengths and limitations of study
Strengths of this study include its sample size, the rigorous randomisation, and prospective data collection, including weekly follow-up by dedicated research staff. The results of a per protocol analysis were consistent with the intention to treat analysis, indicating minimal effect of missing data and loss to follow-up and the robustness of our results. Across both intervention and control groups, the time that dressings were left in situ was consistent, with both being intact for five days. Furthermore, SSI and wound complication outcomes were based on the definitions in the CDC's guideline. 18 The pragmatic nature of this trial and the characteristics of the dressings precluded blinding of participants, clinical staff, or data collectors. However, outcome assessors were blinded to group allocation and the intervention/comparator. The process of outcome ascertainment was rigorous: two blinded outcome assessors independently ascertained SSI and wound complication data, and a third outcome assessor adjudicated any discrepancies. Additionally, agreement among outcome assessors was moderate. A Data Safety Monitoring Board provided oversight in terms of safety checks. This trial is also one of the few in this area that was not funded by industry, thus reducing potential biases relative to its conduct and reporting.
However, we note several limitations. Firstly, women undergoing urgent (category 1) caesarean section were excluded, despite this population having an even higher risk of developing an SSI. 21 We excluded these women because of ethical concerns related to trying to obtain valid consent. In most instances, these women would not have enough time to consider participation in this trial. Secondly, 60% of eligible women were enrolled and, of these, 73% had an elective caesarean section, affecting generalisability. Generalisability was maximised by recruiting women undergoing both elective and semi-urgent caesarean section from four large public hospitals. The proportions of women undergoing elective versus semi-urgent caesarean section in Queensland public hospitals typically reflect the proportions recruited in this trial. Thirdly, over the four week data collection period, we were able to collect 30 day follow-up outcome data for all women except for the 16 women who withdrew their consent after randomisation. Fourthly, the potential exists for false positive or false negative outcome assessments of SSI and wound complications; however, blinding and use of two outcome assessors adjudicated by a third minimised this risk.
Fifthly, we followed up the women with telephone interviews. The decision to use telephone interviews was pragmatic; to bring women in weekly to assess SSI would have created an increased burden on participants and likely resulted in substantial loss to follow-up. We used this approach in preference to having missing data. Also, we know that data from routine surveillance are inferior in quality to those from dedicated follow-up. The survey tool we used was a previously validated patient reported tool to assess for SSI. 30 It had a series of questions about signs and symptoms of SSI such as redness, pain/tenderness at the incision, and discharge, as well as questions related to involvement of health professionals in the management of the wound and antibiotics prescribed for the wound. To ensure the quality and consistency of the data, research nurses used an interview script based on SSI symptoms and related resource use. Additionally, to minimise loss to follow-up, the research nurses contacted women on three separate occasions each week if the women did not answer. Other research has shown that self-report of wound related complications is accurate when validated tools are used. 17 30 We cannot rule out the possibility that trial participants may have incorrectly reported their wound characteristics, but we have no reason to think that this was likely to occur.
Sixthly, we did not do a time to event analysis because reporting time to SSI using weekly data provides limited information. Seventhly, despite the large sample size, the cumulative incidence of SSI was lower than expected; thus, given the wider 95% confidence intervals (less precision) for the primary outcome, a false negative result (type II error) is still possible. Furthermore, underestimation of the incidence of SSI is possible, given the way that missing data were treated in the primary analysis. Finally, we did not have access to general practice data or information as to whether women went back to different hospitals or on any use of antibiotics for wound infection. Therefore, some wound complications and infections may have been missed, leading to an underestimation of SSI incidence. Nevertheless, the women in this study were able to accurately self-report any wound related complications and treatments (for example, antibiotics).
conclusions
On the basis of our primary intention to treat analysis, assigning no SSI to missing data, prophylactic closed incision NPWT for obese women after caesarean section resulted in a 24% reduction in the relative risk of SSI compared with standard dressings (3% reduction in absolute risk). This difference, although close to statistical significance, possibly underestimates the effectiveness of closed incision NPWT in this population. On balance, the results of the conservative primary, multivariable adjusted model, and post hoc sensitivity analyses should be considered alongside the growing body of evidence of the benefits of closed incision NPWT, given the number of obese women undergoing caesarean section globally. However, the decision to use closed incision NPWT needs to be weighed against the increase in skin blistering and economic considerations, and based on shared decision making.
We thank all the women who took part and the staff members involved in the trial at the recruitment sites. The members of the Data Safety Monitoring Board were Michael Peek, Peta-Anne Zimmerman, Suhail Doi, and Evelyn Kang.
Contributors: BMG, JW, and WC conceived of the study. NC, LT, DE, and JAW contributed to the study design and assisted with implementation. BMG, WC, JW, DE, LT, JAW, NC, and KM applied for funding. LT provided methodological expertise in clinical trial design. LT led the primary statistical analysis, and AW led the secondary and post hoc analyses. EK was responsible for project management and assisted in data analysis. JW, DE, KM, VC, and EK were responsible for data quality. JW, KM, DE, VC, and EK recruited patients, collected data, and supervised research nurses. EK was responsible for data management. All authors contributed to refinement of the study protocol, critically revised the manuscript for important intellectual content, and approved the final manuscript. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. BMG is the guarantor.
Funding: The trial was funded by a competitive peer reviewed grant (APP1081026) from the Australian National Health and Medical Research Council. The funders had no role in considering the study design or in the collection, analysis, or interpretation of data, the writing of the report, or the decision to submit the article for publication. The views expressed are those of the authors and not necessarily those of the NHS, the National Institute for Health Research (NIHR), or the Department of Health and Social Care.
Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: grant funding from Australian National Health and Medical Research Council for the submitted work; JAW and APW are supported by the NIHR Applied Research Collaboration East of England; no other relationships or activities that could appear to have influenced the submitted work. Data sharing: Access to individual patient level data is not available for this study. The published protocol can be found at https://bmjopen.bmj.com/content/6/2/e010287.
The lead author affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.
Dissemination to participants and related patient and public communities: The results have been and will be presented at national and international conferences. Dissemination plans to inform the patient community of this study's results include social media, press release, and the hospital's newsletter. Study results will be disseminated to the trial participants by email or letter upon their request.
Provenance and peer review: Not commissioned; externally peer reviewed. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is noncommercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | 2021-05-05T13:14:26.890Z | 2021-05-05T00:00:00.000 | {
"year": 2021,
"sha1": "88e9620d4faa1b00edee5102af1a010ad2e8f928",
"oa_license": "CCBYNC",
"oa_url": "https://www.bmj.com/content/bmj/373/bmj.n893.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f09724e95af51a945fd630ebf040dc7376b92d6c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
135028179 | pes2o/s2orc | v3-fos-license | Evaluation of Social Attraction Measures to Establish Forster's Tern (Sterna forsteri) Nesting Colonies for the South Bay Salt Pond Restoration Project, San Francisco Bay, California—2017 Annual Report
Summary Forster’s terns ( Sterna forsteri ), historically one of the most numerous colonial-breeding waterbirds in South San Francisco Bay, California, have had recent decreases in the number of nesting colonies and overall breeding population size. The South Bay Salt Pond (SBSP) Restoration Project aims to restore 50–90 percent of former salt evaporation ponds to tidal marsh habitat in South San Francisco Bay. This restoration will remove much of the historical island nesting habitat used by Forster’s terns, American avocets ( Recurvirostra americana ), and other waterbirds. To address this issue, the SBSP Restoration Project organized the construction of new nesting islands in managed ponds that will not be restored to tidal marsh, thereby providing enduring island nesting habitat for waterbirds. In 2012, 16 new islands were constructed in Pond A16 in the Alviso complex of the Don Edwards San Francisco Bay National Wildlife Refuge, increasing the number of islands in this pond from 4 to 20. However, despite a history of nesting on the four historical islands in Pond A16 before 2012, no Forster’s terns have nested in Pond A16 since the new islands were constructed. In 2017, we used social attraction measures (decoys and electronic call systems) to attract Forster’s terns to islands within Pond A16 to re-establish nesting colonies. We maintained these systems from March through August 2017. To evaluate the effect of these social attraction measures, we also completed waterbird surveys between April and August, where we recorded the number and location of all Forster’s terns and other waterbirds using Pond A16, and monitored waterbird nests. We compared bird survey and nest monitoring data collected in 2017 to data collected in 2015 and 2016, prior to the implementation of social attraction measures, allowing for direct evaluation of social attraction efforts on Forster’s terns.
Executive Summary
Forster's terns (Sterna forsteri), historically one of the most numerous colonial-breeding waterbirds in South San Francisco Bay, California, have had recent decreases in the number of nesting colonies and overall breeding population size. The South Bay Salt Pond (SBSP) Restoration Project aims to restore 50-90 percent of former salt evaporation ponds to tidal marsh habitat in South San Francisco Bay. This restoration will remove much of the historical island nesting habitat used by Forster's terns, American avocets (Recurvirostra americana), and other waterbirds. To address this issue, the SBSP Restoration Project organized the construction of new nesting islands in managed ponds that will not be restored to tidal marsh, thereby providing enduring island nesting habitat for waterbirds. In 2012, 16 new islands were constructed in Pond A16 in the Alviso complex of the Don Edwards San Francisco Bay National Wildlife Refuge, increasing the number of islands in this pond from 4 to 20. However, despite a history of nesting on the four historical islands in Pond A16 before 2012, no Forster's terns have nested in Pond A16 since the new islands were constructed.
In 2017, we used social attraction measures (decoys and electronic call systems) to attract Forster's terns to islands within Pond A16 to re-establish nesting colonies. We maintained these systems from March through August 2017. To evaluate the effect of these social attraction measures, we also completed waterbird surveys between April and August, where we recorded the number and location of all Forster's terns and other waterbirds using Pond A16, and monitored waterbird nests. We compared bird survey and nest monitoring data collected in 2017 to data collected in 2015 and 2016, prior to the implementation of social attraction measures, allowing for direct evaluation of social attraction efforts on Forster's terns.
To increase the visibility and stakeholder involvement of this project, we engaged in multiple outreach activities, including the development of a project web site (https://apps.usgs.gov/shorebirds/) and educational video (https://www.youtube.com/watch?v=-IaZD0YlAvM&feature=youtu.be); publication of a popular article (http://www.sfestuary.org/estuary-news-caspian-push-and-pull/); and public presentations to relay findings to managers, stakeholders, and the general public.
The relative number of Forster's terns using Pond A16, after adjusting for the overall South San Francisco Bay breeding population each year, was higher during the nesting period in 2017 (after social attraction was used) than in 2015 and 2016 (before social attraction was used). Furthermore, in 2017, more Forster's terns were observed in the areas of Pond A16 where decoys and call systems were deployed during the pre-nesting and nesting periods. Although no Forster's tern nests were recorded in Pond A16 before (2015, 2016) or after (2017) implementation of social attraction measures, bird survey results indicate that Forster's terns were attracted to areas within Pond A16 where decoys and call systems were deployed, suggesting that terns may have been prospecting for future breeding sites. As social attraction efforts often benefit from multiple years of decoy and call system deployment, these first-year results suggest that continued implementation of social attraction measures could help to re-establish Forster's tern breeding colonies in Pond A16 and other areas of South San Francisco Bay.
Introduction
The South Bay Salt Pond (SBSP) Restoration Project aims to restore 50-90 percent of former salt evaporation ponds to tidal marsh habitat in South San Francisco Bay, California, including many wetlands within Santa Clara County (Goals Project, 1999). This restoration is expected to benefit the South San Francisco Bay ecosystem, including improved water quality, fish habitat, and flood protection. However, numerous waterbirds use former salt ponds for nesting and foraging habitat, and islands within these managed ponds are critically important nesting habitat (Strong and others, 2004; Hartman, Ackerman, Takekawa, and others, 2016). For this reason, the remaining 10-50 percent of former salt ponds that are not being restored to tidal marsh habitat are being enhanced to support breeding, migratory, and wintering birds.
The primary pond enhancement feature has been the construction of islands to attract and support nesting birds. As part of the SBSP Restoration Project, 30 islands were constructed at Ravenswood Pond SF2 (near the west end of the Dumbarton Bridge) in 2010 at a cost of $9 million, and 16 islands were constructed at Alviso Pond A16 in 2012 at a cost of $4 million. Previous work has established the preferred location, size, shape, slope, and other features of islands that are well suited for nesting waterbirds (Hartman, Ackerman, Takekawa, and others, 2016). However, since new island construction, there has been little use of these 50 newly constructed islands by nesting waterbirds, and no use by nesting Forster's terns (Sterna forsteri) at Pond A16, an at-risk species in San Francisco Bay that was a target species of the island construction.
Project Goals
Social attraction is a wildlife restoration technique whereby decoys of nesting birds, along with bird sound recordings, are deployed to look and sound like a real nesting colony in order to attract birds to nest at specific sites (Arnold and others, 2011; Jones and Kress, 2012). Because of their colonial nature, terns and many other seabirds are attracted to nesting sites by the presence of conspecifics, making the deployment of decoys and colony sound recordings a promising method for establishing new breeding colonies and re-establishing historical breeding colonies (Kress, 1983; Roby and others, 2002). We previously showed the effectiveness of social attraction measures (decoys and call systems) in establishing Caspian tern (Hydroprogne caspia) breeding colonies at locations in South San Francisco Bay where they had never bred previously (Hartman and others, 2017). In just 3 years, we increased the number of Caspian tern nests across two sites on the Don Edwards San Francisco Bay National Wildlife Refuge (DENWR) from zero to at least 664 nests following implementation of social attraction measures. The objective of this project was to implement similar social attraction measures targeting Forster's terns to re-establish the historically large breeding colony at Pond A16.
Forster's terns are an at-risk species in San Francisco Bay. In recent years, the breeding population of Forster's terns in South San Francisco Bay has decreased greatly, from more than 1,600 nests in 2010 to fewer than 500 nests in 2017 (J.T. Ackerman, M.P. Herzog, and C.A. Hartman, U.S. Geological Survey, unpub. data, 2018). Moreover, the number of large Forster's tern breeding colonies in South San Francisco Bay has decreased from 10-20 colonies historically to only 4 colonies in 2017. Some of these losses can be traced to loss of historical island nesting habitat due to changes in pond management associated with the SBSP Restoration Project. For example, large colonies of Forster's terns previously nested in Ponds A7 and A8 of the Alviso pond complex, but the islands in these ponds are now flooded, preventing nesting. Pond A16, also in the Alviso pond complex, historically supported about 200-300 Forster's tern nests annually (Ackerman and Herzog, 2012). However, in 2012, Pond A16 was temporarily drained to construct 16 new nesting islands, and no Forster's terns have nested in the pond since. Instead, Forster's terns have been nesting in New Chicago Marsh, which is directly adjacent to Pond A16 (fig. 1). However, New Chicago Marsh is a shallow-water marsh habitat that does not afford the same protection from terrestrial predators as islands within deep-water ponds, and waterbird nest success in New Chicago Marsh typically is low (Ackerman and others, 2014).
The colonial nature of Forster's terns, the fact that Pond A16 historically supported large numbers of breeding terns, and the large potential source population of terns in adjacent New Chicago Marsh make social attraction a viable restoration option for re-establishing Forster's tern breeding colonies in Pond A16. After nesting is established, these colony sites likely will be used for decades. Additionally, because the presence of nesting Forster's terns can attract other nesting waterbirds such as American avocets (Recurvirostra americana; Hartman, Ackerman, Takekawa, and others, 2016), re-establishing Forster's tern breeding colonies to Pond A16 also could increase use of Pond A16 by other nesting waterbirds.
In 2017, we implemented Forster's tern social attraction measures (decoys and call systems) in Pond A16 to re-establish breeding colonies. The objectives of this project were to:
1. Deploy and maintain social attraction measures (decoys and call systems) for Forster's terns on six islands within Pond A16;
2. Monitor and evaluate prospecting and nesting by Forster's terns and other waterbird species in Pond A16;
3. Conduct outreach activities to advertise the project and promote social attraction efforts as a tool for waterbird management in South San Francisco Bay, and to relay findings to managers, stakeholders, and the general public.
Outreach to Stakeholders and the General Public
We did multiple outreach activities to promote our social attraction efforts as a tool for waterbird management in South San Francisco Bay, and to relay findings to managers, stakeholders, and the general public. These activities included the development of a project web site hosted by the U.S. Geological Survey, three public presentations, one publication of a popular article in a local outlet highlighting project activities, and two visits with local elementary school classes to explain the project and enlist students in painting tern decoys.
Social Attraction Measures for Forster's Terns
In early March 2017, we deployed Forster's tern social attraction measures (decoys and call systems) on six islands at the south end of Pond A16 of the DENWR (fig. 1). We chose these six islands based on their nearness to New Chicago Marsh, a site that in recent years has had numerous nesting Forster's terns but where nest success has been low because of easy access to nest sites by terrestrial predators. Thus, by placing decoys and call systems close to the adjacent New Chicago Marsh, Forster's terns nesting there may be attracted to nest instead in Pond A16. In addition to their nearness to New Chicago Marsh, we selected islands based on their size and shape. Forster's terns prefer linear-shaped and elongated islands to more rounded islands. Five of the six islands on which we deployed decoys and call systems were elongated and highly linear (fig. 1).
We arranged 50 Forster's tern decoys spaced 1-1.5 m apart on each of the six islands (300 total decoys; figs. 2-4). Decoys (Duck Trap Woodworking, Lincolnville, Maine) were carved of wood and painted to resemble Forster's terns in an incubation posture (fig. 2). We installed a call system (Murremaid Music Boxes, South Bristol, Maine) on each of the six islands with decoys and broadcasted Forster's tern colony calls continuously through two omnidirectional outdoor speakers. Each call system was powered by two 6-volt Optima® AGM batteries and charged by a 135 W Kyocera© solar panel, enabling it to broadcast continuously throughout the breeding season. The call box and solar panel were deployed about 20 m from the decoy arrangement. The two omnidirectional speakers were deployed amongst the decoys and connected to the call box by speaker wire. We used a 30-minute recording of a Forster's tern colony recorded at Pond A16 in 2009 (Borker and others, 2014). Decoys and call systems, broadcasting on a continuous loop, remained on each island until they were retrieved in August.
In 2015, 2016, and 2017, we also deployed Caspian tern and western snowy plover (Charadrius alexandrinus nivosus) decoys and call systems on three islands on the north end of Pond A16 (fig. 1).
Bird Surveys
We did biweekly bird surveys at Pond A16 beginning in early March 2017 (shortly after decoy and call system deployment) and continuing through August 2017. Surveys occurred in the early morning or early afternoon, with the time of day alternating during consecutive surveys (that is, one survey in the morning and one survey in the afternoon each week). Each survey consisted of driving around the levee surrounding Pond A16 and stopping at five set vantage points (fig. 5) to record the number and location of all Forster's terns and other prominent waterbirds known to nest in South San Francisco Bay. Surveys were done using binoculars and a 20-60× spotting scope. We recorded bird locations by assigning each observation to 1 of 26 250×250-m grid cells within Pond A16 (fig. 5). Each survey was completed within 60 minutes to limit double-counting of individuals and avoid biasing abundance estimates for the pond.
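Assigning an observation to a grid cell reduces to integer division on projected coordinates; the sketch below uses a made-up pond origin, not the real Pond A16 corner coordinates.

def grid_cell(easting, northing, x0=590000.0, y0=4140000.0, size=250.0):
    # Return (column, row) indices of the 250 x 250 m cell containing the
    # point; x0 and y0 are placeholder origin coordinates for illustration
    return int((easting - x0) // size), int((northing - y0) // size)

print(grid_cell(590610.0, 4140380.0))  # -> (2, 1)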
Nest Monitoring
We visited 18 of the 20 islands in Pond A16 weekly during the nesting season (April-June) to record nesting activity of Forster's terns, American avocets, and other waterbirds. We did not visit two islands (Island 11 and Island 12) at the north end of the pond owing to our ongoing study of Caspian terns on these islands (Hartman and others, 2017). During each island visit, we systematically searched for nests. For each new nest found, we recorded Universal Transverse Mercator coordinates and marked the nest with a uniquely numbered aluminum tag held in place just outside the nest bowl with a garden staple and a 40-cm flag placed 2 m north of the nest. We then revisited nests weekly until failure or hatch, and documented if the nest was active or inactive (abandoned or depredated), recorded the number of eggs in the nest, and floated eggs to determine the stage of development (Ackerman and Eagles-Smith, 2010).
Statistical Analyses
Forster's Tern Use of Pond A16 Before and After Social Attraction Implementation
For all analyses, we compared 2 years of data collected prior to implementation of Forster's tern social attraction measures (2015, 2016) to 1 year of data collected after implementation of Forster's tern social attraction measures (2017). From our bird-survey data, we calculated the high count of Forster's terns and American avocets observed in Pond A16 during each week of the breeding season (April-August) of 2015, 2016, and 2017. We then used general linear models to examine the fixed effects of month (April, May, June, July, and August) and year (2015, 2016, and 2017) on the number of Forster's terns or American avocets observed in Pond A16. However, we observed a substantial decrease in the total number of Forster's tern nests in South San Francisco Bay in 2017 relative to 2015 and 2016 (see section, "Results and Discussion"). Because the number of birds using Pond A16 in any given year is dependent on the number of birds present in South San Francisco Bay, we needed to account for the overall decrease in the Forster's tern population in South San Francisco Bay. We, therefore, adjusted the number of Forster's terns observed during each survey by multiplying this value by the number of Forster's tern nests in South San Francisco Bay in 2017 divided by the total number of nests observed in South San Francisco Bay in each year. By making this adjustment, we examined the relative abundance of Forster's terns in Pond A16 after accounting for annual differences in the overall breeding population in South San Francisco Bay. As with terns, we adjusted the number of American avocets observed each year. We then evaluated the fixed effects of month, year, and a month×year interaction on the adjusted number of Forster's terns and American avocets in Pond A16. Additionally, we compared the years 2015 and 2016 to 2017 during each of the months of April, May, June, July, and August, enabling us to test for differences in Forster's tern and American avocet numbers in Pond A16 before (2015, 2016) compared to after (2017) implementation of social attraction measures. Adjusted numbers of Forster's terns and American avocets were not normally distributed, so we used a natural log data transformation to meet the assumption of normality.
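The adjustment reduces to scaling each count by the ratio of bay-wide nest totals; a small pandas sketch follows (the nest totals below are placeholders, since only the 2017 total of fewer than 500 nests appears in this report).

import pandas as pd

baywide_nests = {2015: 1200, 2016: 1000, 2017: 480}   # hypothetical totals

surveys = pd.DataFrame({
    "year": [2015, 2016, 2017],
    "tern_count": [40, 35, 30],                        # toy weekly high counts
})
surveys["adjusted"] = [
    count * baywide_nests[2017] / baywide_nests[year]
    for year, count in zip(surveys["year"], surveys["tern_count"])
]
print(surveys)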
Forster's Tern Use of Pond A16 Locations with and without Social Attraction
We did a second analysis in which we examined whether Forster's tern use of the eight 250×250-m grid cells within Pond A16 with decoys and call systems (fig. 5) varied before (2015, 2016) versus after (2017) implementation of social attraction measures. First, we assigned each grid cell in Pond A16 to one of two treatments: (1) with social attraction (grid cell included one or more islands with decoys and call systems deployed in 2017), or (2) without social attraction (grid cell did not include islands with decoys and call systems in 2017). We then tested whether the number of Forster's terns within grid cells varied by year (2015, 2016, or 2017), treatment (with social attraction or without social attraction in 2017), and a year×treatment interaction. For this analysis, we only included April-June survey data, as this represents the pre-nesting and nesting periods for Forster's terns and other waterbirds in San Francisco Bay. We again adjusted the number of Forster's terns observed during a survey by the nesting population for that year. We also included two individual covariates (pond area and island area) that we hypothesized could influence Forster's tern use of a given grid cell, but we could not control for them in our experimental design. Although each grid cell was 250×250 m, not all grid cells were solely within Pond A16 (fig. 5). Because we did not count Forster's terns outside Pond A16, grid cells with little area within Pond A16 could be expected to have fewer Forster's terns within them than grid cells completely within Pond A16. Additionally, the amount of island area may influence Forster's tern use of a particular grid cell, as cells with more island area may offer more nesting and roosting opportunities for terns. By including the pond area and island area of each grid cell as covariates, we accounted statistically for these differences. Grid cell survey data were not normally distributed and data transformation was not possible because of the large number of zeros. We, therefore, used a generalized linear mixed model with a Poisson distribution with the adjusted number of Forster's terns as the response variable and year, treatment, a year×treatment interaction, and pond area (continuous covariate) and island area (continuous covariate) as fixed effects, and the individual grid cell as a random effect. We did two additional identical analyses, with either the adjusted number of American avocets as the response variable or the adjusted number of American avocet nests as the response variable. For the analysis of American avocet nests, we omitted 2015 data because only four nests, all in one grid cell, were recorded in that year. All analyses were done using SAS/STAT software (release 9.4, SAS Institute, Cary, North Carolina). We report back-transformed least squares means and estimated standard errors using the delta method (Seber, 1982).
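The authors fit this model in SAS; a rough Python analogue is sketched below, substituting a Poisson GEE with grid cell as the cluster for the random grid-cell intercept (a deliberate simplification), with simulated data and assumed column names.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
cells = 26
df = pd.DataFrame({
    "grid_cell": np.tile(np.arange(cells), 3),
    "year": np.repeat([2015, 2016, 2017], cells),
    "treatment": np.tile(rng.integers(0, 2, cells), 3),   # social attraction cell?
    "pond_area": np.tile(rng.uniform(0.2, 1.0, cells), 3),
    "island_area": np.tile(rng.uniform(0.0, 0.2, cells), 3),
})
# Simulated adjusted tern counts with a treatment and pond-area effect
df["terns"] = rng.poisson(np.exp(0.5 + 0.8 * df["treatment"] + df["pond_area"]))

fit = smf.gee(
    "terns ~ C(year) * treatment + pond_area + island_area",
    groups="grid_cell", data=df, family=sm.families.Poisson(),
).fit()
print(fit.summary())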
Spatial Distribution of Forster's Tern Observations within Pond A16
We summed the total number of Forster's terns observed within each grid cell in Pond A16 between April and June (pre-nesting and nesting periods). We then calculated the proportion of all observations that occurred within each grid cell and plotted these proportions using ArcGIS™ 10.4.1 (Environmental Systems Research Institute, Redlands, California) to create maps of Forster's tern activity. In this way, we could ascertain whether Forster's tern distribution within Pond A16 during the pre-nesting and nesting periods was affected by the presence of social attraction measures.
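The per-cell proportions behind those maps are a simple groupby; a toy sketch with stand-in survey records:

import pandas as pd

obs = pd.DataFrame({
    "grid_cell": ["A1", "A1", "B2", "C3", "C3", "C3"],
    "terns": [5, 3, 2, 7, 4, 9],   # April-June counts, hypothetical
})
per_cell = obs.groupby("grid_cell")["terns"].sum()
print((per_cell / per_cell.sum()).round(2))  # proportion of all observations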
Outreach to Stakeholders and the General Public
We engaged in eight outreach activities for this project, including development of a web site and an outreach video, three public presentations, one publication of a popular article, and two visits with local elementary school classes.
In June 2017, the story map web site entitled "Re-establishing Waterbird Breeding Colonies in San Francisco Bay" was published, and is accessible to the general public (https://apps.usgs.gov/shorebirds/; fig. 6). In addition to a detailed description of the Forster's tern social attraction project at Pond A16, the web site serves to place those efforts in the broader context of waterbird research, conservation, and management in South San Francisco Bay and the SBSP Restoration Project. The story map includes an overview of the benefits and challenges of the SBSP Restoration Project to breeding waterbirds, recommendations for the construction of nesting islands, descriptions and results of our social attraction efforts for Forster's terns and Caspian terns, as well as population changes and management of American avocets and California gulls (Larus californicus). Included in the Forster's tern social attraction component of the web site is an educational outreach video describing the need for the project and how it was implemented (available at https://www.youtube.com/watch?v=-IaZD0YlAvM&feature=youtu.be).
We gave three presentations associated with our tern social attraction efforts in South San Francisco Bay. On March 23, 2017, we presented at the SBSP Restoration Project researchers and management team meeting at the DENWR where we gave updates on ongoing efforts to promote nesting by waterbirds in the SBSP Restoration Project area. On October 11, 2017, we presented an invited talk at the 13th Biennial State of the San Francisco Estuary Conference, entitled "Waterbird Nesting Ecology and Management in San Francisco Bay". This conference focused on management and health of the San Francisco Bay-Delta Estuary and was attended by more than 800 people. Our presentation focused on the urgency of addressing the decreasing waterbird nesting populations in San Francisco Bay, the importance of island nesting habitat to waterbirds, and how social attraction can be an effective tool for establishing nesting colonies. On November 16, 2017, we presented a talk entitled "Using Social Attraction to Establish Tern Breeding Colonies in South San Francisco Bay" at the San Francisco Bay Bird Observatory Science Talk forum. This forum was attended by San Francisco Bay Bird Observatory staff, members, and the general public. Our presentation focused on our efforts in establishing Forster's tern and Caspian tern breeding colonies on the DENWR.
In February 2017, we visited two elementary schools to talk to students about waterbird conservation in San Francisco Bay and how social attraction can be used to attract birds to nest. We then worked with students in a hands-on activity of repainting tern decoys that were used in our Caspian tern social attraction efforts in 2017. This outreach effort gave young students a unique opportunity to contribute to waterbird conservation efforts close to home.
In June 2017, the article "Caspian Push and Pull" was published in Estuary News, a publication of the San Francisco Estuary Partnership (http://www.sfestuary.org/estuary-news-caspian-push-and-pull/). This article focused on our successful efforts using social attraction to establish tern nesting colonies in South San Francisco Bay, and our engagement of local schoolchildren in repainting decoys for our social attraction efforts.
Forster's Tern Use of Pond A16 Before and After Social Attraction Implementation
We completed a total of 40 bird surveys at Pond A16 between April and August of 2017 and compared these data to surveys done over the same period in 2015 and 2016. The adjusted weekly high count of Forster's terns observed in Pond A16 varied significantly by month (F4, 56 = 2.97, P = 0.03) and by the month×year interaction (F8, 56 = 4.09, P = 0.0007). Least squares mean comparisons indicated that adjusted Forster's tern numbers were greater (F1, 56 = 7.50, P = 0.008) after the implementation of social attraction measures (2017) than before implementation (2015, 2016) in May (fig. 7), the month during which Forster's terns first begin nesting. Adjusted Forster's tern numbers also were higher in April (fig. 7), but this difference was not statistically significant (F1, 56 = 3.51, P = 0.07). In contrast, adjusted Forster's tern numbers were lower in 2017 during the post-nesting period in July (F1, 56 = 7.61, P = 0.008) and August (F1, 56 = 8.91, P = 0.004; fig. 7), likely due to the substantially smaller South San Francisco Bay nesting population in 2017 producing fewer juvenile terns, compared to 2015 and 2016.
Forster's Tern Use of Pond A16 Locations with and without Social Attraction
We completed a total of 23 bird surveys at Pond A16 between April and June of 2017 (pre-nesting and nesting periods) and compared these data to surveys done over the same period in 2015 and 2016. The adjusted number of Forster's terns observed within individual grid cells of Pond A16 during April-June varied by treatment (with social attraction in 2017 compared to without social attraction in 2017, F1, 44 = 5.88, P = 0.02) and by the amount of pond area within the grid cell (F1, 44 = 5.84, P = 0.02), but did not vary by year (F2, 44 = 2.34, P = 0.11), the amount of island area within the grid cell (F1, 44 = 0.39, P = 0.54), or the year×treatment interaction (F2, 44 = 2.61, P = 0.08). Least squares mean comparisons indicated that in 2017, Forster's tern numbers were 567 percent greater in grid cells where social attraction measures were implemented than in grid cells where they were not implemented (F1, 44 = 8.20, P = 0.006; fig. 9). However, Forster's terns also were more numerous during 2015 in grid cells where social attraction would be implemented in the future (F1, 44 = 5.09, P = 0.03), but not during 2016 (F1, 44 = 0.02, P = 0.90; table 1, fig. 9). A comparison of the spatial distribution of Forster's tern observations among years showed that birds occupied a smaller area closer to islands with social attraction measures in 2017 than they did in years prior to social attraction implementation (fig. 10).
Waterbird Nests in Pond A16
No Forster's tern nests were recorded in Pond A16 before (2015, 2016) or after (2017) implementation of Forster's tern social attraction measures. There were four American avocet nests in Pond A16 in 2015, 89 nests in 2016 (before social attraction implementation) and 79 nests in 2017 (after social attraction implementation). During 2015-17, American avocets nested on all 6 islands where Forster's tern social attraction was implemented in 2017, and on 7 of the 14 islands where Forster's tern social attraction was not implemented in 2017, as well as on some mudflats between islands (in 2016 only; fig. 12). The adjusted number of American avocet nests within individual grid cells of Pond A16 varied by treatment (F1, 23 = 10.37, P = 0.004), the year×treatment interaction (F1, 23 = 9.38, P = 0.006), and the amount of island area within the grid cell (F1, 23 = 14.68, P = 0.0009), but not by year (F1, 23 = 3.20, P = 0.09) or the amount of pond area within the grid cell (F1, 23 = 0.43, P = 0.52). Least squares mean comparisons indicated that before implementation of Forster's tern social attraction measures in 2016, the number of American avocet nests was 40 times greater within grid cells that would later receive Forster's tern decoys and call systems (2.8±1.6 nests per grid cell) than within grid cells that would not receive them (0.07±0.06 nests per grid cell, F1, 23 = 13.40, P = 0.001). Similarly, after implementation of Forster's tern social attraction measures in 2017, the number of American avocet nests was 16 times greater within grid cells with Forster's tern decoys and call systems (1.3±0.8 nests per grid cell) than within grid cells without them (0.08±0.8 nests per grid cell; F1, 23 = 7.34, P = 0.01). However, the number of American avocet nests within grid cells with Forster's tern social attraction was lower after implementation (2017: 1.3±0.8 nests per grid cell) than before implementation (2016: 2.8±1.6 nests per grid cell; F1, 23 = 10.26, P = 0.004). Thus, although more American avocets nested in grid cells at the south end of Pond A16 before (2016) and after (2017) Forster's tern decoys and call systems were deployed, fewer American avocets nested overall after implementation. Additional years of social attraction implementation would help to determine if this decrease was due to the presence of the Forster's tern decoys and call systems or natural annual variation in nesting numbers.
Conclusions
Implementation of Forster's tern (Sterna forsteri) social attraction measures in 2017 led to changes in Forster's tern use of, and distribution within, Pond A16, in the Alviso complex of the Don Edwards San Francisco Bay National Wildlife Refuge, South San Francisco Bay, California. Compared to 2015 and 2016 (before implementation), the relative abundance of Forster's terns was higher in 2017 during May when Forster's terns first begin prospecting for nest sites and initiating nests. Moreover, Forster's terns were much more prevalent in the areas of Pond A16 where social attraction measures were implemented during the pre-nesting and nesting periods in 2017 than in areas where it was not implemented. Furthermore, the overall distribution of Forster's terns within Pond A16 was more localized to the areas of the pond where social attraction was implemented (the southern and southeastern ends of the pond) in 2017, whereas Forster's terns were more dispersed throughout the pond in 2015 and 2016. Taken together, these results suggest that implementation of social attraction measures was successful in attracting prospecting Forster's terns to Pond A16, and specifically to the areas of Pond A16 where decoys and electronic call systems were present.
We observed little evidence of changes in American avocet (Recurvirostra americana) use of or distribution within Pond A16 following implementation of Forster's tern social attraction measures in 2017. However, we observed differences in the number and distribution of American avocet nests. First, the nesting population of American avocets in Pond A16 decreased slightly from 2016 to 2017. Second, there were fewer avocet nests per grid cell with Forster's tern decoys and call systems after implementation (1.3 nests per grid cell) than before implementation (2.8 nests per grid cell). However, even with this decrease observed after implementation, there were still considerably more avocet nests per grid cell with Forster's tern decoys and call systems (1.3 nests per grid cell) than without them (0.08 nests per grid cell). American avocets in San Francisco Bay often are drawn to areas where there are nesting Forster's terns, and the presence of Forster's tern decoys and calls in Pond A16 may have had a similar effect.
Although Forster's tern decoys and calls were not successful in establishing Forster's tern breeding colonies in the first year (2017) of this effort, the observed changes in Forster's tern use of Pond A16 are encouraging and suggest that continued deployment of social attraction measures can help to establish breeding colonies. Establishment of waterbird breeding colonies using social attraction does not typically occur immediately, and often benefits from multiple years of effort. For example, in coastal Maine, social attraction measures (decoys and calls) were first deployed in 1978 in an attempt to re-establish arctic tern (Sterna paradisaea), as well as common tern (S. hirundo), nesting colonies on Eastern Egg Rock, a major historical nesting site on which terns had not bred since the 1930s. In the first year of the effort, tern sightings on Eastern Egg Rock nearly doubled, but the first nests were not recorded until 1980, the third year of the effort. By 1983, 5 years after the project started, more than 1,000 terns were nesting on Eastern Egg Rock, making it the largest common tern breeding colony in Maine. Repeatedly exposing Forster's terns to social attraction efforts in subsequent years may similarly lead to the re-establishment of breeding colonies at Pond A16.
In 2017, we observed a substantial decrease in the number of large Forster's tern breeding colonies (from 10-20 in previous years to only 4 in 2017) and the overall Forster's tern breeding population (from more than 1,600 nests in 2010 to fewer than 500 in 2017) in South San Francisco Bay. Some of this decrease may be linked to the loss of historical island nesting habitat due to the conversion of managed ponds to tidal action as part of the South Bay Salt Pond (SBSP) Restoration Project. As future phases of the SBSP Restoration Project convert more managed ponds to tidal action, and the islands within these ponds are lost, Forster's tern nesting opportunities will become even more limited, potentially reducing the breeding population further. The decreasing Forster's tern breeding population, and the projected loss of additional nesting habitat, highlight the urgency of re-establishing colonies at historical nesting sites such as Pond A16 that will continue to be managed as ponds into the future. Increased Forster's tern use of Pond A16 after only 1 year of social attraction, as well as the successful establishment of Caspian tern (Hydroprogne caspia) breeding colonies in a related effort, suggest that social attraction is a viable means for re-establishing Forster's tern breeding colonies in South San Francisco Bay.
[Least squares means were generated from a generalized linear mixed model with a Poisson distribution in which year, treatment (grid cell contained islands with Forster's tern social attraction measures in 2017 or grid cell did not contain islands with Forster's tern social attraction measures in 2017), a year×treatment interaction, the amount of pond area within each grid cell (continuous covariate), and the amount of island area within each grid cell (continuous covariate) were fixed effects, and grid cell was a random effect. The estimated breeding population size was used to calculate the adjusted number of birds (see body text for details | 2018-12-14T11:01:34.467Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "af477dd73bca7fa9778595ffed27690df8cbbd83",
"oa_license": null,
"oa_url": "https://pubs.usgs.gov/of/2018/1090/ofr20181090.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7f05b6d3f88845e7149da1d38549fdbca3f18813",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
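The generalized linear mixed model described in the Forster's tern caption above can be sketched in code. The following is a minimal, hypothetical illustration rather than the authors' analysis: the input file and column names (`tern_grid_counts.csv`, `count`, `year`, `treatment`, `pond_area`, `island_area`) are invented, and the grid-cell random effect is deliberately omitted, as noted in the comments.

```python
# Minimal sketch of the caption's model; column names and input file are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("tern_grid_counts.csv")  # hypothetical per-grid-cell survey counts

# Poisson GLM with year, treatment, their interaction, and the two area covariates.
# The caption's grid-cell random effect is omitted here for brevity; a mixed Poisson
# model (e.g., statsmodels' PoissonBayesMixedGLM) would be needed to include it.
fit = smf.glm(
    "count ~ C(year) * C(treatment) + pond_area + island_area",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(fit.summary())
```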
225048877 | pes2o/s2orc | v3-fos-license | The B‐mode sonographic evaluation of the post‐caesarean uterine wall and its methodology: A study protocol
The aim of this study is to utilize the niche measurement guidelines outlined by Jordans et al. in order to establish normal values and an accurate description of caesarean section scars in a normal population. After defining the normal distribution, abnormal pregestational scar characteristics will be identified for predicting adverse pregnancy outcomes.
Introduction
There is no doubt that caesarean section (CS) is an important surgical intervention which improves both maternal and fetal obstetrical outcomes under the right circumstances, that is, when an indication for performing a CS for either fetal or maternal causes is fulfilled. 1,2 Even though immediate postoperative maternal morbidity is decreasing because of improved perioperative management, 3 a CS carries severe long-term risks including extrauterine pregnancies, subfertility, abnormally invasive placenta (AIP), repeated CS, as well as uterine rupture and hysterectomy. 4 The past two decades have shown a continuous increase in CS rates, especially in middle- and high-income countries, without a parallel improvement in maternal and fetal outcomes. Despite efforts to limit unnecessary CS, many developed countries fail to keep their rate below 30%. 5
Deeper knowledge of CS scars and their healing is required in order to appropriately advise the increasing number of pregnant women with a history of CS. Thus far, recommendations have been based on statistical data, but we believe that a tailored risk assessment for each patient can be achieved. Several groups have studied the sonographic assessment of the lower uterine segment for a safe vaginal delivery after a CS. A normal lower uterine segment after CS was associated with a sonographic anterior wall thickness of more than 3.2 mm around delivery and was thus assumed to be safe for a trial of labor. 6 Similar results were shown by Basic et al., where a scar thickness of more than 3.5 mm was regarded as a sign of good healing that can withstand vaginal delivery. 7 Naji et al. studied the scar during subsequent pregnancy and considered an anterior myometrial wall thickness of 2.5 mm as a cut-off point for normal thickness. They concluded that the scar was visible in 88.8% of cases and that the reproducibility of the scar measurement decreased with advancing gestational age. 8 Similarly, a cut-off value of less than 2.5 mm was associated with a translucent lower uterine segment. 9 The measurement of the lower uterine segment with ultrasound was shown to be highly reproducible when predetermined standardized measuring criteria are implemented. 10 The value of ultrasound in predicting uterine rupture and mode of delivery in a pregnancy following a CS remains controversial; thus current guidelines do not recommend ultrasound for this purpose. 11
The CS scar is easy to visualize in a non-pregnant uterus and its measurement is more accurate than during pregnancy. While a hypoechogenic triangular defect, the 'niche', at the site of the scar is the most commonly described change, a universally accepted definition of the normal scar appearance remains lacking. Transvaginal ultrasound represents the gold standard method for evaluating niches, which are present in 24-70% of women after a CS. 12 Large niches are associated with gynecological complaints such as chronic pelvic pain, postmenstrual spotting, and abnormal uterine bleeding. 13 Large niches in particular are expected to be associated with obstetrical complications in subsequent pregnancies, such as uterine rupture and abnormal placentation; however, this effect has not been established because a definition of a large niche does not exist. 14 Jordans et al. published guidelines in order to standardize the examination, measurement and description of the niche in non-pregnant women.
These guidelines were based on a Delphi method and should help to generate a universally understandable evaluation of CS scars for future clinical studies. Transvaginal ultrasound with either 2D or 3D imaging can be used for the measurement, and a niche is defined as an indentation of more than 2 mm at the site of the CS scar. The length, width and depth of the niche should be measured on the planes where they reach their maximum, while the residual myometrial thickness (RMT) should be measured on a sagittal plane. Additional information, such as the adjacent myometrial thickness and the distance between the niche and the external os or the vesicovaginal fold, is also considered important in the evaluation. 15 The aim of this study is to utilize the measurement guidelines outlined by Jordans et al. in order to establish normal values and an accurate description of CS scars in a normal population.
Methods
This is a prospective observational multicenter clinical study enrolling consenting women over the age of 18 with a history of only one CS, regardless of the reason for the CS or the gestational age at delivery, and with still-open family planning. Exclusion criteria are completed family planning and a history of more than one CS or other uterine surgeries. The study was approved by the Ethics Committee at the Hesse State Chamber of Physicians, reference number 2019-1138-evBO. A Voluson E10 with a 5-13 MHz GE RIC6-12-D microconvex transvaginal transducer as well as a curved-array 8 MHz GE RAB4-8-D transabdominal probe is used for the examinations (GE Healthcare GmbH, Munich, Germany). Vaginal ultrasound will be utilized to visualize the uterus and the CS scar with an empty bladder one year postoperatively. Three-dimensional volumes of each uterus are saved for an offline assessment, during which several measurements will be acquired. The uterine length (UL), cervical length (CL), niche length (L), niche depth (D), niche width (W), RMT, endometrial thickness (EM), scar to internal os distance (SO), anterior myometrial thickness superior (sAMT) and inferior (iAMT) to the scar and the posterior myometrial thickness opposite the scar (PMT), superior (sPMT) and inferior to it (iPMT), as shown in Figure 1, are documented and their reproducibility will be tested.
A survey of gynecological findings such as dysmenorrhea, postmenstrual spotting and abnormal uterine bleeding is conducted at the time of this examination.
Furthermore, the study participants will undergo serial ultrasound examinations in the 5th-8th gestational week and in the first, second and third trimesters upon starting a subsequent pregnancy. The first examination includes measurements similar to those shown in Figure 1 in addition to identifying scar pregnancies, which are believed to be precursors of AIP. 16 The rest of the follow-up examinations will be performed with a combination of transvaginal and transabdominal transducers. The lower uterine segment will be measured over a length of 3 cm starting from the most inferior identifiable part of the myometrium as shown in Figure 2. All of the transabdominal examinations are performed with a full bladder, and the bladder volume will be noted.
The myometrium is identified as a relatively hypoechogenic layer between two bright hyperechogenic lines representing the peritoneum and the chorioamniotic membrane. 10 The RMT at the scar location will be documented if the CS scar is identifiable during pregnancy such as in Figure 3.
Pregnancy outcome, mode of delivery and adverse events such as AIP, uterine rupture during labor in a subsequent pregnancy and uterine dehiscence during repeated CS are documented and will be correlated with the sonographic properties of the scar. Our consensus definition of uterine dehiscence is an unruptured translucent lower uterine segment found during a repeated CS.
Results
Data from 500 patients will allow the definition of a 95% reference interval whose upper and lower bounds will have a precision of at least 2% with a probability of 95%, pregestationally and during the first trimester. It is expected that only part of the patients will yield reliable measurements during the second and third trimesters; the precision of the respective reference interval bounds will then still be at least 2.4% with a probability of 95%. If possible, parametric approaches will be preferred for defining reference intervals. A bar chart will be generated in order to demonstrate the means and the 95% confidence intervals for the measurements collected from the 500 patients.
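As a rough illustration of the parametric approach mentioned above, the sketch below computes a mean ± 1.96 SD reference interval and normal-theory confidence intervals for its bounds; the simulated data, variable names, and normality assumption are ours, not part of the protocol.

```python
# Sketch: parametric 95% reference interval with approximate CIs on its bounds.
import numpy as np

def reference_interval(x, z=1.96):
    x = np.asarray(x, dtype=float)
    n, mean, sd = x.size, x.mean(), x.std(ddof=1)
    lower, upper = mean - z * sd, mean + z * sd
    # Normal-theory standard error of a reference-interval bound.
    se_bound = sd * np.sqrt(1.0 / n + z**2 / (2.0 * (n - 1)))
    return {"lower": (lower - z * se_bound, lower, lower + z * se_bound),
            "upper": (upper - z * se_bound, upper, upper + z * se_bound)}

# Example with simulated residual myometrial thickness values (mm), n = 500.
rng = np.random.default_rng(0)
print(reference_interval(rng.normal(7.0, 2.0, size=500)))
```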
Furthermore, inter- and intra-observer variability will be evaluated. Moreover, the intraclass correlation will demonstrate the congruence between the transvaginal and transabdominal measurements of the lower uterine segment during pregnancy.
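A minimal sketch of one common agreement statistic for this purpose, the two-way random-effects intraclass correlation ICC(2,1), computed by hand; the data layout and values are illustrative only, and the study may well use a different ICC form.

```python
# Sketch: ICC(2,1), two-way random effects, absolute agreement (Shrout & Fleiss).
import numpy as np

def icc_2_1(ratings):
    """ratings: (n targets) x (k raters or methods) array of measurements."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-targets SS
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-raters SS
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual SS
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Toy example: the same lower uterine segment measured by two methods (mm).
print(icc_2_1(np.array([[3.1, 3.3], [2.8, 2.6], [4.0, 4.2], [3.5, 3.4]])))
```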
ROC-curve analysis will be performed to evaluate the predictive value of the different measurements for adverse events such as dysmenorrhea, abnormal uterine bleeding, subfertility, subsequent AIP, uterine rupture, dehiscence and emergency CS. Furthermore, a multivariable logistic regression model will be used to assess and combine the diagnostic and predictive value of the measurements for the aforementioned outcomes.
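The planned ROC analysis could look like the sketch below; the predictors, outcome, simulated values, and use of scikit-learn are our assumptions for illustration, not the authors' specification.

```python
# Sketch: AUC for a single measurement and for a multivariable logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
rmt = rng.normal(7.0, 2.0, 300)        # hypothetical RMT values (mm)
ratio = rng.uniform(0.2, 0.9, 300)     # hypothetical niche-depth/RMT ratio
outcome = (rng.random(300) < 0.1 + 0.3 * (rmt < 5.0)).astype(int)  # toy adverse event

print("AUC, RMT alone:", roc_auc_score(outcome, -rmt))  # lower RMT = higher risk
X = np.column_stack([rmt, ratio])
clf = LogisticRegression().fit(X, outcome)
print("AUC, combined model:", roc_auc_score(outcome, clf.predict_proba(X)[:, 1]))
```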
Discussion
In order to recognize abnormal CS scars, a definition of normal scarring needs to be created. A population-wide screening of all women after a CS is essential in order to define the real prevalence of niches and determine their size and RMT. This study can be a departure point for establishing the normal distribution of these variables. Recognizing a correlation between deviations outside the normal distribution and adverse outcomes, such as pelvic pain or spotting, would be instrumental for counseling women with these complaints and eventually for planning their management. Absolute measurements of niches and RMT are not expected to be as useful for predicting outcomes as relative measurements, because women have differently sized uteri and uterine walls. This is the reason why our study protocol includes the predefined measurements in Figure 1, so that ratios can be assessed.
Other studies have similarly utilized ratios for calculating the degree of thinning at the scar level; a ratio of more than 50% was classified as severe deficiency. 17 Previous work measured CS scars longitudinally during pregnancy and showed that the measurements are reproducible, but transvaginal ultrasound was utilized throughout the second and third trimesters. 8 Transabdominal ultrasound is more practical in the later stages of pregnancy; thus it should be the preferred method for evaluating the lower uterine segment. The measurement of the lower uterine segment at term with transvaginal ultrasound has been reported to be more accurate than with a transabdominal transducer. 18 Therefore, both transabdominal and transvaginal transducers will be utilized for measuring the lower uterine segment during pregnancy. A strong correlation between these measurements would indicate equivalent accuracy, while a weak correlation might invalidate our preference for transabdominal ultrasound. The published studies with proposed cut-off values for a normal lower uterine segment, whether with transvaginal or transabdominal ultrasound, do not precisely show how the measurement was taken and leave several unanswered questions regarding standardization. 19 This study describes and shows exactly how the lower uterine wall is measured over a 3 cm segment and takes into account the urinary bladder volume. The fullness of the bladder affects the evaluation of the uterine wall; therefore, we document the bladder volume during the examination. The ultrasound examinations are performed by experienced sonographers with level 2 certification from the German Society for Ultrasound in Medicine. 20 Blinded cross-evaluations of the performed scans will be crucial for testing the interobserver variability and the validity of the method.
It has been shown that the scar changes throughout pregnancy, and scars with the largest initial dimensions show greater change and a thinner RMT in the third trimester. 21 Moreover, it is believed that the appearance of the scar in a non-pregnant uterus can affect its performance in a subsequent pregnancy and effectively predict successful vaginal delivery after CS. 22 This is the first study to demonstrate the changes of the scar longitudinally throughout pregnancy starting from a non-pregnant uterus. This is especially important after the guidelines for niche assessment were published by Jordans et al. in 2019. 15 These measurements are standardized and might confirm the importance of scar characteristics in predicting pregnancy outcome. This thinking is in line with inverting the prenatal care pyramid, and concrete findings from this study can lead to the integration of pregestational sonographic uterine assessment at the base of the pyramid for every woman with a history of CS. 23 Developing countries are faced with increasing CS rates that portend challenging consequences in the years to come. It is essential to construct evidence-based knowledge about CS scars in order to respond to the needs of our patients. Exploring the characteristics of these scars is fundamental for establishing norms upon which future research can be founded, and this study is a step in that direction. | 2020-10-24T13:05:49.615Z | 2020-10-22T00:00:00.000 | {
"year": 2020,
"sha1": "921880e1891c0901661b31165d5a71d954eea8ab",
"oa_license": "CCBY",
"oa_url": "https://obgyn.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jog.14492",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "34e5b28c6736c614b6a674d031431156497b617a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261505185 | pes2o/s2orc | v3-fos-license | CREATIVE VENTURE INITIATIVES IN MICRO-BUSINESSES
India, the seventh-largest country in the world, is empowered by its human resources, with 65% of its population below the age of 35 years. Unfortunately, even today India is considered a developing country. Creative venture initiatives in micro-business involve innovative and imaginative approaches to starting or expanding small-scale businesses. At present, only the giant business houses exploit the opportunities of globalization to a greater extent, which has caused a concentration of money in the hands of a few. In spite of the government opening various cells, India lags behind in developing strategies for promotional activities in entrepreneurship, especially to strengthen small-scale enterprises. As a result, many talented entrepreneurs find it very hard to transform themselves into innovation practitioners. Execution of talent is discouraged due to the lack of integrated inclusion. A platform should be created to enable innovative/creative small-scale enterprises to pursue internationalization. Therefore, innovative practices and their practitioners in small-scale enterprises need to be recognized at the earliest. A system must be developed by the government to assess quality against global requirements and to educate entrepreneurs in this regard. The paper highlights the awareness level of entrepreneurs about the worth of globalization and the way to pursue internationalization. These initiatives often focus on leveraging unique ideas, products, or services to stand out in competitive markets; the paper also considers the difficulties entrepreneurs may face when internationalizing their innovative entrepreneurial activities. The impact of internationalization of the innovative entrepreneurial activities of small enterprises on the Indian economy is also highlighted, and an effort has been made to provide suggestions for materializing this concept.
INTRODUCTION
Small-scale enterprises are undoubtedly a vibrant sector of the Indian economy. Because of their unique organizational characteristics, small-scale enterprises have contributed to the uplift of the Indian economy by playing a significant role in local employment creation and balanced resource utilization. Both registered and unregistered small-scale enterprises contribute to the economic development of the country in various ways. Unfortunately, most unregistered small-scale enterprises lag behind in finding a place in the international market even though their entrepreneurial activities are capable of meeting global requirements. Therefore, an effort must be made to recognize innovative entrepreneurs and provide them with both financial and non-financial assistance in materializing the concept of reaching the unreached. Lack of integrated inclusion could be one of the reasons for the non-participation of innovative entrepreneurs in the global market. Earnest action should be taken by the government to recognize and promote innovative entrepreneurial activities in the global market, after which India could well move toward becoming a developed country.
Unregistered small scale enterprises
All MSMEs engaged in manufacturing or in providing/rendering services that were not permanently registered, or had not filed Entrepreneurs Memorandum Part-II (EM-II) with the District Industries Centres of the State Directorates of Industries on or before 31-3-2007, are called unregistered MSMEs. Enterprises that were only temporarily registered on or before 31-3-2007, as well as units that were temporarily or permanently registered or filed EM-II after 31-3-2007 and up to the date of the Sample Survey, conducted as part of the Fourth All India Census of MSME, 2006-07, were treated as unregistered MSMEs.
OBJECTIVES
• To study the difficulties faced by entrepreneurs in internationalizing their innovative entrepreneurial activities.
• To study the impact of internationalization of the innovative entrepreneurial activities of small enterprises on the Indian economy.
• To foster innovation, diversify product or service offerings, enhance competitiveness and reach new markets.
• To ultimately increase profitability and sustainability.
• To provide suggestions for materializing the above objectives.
RESEARCH METHODOLOGY
Keeping in mind the objectives set for the study, the present paper employs a descriptive research design. Secondary data were used extensively for the study; the researcher drew the required data through a secondary survey method. Different news articles, books and e-resources were consulted and recorded.
Need for Focus Towards Innovative Entrepreneurial Activity of Small Scale Enterprises
India needs to create, on average, 1.5 crore jobs per annum for its young population. Such large-scale employment generation is possible only by accelerating innovative small-scale enterprises.
Exports shape the Indian economy to a great extent. As such, promotional activities need to be considered to help small entrepreneurs find a place in the global market.
Small-scale enterprises throughout the nation have also become key players in determining the GDP growth rate.
Observations
Though many small-scale enterprises are unregistered, the statistics reveal that the growth rate of employment is much higher in unregistered small enterprises than in registered small enterprises. This clearly indicates that India needs to focus on these unregistered small enterprises and undertake developmental activities to quickly remove economic imperfections.
As there could be many obstacles to promoting all the unregistered small-scale entrepreneurial units, it is advisable to seek out only innovative entrepreneurial activities and support them for the time being.
Though many small-scale enterprises are capable of engaging in innovative activities, they find it very hard to materialize their ideas due to the following problems:
a. Absence of adequate funds and inadequate or no availability of credit facilities.
b. Poor quality and inadequate raw material, which have adversely affected innovative entrepreneurs.
c. Lack of awareness about insurance as a tool for risk management. It is quite common that executing an idea requires the entrepreneur to bear some amount of risk. Unfortunately, with little or no awareness of these risk management tools, entrepreneurs still lag behind in implementing their ideas and finding a place in the international market.
d. Most of the unregistered innovative entrepreneurs belong to rural India and face the problem of transportation.
e. In many cases, even illiterate entrepreneurs come up with effective, innovative and highly productive activities that would match the global market. But their illiteracy keeps them far from the global market, as they are not capable of understanding the system developed by the government for gaining entry to the world market.
Other Challenges of Small-Scale Entrepreneurs
Most innovative small-scale entrepreneurs in rural places are not equipped with a research and development (R&D) unit, without which they cannot update their entrepreneurial activities to meet the future needs of the customer; this can result in business liquidation. Seminars, workshops, conferences and other training programmes on entrepreneurship conducted by the government are not reaching many innovative entrepreneurs.
SUGGESTIONS
• A pro-active role of the government is essential to strengthen small-scale enterprises by liberalizing export restrictions and enabling entrepreneurs to internationalize their businesses.
• Awareness of the internationalization of innovative entrepreneurial activities and its scope should reach large sections of people, who are both participants and beneficiaries.
• Innovative approaches made by small-scale enterprises with local needs and desires in mind should be recognized by the government, and assessed and modified if necessary in order to meet global requirements. Such innovative approaches should be continuously adopted and adapted.
• A proper medium must be developed to provide information about ongoing issues in the international market and global market requirements, especially to the innovative/creative small-scale entrepreneurs who are always thinking of something new. This in turn would create demand for the products or services of these small-scale entrepreneurs in the international market. Exports would then gain additional value, and India may see a sounder economic condition in the near future.
• A development-friendly environment can be created that ensures progressive entrepreneurial innovations and improves competitive abilities.
• Transportation in rural places should be developed.
CONCLUSION
Creativity and innovation are core aspects of an enterprise. Innovation helps to do existing things in extraordinary ways. Creativity and innovation therefore steer organizational activities in new directions and thereby promote the Indian economy. Innovation should also be regarded as anticipating the needs of the market, offering additional quality or services, and keeping costs under control. No doubt, the current economic conditions demand dynamism in entrepreneurial activity. Therefore, small-scale enterprises should be given an edge to participate in the global market. | 2023-09-04T15:07:58.124Z | 2023-09-02T00:00:00.000 | {
"year": 2023,
"sha1": "4e7d401754c7c042e6500fa16aad067ddbf23bff",
"oa_license": null,
"oa_url": "https://eprajournals.com/IJCM/article/11262/download",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f7a313ef4f3fd508c40d19e81ade8a44c5628b0c",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
204032738 | pes2o/s2orc | v3-fos-license | Economic Growth and Cardiorespiratory Fitness of Children and Adolescents in Urban Areas: A Panel Data Analysis of 27 Provinces in China, 1985–2014
With rapid economic development in China, the cardiorespiratory fitness (CRF) of children and adolescents is in decline. This decline appears to have slowed down, reaching stagnation in certain areas; however, it is unclear whether the change in CRF is related to economic growth and development. This study describes trends in CRF of Chinese children and adolescents, and empirically tests the relationships between China's macro-economic development and the cardiorespiratory fitness of children and adolescents over the past 30 years using provincial panel data collected from one million samples. We used per capita disposable income as the economic indicator. CRF was assessed by using running tests: 50 m × 8 for boys and girls (7–12 years), 1000 m for boys (13–22 years), and 800 m for girls (13–22 years). The results show that economic growth has a U-shaped relationship with the CRF of children and adolescents (both boys and girls). It appears that as incomes increased, the CRF of urban male and female students in China gradually decreased to its lowest point, after which it showed an upward trend. From a horizontal perspective, it can be inferred that for less-developed provinces, increases in incomes cause a decrease in CRF levels. In contrast, for highly developed provinces, as incomes increase, CRF levels increase. This study provides the first empirical evidence of the relationship between the macro-economy and the CRF of youth, based on provincial panel data. The results presented here can be used to formulate health policies targeting the cardiorespiratory fitness of children and adolescents in middle-income provinces in China. This study also provides a reference for developing countries.
Introduction
Low cardiorespiratory fitness (CRF) is a strong and independent predictor of cardiovascular disease [1], cancer [2], and diabetes [3]. A previous study reported that global physical activity is on a decline [4]. In recent decades, the US, Canada, France, Australia, Italy, South Korea, Finland, and Norway, among other countries, have witnessed a significant decrease in cardiorespiratory fitness of children and adolescents [5][6][7][8]. Sedentary lifestyles and lack of physical activity are thought to be the main causes for this decline [9,10]. Moreover, social-economic environments, technology development, urbanization, and urban development influence CRF more than other public health factors [11,12].
A cross-sectional study on cardiorespiratory fitness in children and adolescents from several countries concluded that there is a strong negative correlation between national income inequality (Gini index) and the CRF of children and adolescents [13,14]. In low- and middle-income countries, high urbanization and incomes are risk factors for chronic cardiovascular-related diseases [15]. However, the growth of national economies has altered the relationship between urbanization and individual physical activity. For example, from 1991 to 2009, the level of physical activity of individuals living in highly urbanized areas in China was lower than that of those living in less-urbanized areas. However, research has revealed that this difference diminishes with the rise in incomes in less-urbanized areas [15]; intensive construction of residential buildings [16], sports facilities, and transportation facilities [17] also contributes to this change.
Previous studies have explored the impact of socio-economic development on physical activity and cardiopulmonary function, primarily using cross-sectional data. Due to the lack of time series data, the impact of various CRF factors has not been tested. Moreover, few studies have investigated CRF from the perspective of socio-economic development. Although China's economy has grown rapidly over the past 30 years, the country is facing several challenges, especially in the social security and the medical and health sectors, which require new reforms and policies to improve the health of the population. Based on panel data from 27 provinces in China collected over the past 30 years, this study analyzed the CRF of one million children and adolescents aged 7 to 22 years. The authors aimed to describe the trends in CRF of children and adolescents, evaluate the impact of the rapid development of the social economy on CRF using provincial panel data, and explore the relationship between economic development and the CRF of children and adolescents in a developing country. To the best of our knowledge, this perspective has not been reported before. Our results provide empirical evidence of the association between the economy and children's CRF, and explore the impact of the economy on students' CRF. These findings provide a scientific basis for policy formulation and effective intervention, and also serve as a reference for developing countries.
CRF Data
CRF data were derived from the Chinese National Surveys on Students' Constitution and Health (CNSSCH) conducted in 1985, 1991, 1995, 2000, 2005, 2010, and 2014 [18][19][20][21][22][23][24]. These series of surveys were conducted by the Ministries of Education, Health, Science and Technology; the State Ethnic Affairs Commission; and the State Sports General Administration of the People's Republic of China. This study only included students of Han ethnicity, which constitutes 92% of the total Chinese population. The respondents were from 27 of the 31 provinces in China, excluding Hainan and Chongqing, both of which were founded after 1985. Qinghai and Tibet autonomous regions were also excluded because Qinghai was not included in the 1995 survey and Tibet autonomous region was not covered in nearly all surveys. Each study enrolled an equal number of students from each province. The participants were aged 7-22 years (primary to college level), and were selected from the same areas in each province from 1985 to 2014. The students were selected using the stratified cluster sampling method from certain classes, and clusters were randomly selected from each grade in the selected schools. Cardiorespiratory fitness was assessed using running tests: 50 m × 8 for girls and boys aged 7-12 years, 1000 m for boys aged 13-22 years, and 800 m for girls aged 13-22 years. Table 1 shows the sample sizes at each examination period. The research protocol was reviewed and approved by the Ethics Committee of College of Education at Zhejiang University.
Socioeconomic Data
The data on per capita disposable income (PCDI) were derived from the respective statistical yearbooks of China's provinces (1985, 1991, 1995, 2000, 2005, 2010, and 2014) and the statistical database of the Chinese economic network [25].
Variables
To overcome heteroscedasticity and enhance the stationarity of the data, the logarithm of all variables was used [26]. The dependent variables were L50×8, L1000 and L800, the logs of the students' running test times for the 50 m × 8, 1000 m, and 800 m tests, respectively. As the core independent variable, PCDI served as the economic indicator. The control variables were the urbanization rate (URBAN) and the consumer price index (CPI). Considering that urbanization correlates with physical activity [14,15], the rate of urbanization was used to determine its impact on CRF. In addition, over-nutrition is associated with decreased cardiorespiratory endurance [27]; thus, CPI was utilized to assess the indirect impact of nutrition on CRF. Table 2 lists the variables used in this study.
Estimation Approach
Panel econometric models were employed to estimate the relationship between students' CRF status and PCDI. To reduce endogeneity and increase the scientific soundness of the model, several control variables were introduced. The models used are as follows. Equation (1) is used to estimate the relationship between economic development and the students' CRF. Empirical analysis of the static model was performed using Stata 12.0 software (StataCorp, College Station, Texas, USA):
LY_it = β0 + β1 LPCDI_it + β2 LURBAN_it + β3 LCPI_it + α_i + γ_t + ε_it    (1)
where LY is the log of the CRF status of students at each age in each province and survey, LPCDI is the log of the economic level of each province and survey, LURBAN stands for the log of the urbanization rate of each province and survey, and LCPI represents the log of the consumer price index of each province and survey. LURBAN and LCPI are control variables that are expected to be related to CRF status, α_i represents the fixed effects of a province, γ_t captures the time-specific effects, and ε_it is the error term. These equations employ the fixed effect regression model, which controls for time-invariant characteristics, such as climatic conditions and unmeasured cultural factors, and any time-varying differences common to all provinces. Equation (2) is the quadratic model used to investigate the marginal effect of economic growth on the students' CRF, that is, to verify whether a non-linear relationship exists between the two:
LY_it = β0 + β1 LPCDI_it + β2 (LPCDI_it)^2 + β3 LURBAN_it + β4 LCPI_it + α_i + γ_t + ε_it    (2)
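The authors estimated these models in Stata; the sketch below reproduces the same two-way fixed-effects idea in Python with invented column names, using the dummy-variable (least-squares) estimator and heteroskedasticity-robust standard errors. A dedicated panel package (e.g., linearmodels' PanelOLS with entity and time effects) would be an equivalent alternative.

```python
# Sketch: two-way fixed effects via province and year dummies, robust SEs.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: province, year, L1000, LPCDI, LURBAN, LCPI.
df = pd.read_csv("crf_panel.csv")

# Equation (1): linear specification.
eq1 = smf.ols("L1000 ~ LPCDI + LURBAN + LCPI + C(province) + C(year)",
              data=df).fit(cov_type="HC1")

# Equation (2): add the squared income term to test for a U shape.
df["LPCDI_sq"] = df["LPCDI"] ** 2
eq2 = smf.ols("L1000 ~ LPCDI + LPCDI_sq + LURBAN + LCPI + C(province) + C(year)",
              data=df).fit(cov_type="HC1")
print(eq1.params["LPCDI"], eq2.params[["LPCDI", "LPCDI_sq"]])
```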
Secular Trends in CRF from 1985 to 2014
The secular trends of mean CRF test time for boys and girls from 1985 to 2014 are shown in Table 3. The surveys focused on 7, 13, 16, and 19-year-old students as the specific ages for first-grade, junior high school, senior high school, and university levels, respectively. The CRF of boys at almost all ages showed a continuous decreasing trend from 1985 to 2014, except for the seven-year-old boys, whose
Empirical Analysis
Fixed effects regression was applied to all models. The fixed effects model was chosen because time-invariant variables were not included in the study; it offers the advantage of not requiring assumptions about the relationship between the unobserved individual effects and the explanatory variables [26]. On the basis of the natural logarithm of each variable, the robust option was used to compute White heteroskedasticity-corrected standard errors, ensuring robust results. The subsequent regressions eliminated outliers.
Linear Analysis
Columns (linear) in Tables 4 and 5 present the estimated results for Equation (1), which show the performance of the linear specification between economic growth in China and the CRF of children and adolescents after controlling for other variables, such as the urbanization rate and CPI. In Table 4, the coefficients of LPCDI for the CRF of boys and girls (aged 7-12 years) are −0.006 and −0.001, small and not significant (p > 0.1), indicating that there is no linear relationship between PCDI and the CRF of children living in urban areas. Table 5 shows that the model yielded a significant negative coefficient of LPCDI for boys (aged 13-22 years): −0.027. Given the log-log specification, the coefficients represent the elasticity of the mean running time with respect to PCDI. A 1% increase in PCDI is accompanied by a decrease in mean running time of 0.027% for boys (aged 13-22 years). This finding implies that PCDI has a positive impact on the cardiopulmonary fitness of boys (aged 13-22 years), contrary to the descriptive results in Table 3 and to theoretical reasoning. This seems to be related to the fact that part of the decrease in CRF is attributable to a positive time trend from nationwide factors (e.g., national economic development and national policies) that could affect CRF. The elasticity of the impact of PCDI on the CRF of urban girls (aged 13-22 years) is −0.022 and not significant (p > 0.1), indicating that the relationship between PCDI and the CRF of urban girls (aged 13-22 years) is not linear.
Quadratic Model
To further explore the trends in the influence of the economy on students' CRF levels, the square term of LPCDI was added to test the marginal effect [26]. Columns (quadratic) in Tables 4 and 5 show the results of the nonlinear relationship test between economic growth and the CRF of children and adolescents, as estimates of Model (2). The results show that β1 > 0 and β2 < 0 and both are statistically significant, indicating that PCDI has an inverted U-shaped relationship with the mean running test time, that is, a U-shaped relationship with cardiopulmonary endurance levels, especially for children, boys (β1 = 0.328 (p < 0.01); β2 = −0.018 (p < 0.01)) and girls (β1 = 0.347 (p < 0.01); β2 = −0.019 (p < 0.01)). It appears that as incomes increased, the CRF of urban male and female students gradually decreased to its lowest point, after which it showed an upward trend. From a horizontal perspective, it can be inferred that for less-developed provinces, increases in incomes cause a decrease in CRF levels, thereby increasing health risks. In contrast, for highly developed provinces, as incomes increase, CRF increases.
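The turning point implied by the quadratic fit follows directly from the reported coefficients; the computation below is ours, and reading the exponentiated value as a disposable-income level in yuan assumes PCDI entered the model in levels of yuan, which the text does not state explicitly.

```latex
% Turning point of the fitted quadratic (boys aged 7--12, reported coefficients):
\[
\widehat{LPCDI}^{\ast} = -\frac{\beta_1}{2\beta_2}
  = -\frac{0.328}{2 \times (-0.018)} \approx 9.1,
\qquad e^{9.1} \approx 9{,}000.
\]
% Girls: -0.347 / (2 * (-0.019)) is also about 9.1, essentially the same turning point.
```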
Discussion
This study shows a pattern of CRF-Kuznets curve, a U-shaped relationship between economy and CRF of children and adolescents based on provincial data from 1985-2014. The analysis reveals that the negative impact of economic development on cardiorespiratory fitness of children and adolescents gradually decreased over the years. Similarly, another study found a non-linear relationship between a country's income per capita and weight-related health status (Obesity Kuznets curve) for men and women according to the country-level panel data of 130 countries from 1975 to 2010 [28]. The Obesity Kuznets curve has also been reported for state-level panel data from 1991-2010 [29]. However, few studies have explored the relationship between macroeconomics and cardiopulmonary fitness.
Our results reveal that as an economy grows, the cardiorespiratory fitness level of students aged 7-22 years gradually decreases; this decline tends to slow down and stagnate at some point. Indeed, from 1985 to 2014, the CRF of Chinese urban students showed a U-shaped trajectory. From 1985 to 2005, national survey reports on students' physical health showed that the CRF of children and adolescents continuously decreased [22,30,31], as shown in Table 3. However, in 2010, a national survey on student physical fitness showed that the continuous decline in the CRF of primary and middle school students had been contained [32]. The running time of the 50 m × 8 test was shorter by 0.05 s on average for girls and unchanged for boys aged 7-12 years, compared to 2005. The running time of boys and girls aged 13-15 years in junior middle schools decreased by 3.03 s and 3.58 s on average, respectively. The average running time of boys and girls aged 16-18 years decreased by 0.48 s and 0.46 s, respectively, compared with 2005 [32], as shown in Tables 2 and 3. A national student physical fitness survey conducted in 2014 showed that the cardiorespiratory fitness of primary and middle school students remained stable [33].
The decrease in the CRF of children and adolescents from 1985 to 2005 may be explained by the following factors. First, over-nutrition affects cardiorespiratory endurance. One study found that the CRF of overweight and obese students was significantly lower than that of students with normal weight [27]. Moreover, a higher BMI is associated with decreased cardiorespiratory endurance [27]. This indicates that being overweight or obese negatively affects students' CRF. It is conceivable that the decline in Chinese students' CRF over the past 30 years is due to an increase in overweight and obesity rates. In addition, students' sedentary lifestyles may affect their cardiopulmonary fitness [34,35]. Owing to rapid developments in science and technology, China has entered an era of automation: private cars, buses, subways, and other means of transportation have decreased walking. The diversification and modernization of modes of transport have decreased participation in physical activity among teenagers, leading to underutilization of human energy [36]. Furthermore, the exam-oriented education system and high academic demands have reduced the amount of leisure time for students, and several students tend to choose electronic games over physical activity for leisure [37]. These factors are likely to affect students' CRF.
However, the data show that from 2005 to 2014, the deterioration in the cardiorespiratory endurance levels of students was contained, especially for middle and high school students. This can be explained as follows. First, many community sports facilities, national fitness centers, fitness paths, fitness squares, and parks have been built, providing teenagers with better platforms to engage in physical activity [38]. Second, family incomes and parents' education levels have also increased. Some studies have shown that the education level, health awareness, and financial support of parents have a positive impact on children's and adolescents' participation in physical activity or physical education [39][40][41]. This is because as the social economy develops and living standards improve, people tend to pay more attention to physical and mental health, especially with respect to children.
This study examined the relationship between economic development and the CRF of children and adolescents. The analysis is based on one million records collected from surveys conducted at five-year intervals between 1985 and 2014, covering 27 provinces in China. We show that the relationship between economic development and the CRF of children and adolescents is U-shaped. For low-income provinces, increases in income cause a decrease in CRF-related health status. In contrast, for high-income provinces, as incomes increase, CRF-related health status improves. Our findings support the possibility that youths living in middle-income provinces may be at risk of poor health, which calls for health policies targeting prevention and intervention in China and other developing countries.
The findings of this study may be affected by endogeneity. Another limitation is that cardiorespiratory fitness was measured by a long-distance running test, which may be affected by several factors, such as environmental conditions and running surfaces. However, in order to collect time series data for further studies, physical fitness in China was measured over a long period. Previous studies suggest that running tests are valid and reliable. The main strength of this study is the large sample size used, based on five-year assessments of CRF for children and adolescents aged 7-22 years over a 30-year period. This study also provides evidence of the impact of social environmental factors (such as economy and policy) on cardiorespiratory fitness of children and adolescent based on panel data.
Future multi-level studies based on social-ecological model theory should be conducted. Other studies should explore the impact of the national economy on the health of rural Chinese students, in order to compare cardiorespiratory fitness between urban and rural areas. To formulate more effective and scientific intervention measures, studies should also be conducted from a socio-ecological microsystem perspective, examining the correlation and collaboration among schools, communities, and families.
Conclusions
The main finding of this study is that a U-shaped relationship exists between China's economic development and the cardiorespiratory fitness of children and adolescents. The negative effect of economic development on cardiorespiratory fitness of urban students is seen to decrease and eventually reach stagnation, especially in highly developed provinces. This analysis provides evidence that students living in middle-income provinces may be at risk of developing health problems, calling for effective health policies targeting prevention and intervention.
Author Contributions: All authors read and approved the final manuscript. X.G. designed the study, collected the data, participated in statistical analysis and drafted the manuscript. K.Y. and X.W. designed the study, collected the data and participated in statistical analysis. Y.J. played a role in data collection.
Funding: This study is funded by the National Social Science Foundation key program project (funding number 17ATY009). Its content is solely the responsibility of the authors and does not necessarily represent the official views of the funders.
Conflicts of Interest:
The authors declare no conflict of interest. | 2019-10-10T09:31:58.394Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "3f92f1ef352f27f8fcb41cde94f10baaffe700ec",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/ijerph/ijerph-16-03772/article_deploy/ijerph-16-03772.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75065da5b1050e3b17bda4e142ba90ae9adbfa78",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
8767043 | pes2o/s2orc | v3-fos-license | Identification of Epigenetically Altered Genes in Sporadic Amyotrophic Lateral Sclerosis
Amyotrophic lateral sclerosis (ALS) is a terminal disease involving the progressive degeneration of motor neurons within the motor cortex, brainstem and spinal cord. Most cases are sporadic (sALS) with unknown causes suggesting that the etiology of sALS may not be limited to the genotype of patients, but may be influenced by exposure to environmental factors. Alterations in epigenetic modifications are likely to play a role in disease onset and progression in ALS, as aberrant epigenetic patterns may be acquired throughout life. The aim of this study was to identify epigenetic marks associated with sALS. We hypothesize that epigenetic modifications may alter the expression of pathogenesis-related genes leading to the onset and progression of sALS. Using ELISA assays, we observed alterations in global methylation (5 mC) and hydroxymethylation (5 HmC) in postmortem sALS spinal cord but not in whole blood. Loci-specific differentially methylated and expressed genes in sALS spinal cord were identified by genome-wide 5mC and expression profiling using high-throughput microarrays. Concordant direction, hyper- or hypo-5mC with parallel changes in gene expression (under- or over-expression), was observed in 112 genes highly associated with biological functions related to immune and inflammation response. Furthermore, literature-based analysis identified potential associations among the epigenes. Integration of methylomics and transcriptomics data successfully revealed methylation changes in sALS spinal cord. This study represents an initial identification of epigenetic regulatory mechanisms in sALS which may improve our understanding of sALS pathogenesis for the identification of biomarkers and new therapeutic targets.
Introduction
Amyotrophic lateral sclerosis (ALS) is a progressive and terminal neurodegenerative disease characterized by the selective degeneration of motor neurons within the motor cortex, brainstem and spinal cord [1]. In the United States, approximately 14 cases of ALS are diagnosed each day and 30,000 people are living with the disease. The average time from disease onset to death is 3 years and no treatment that substantially improves the clinical course of the disease is currently available [1].
Proposed pathogenic mechanisms of ALS include oxidative stress, glutamate excitotoxicity, impaired axonal transport, neurotrophic deprivation, neuroinflammation, apoptosis, altered protein turnover, and mitochondrial dysfunction [1,2]. Moreover, influences from astrocytes and microglia in the motor neuron microenvironment contribute to pathogenesis [3]. In the last 20 years, a search for genetic factors has identified several genes associated with familial ALS (fALS) and a few with sporadic ALS (sALS) [4][5][6]. Because fALS only accounts for 5-10% of all cases of ALS, the causes leading to the vast majority of ALS (sALS) are poorly understood [1].
Environmental exposure to toxins, excessive physical activity, dietary factors, and changes in immunity increase the risk of developing sALS [7]. These factors may drive epigenetic changes, which are well suited to explain disease onset and progression in sALS, as they may be acquired throughout life. Epigenetic modifications, including covalent modifications of DNA and histones as well as RNA editing, dynamically regulate gene expression without altering the genetic code [8,9]. These modifications are important in chromosome integrity, cellular differentiation, development, and aging [8,10]. Two such modifications, 5-methylcytosine (5mC) and 5-hydroxymethylcytosine (5HmC), are associated with repression or activation of gene expression, respectively, in response to environmental and developmental factors linked to age-related diseases [11]. 5mC at CpG (cytosine nucleotide separated by a phosphate from a guanine nucleotide) sites is a reversible mechanism facilitated by DNA (cytosine-5)-methyltransferases (DNMTs). Conversely, the Fe(II)- and α-ketoglutarate (α-KG)-dependent ten-eleven translocation (TET) family of proteins catalyzes oxidation and decarboxylation reactions of 5mC leading to 5-hydroxymethylcytosine (5HmC), 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC) [12,13]. 5HmC may be an intermediate for passive (during DNA replication) and active demethylation and/or serve as a docking site for proteins with high affinity for 5HmC, thereby dissociating interactions between the transcriptional repression machinery and 5mC [14].
In addition to identifying alterations in global 5HmC associated with sALS, this study represents one of the first methylation assessments of sALS integrating methylome and transcriptome profiles of postmortem frozen human spinal cord samples. We identified differentially methylated sALS spinal cord genes exhibiting concordant mRNA expression, overrepresented in functional categories implicated in sALS. These data support a role for epigenetic regulation in sALS and may provide a better understanding of disease pathogenesis and facilitate the discovery of new therapeutic targets.
Results
A workflow of our data analysis is provided in Fig. 1.
Global 5mC is Increased in sALS Spinal Cord
Chestnut et al. recently reported an increase in DNMTs and 5mC immunoreactivity in ALS brain and spinal cord, suggesting that a global increase in 5mC is associated with the pathogenesis of ALS [15]. We assessed global 5mC of genomic DNA extracted from postmortem human spinal cord samples (sALS, n = 11; matching controls, n = 11; Tables 1, S1) using a colorimetric ELISA approach. We observed a modest but significant 1.4-fold increase in global 5mC in sALS (3.58±0.18) compared to controls (2.56±0.18) (p = 0.0006, Fig. 2), confirming previous observations [15].
SciMiner identified 4,128 genes from ALS-related publications (as of 7/23/2012), which were compared to our 112 concordant epigenes. Fourteen genes were identified in two or more ALS-related publications with frequencies that were significantly different from those in over 20 million abstracts in PubMed (p < 0.05). Fifty-one genes demonstrated ≥2-fold altered expression, including the chitinase 3-like protein 2 (CHI3L2), the triggering receptor expressed on myeloid cells-2 (TREM2), cathepsin Z (CTSZ), the lumican precursor protein (LUM), H19, and TRAIL/TNFSF10 (Tables 2, 3). Thus, bioinformatics evaluation of the concordant epigenes identified by integrating methylomic and transcriptomic analyses detected both novel and previously known ALS-related genes.
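A minimal sketch of the kind of frequency comparison SciMiner performs, framed here as a Fisher's exact test of a gene's mention rate in ALS-related abstracts against the PubMed background; the counts are invented, and SciMiner's actual statistic may differ.

```python
# Sketch: is a gene mentioned more often in ALS abstracts than in PubMed overall?
from scipy.stats import fisher_exact

als_mentions, als_abstracts = 12, 4_000        # hypothetical ALS-literature counts
bg_mentions, bg_abstracts = 1_500, 20_000_000  # hypothetical PubMed background

table = [[als_mentions, als_abstracts - als_mentions],
         [bg_mentions, bg_abstracts - bg_mentions]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(odds_ratio, p_value)
```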
Experimental Confirmation using Real-time Polymerase Chain Reaction (RT-PCR)
Expression of 14 concordant epigenes selected either from the ALS-related literature or from the expression array data was confirmed by RT-PCR (Figs. 6 and S2, Table 4). Of the genes previously related to ALS in the literature, NRN1, FMO1, and the lumican precursor protein (LUM) were under-expressed, while the lysosomal protease CTSZ was over-expressed in sALS. No significant difference between sALS and control subjects was observed for the FES-upstream region (FURIN). Novel sALS-associated epigenes such as STAT5A, TREM2, the high-affinity IgE receptor (FCER1G), CHI3L2, and the proton-coupled divalent metal ion transporter solute carrier family 11 member 1A (SLC11A1) were over-expressed in sALS. SLC11A1 presented the highest increase, at 12.7-fold. Down-regulation was validated for gap junction β-2 (GJB2)/Connexin-26 as well as for imprinted genes such as H19, NNAT, and the paternally expressed 10 (PEG10). In summary, the RT-PCR expression data indicate high concordance with the microarray expression data, validating our results.
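Relative RT-PCR expression of the kind reported here is commonly computed with the 2^-ΔΔCt (Livak) method; the sketch below assumes that method and a hypothetical reference gene, since the exact normalization scheme is not given in this excerpt.

```python
# Sketch: relative expression by the 2^-ddCt (Livak) method; reference gene assumed.
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    delta_case = ct_target_case - ct_ref_case   # dCt in the sALS sample
    delta_ctrl = ct_target_ctrl - ct_ref_ctrl   # dCt in the control sample
    return 2.0 ** (-(delta_case - delta_ctrl))  # 2^-ddCt

# Toy Ct values illustrating a strongly over-expressed target.
print(fold_change(22.0, 18.0, 25.7, 18.0))  # ~13-fold over-expression
```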
Global 5HmC Increases in sALS Spinal Cord
5HmC, an alternate epigenetic modification of DNA, is increased in brain compared to other human tissues, and alterations in global 5HmC are associated with age-related neurodegenerative disorders, suggesting an important role of 5HmC in neuronal function [31][32][33][34]. We measured global 5HmC for the sALS and control spinal cord samples previously analyzed for global 5mC. We observed an approximately 3.0-fold increase in global 5HmC in sALS (0.31±0.02) compared to controls (0.11±0.03) (p < 0.0001) (Fig. 7). This is the first report of increased global 5HmC in sALS spinal cord.
Global 5mC and 5HmC in sALS Whole Blood
High correlation of epigenetic marks between spinal cord and blood could be useful for diagnostic and therapeutic applications in ALS. We investigated whether global 5mC and 5HmC would be altered in sALS whole blood similarly to spinal cord. Whole blood genomic DNA from a different cohort (Tables 5, S1) was subjected to global 5mC and 5HmC analysis by ELISA. The levels of percent global 5mC and 5HmC in whole blood were 10-fold lower compared to spinal cord, in agreement with recent reports [33,35]. Contrary to spinal cord, no difference in percent global 5mC (controls, 0.408±0.300; ALS, 0.405±0.027; p = 0.941) or global 5HmC (controls, 0.033±0.005; ALS, 0.032±0.004; p = 0.401) (Fig. 8) was observed in whole blood.
Discussion
Although several genes have been implicated in the pathogenesis of ALS, the causes leading to most cases remain unknown. Environmental factors may be associated with the onset and development of sALS by altering epigenetic regulation [7,8]. The aim of this study was to identify sALS-associated epigenetic marks resulting in aberrant gene expression. Abnormal 5mC patterns of repetitive elements such as Alu and LINE1, as well as altered function of methylation regulators such as the DNMTs, lead to changes in global 5mC or 5HmC associated with neurodegeneration [15,36]. We demonstrate increased global methylation in sALS spinal cord, perhaps due to an increase in DNMT activity [15]. Furthermore, we report for the first time an increase in global 5HmC in sALS spinal cord. Increased 5mC and 5HmC may be due to 5mC providing more substrate for the TET proteins [10], although TETs are not differentially expressed in sALS spinal cord according to our microarray data (data not shown). TET should decrease the amount of 5mC only if 5mC is not increasing at a faster rate than the oxidation reaction. Although normal aging leads to increased global 5HmC in mouse hippocampal DNA independently of increased levels of oxidative stress [31], in ALS, increased oxidative DNA damage and free radicals may contribute to global 5HmC dysregulation. The base excision repair (BER) pathway, which is responsible for repairing oxidative DNA damage and is one of the active demethylation pathways, is deficient in ALS [14,37,38].
Methylomics and transcriptomics analyses identified potential biologically relevant epigenes in postmortem sALS spinal cord. These epigenes were enriched with biological functions related to inflammation and the immune responses, previously linked to ALS [39][40][41]. Our data suggest that alterations in gene expression of immune-related genes in sALS may be regulated by methylation.
Immune-related concordant epigenes, including TREM2, the chemokine (C-C motif) receptor 1/RANTES receptor (CCR1), SLC11A1, the transmembrane receptor C-type lectin domain family 4 member A isoform 1 (CLEC4A), and the IgE receptor (FCER1G), were found to be over-expressed in sALS. Our findings suggest an infiltration of myeloid cells, mast cells, or natural killer cells to the damaged area and/or activation of resident microglia [42][43][44]. Supporting our observations, neuro-inflammation was recently associated with systemic macrophage activation independent of T-cell activation and with the recruitment of activated inflammatory monocytes to the spinal cord in ALS [45,46]. Although immunosuppressive and anti-inflammatory therapies have been shown to delay disease onset in ALS animal models, clinical trials have not revealed a major effect on disease progression or survival [46][47][48][49][50]. This suggests that continuous activation of microglia leading to neuronal damage surpasses the capacity of the nervous system to respond to immunosuppressive and anti-inflammatory therapies at later stages of ALS, implicating a need for biomarkers identifying early immune-related changes in sALS.
Co-citation network and literature mining approaches identified connections among novel and previously implicated ALS-related epigenes and pathways [51,52]. The transcription factors STAT5A and C/EBPB are highly connected in our co-citation network, and their interplay promotes activation of various genes including interleukin-6 (IL-6) [53]. Moreover, recent reports implicate C/EBPB and STAT5A in ALS pathogenesis and neurodegeneration. For instance, expression of C/EBPB in ALS microglia from spinal cord suggests an important role of C/EBPB in the regulation of neurotoxic genes in the ALS neuronal microenvironment [45,54]. Furthermore, changes in STAT5A expression may reflect an altered inflammatory response contributing to the pathogenesis of ALS. Over-expression of STAT5A reduces neuronal degeneration associated with spinal muscular atrophy, a neurodegenerative disease with a pathogenesis similar to ALS, and provides oligodendrocyte protection, which in turn favors preservation of the neuronal environment [55][56][57]. Whether the positive regulation of STAT5A in sALS is an anti-apoptotic response compensating for the degeneration of the nervous system, or whether its over-expression is responsible, in part, for the pathogenesis of the disease remains to be determined. Interestingly, we observed potential transcription factor binding sites (TFBSs) for STAT5A and C/EBPB in 40% and 48% of the promoters of our identified DEGs, respectively; the binding sites for STAT5A and C/EBPB are 1.2 (p = 4.1E-12) and 1.3 (p = 3.8E-13) times more frequent in the DEGs than in vertebrate promoters, respectively. Our observations suggest that epigenetic mechanisms, in part, drive the expression of central regulators of downstream targets in sALS.
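The reported TFBS over-representation can be framed as a simple binomial test comparing the hit rate in DEG promoters with the background rate in vertebrate promoters; the counts below are invented (chosen only to mirror the reported 40% and 1.2× figures), and this is just one of several reasonable enrichment tests.

```python
# Sketch: binomial test for TFBS over-representation in DEG promoters.
from scipy.stats import binomtest

deg_hits, deg_promoters = 45, 112  # hypothetical promoters carrying a STAT5A site
background_rate = 0.33             # hypothetical rate in vertebrate promoters

result = binomtest(deg_hits, deg_promoters, background_rate, alternative="greater")
print(result.pvalue)
```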
Our study identified ALS-dependent methylation dysregulation of several genes previously implicated in neuronal development, differentiation, and proliferation, including Slit-Robo Rho [61]. Interestingly, most of these genes were identified by our literature-based association network of concordant epigenes and were connected to C/EBPB and STAT5A. Analysis of the promoter regions of these genes indicates a high incidence of potential TFBSs for these two transcription factors, suggesting a potential role of STAT5A and C/EBPB in the regulation of neuronal genes in the pathogenesis of sALS. Our observations suggest that sALS-related alterations in methylation may lead to aberrant expression of genes required for neuronal homeostasis. Nevertheless, more studies need to be done to address the role of methylation, STAT5A, and C/EBPB in the regulation of neuronal genes. Another sALS-related epigene, CTSZ, warrants further investigation since its aberrant expression is associated with neurodegeneration by promoting neurotoxin elimination in the damaged cellular environment [62]. Expression of CTSZ, as well as two other members of the cathepsin family, cathepsins B and D, increases in human and rodent ALS spinal cord and in mutant SOD1 (G86R, G93A) mouse skeletal muscle, suggesting that they play an important role in ALS [63,64].
Except for optineurin (OPTN) [65], which was identified as a hypo-methylated DMG without demonstrating changes in gene expression, loci known to be mutated in fALS were not present in our concordant epigenes [4]. This agrees with recent studies indicating that the promoter regions of SOD1, VEGF, and metallothioneins I and II are not differentially methylated in sALS [66,67]. When compared with the ALS Online Genetics Database (ALSoD)-reported genes and other ALS-dependent methylation/gene-expression profiling studies [39,41,[68][69][70][71][72], we observed a modest overlap of four concordant epigenes: Purkinje cell protein 4 (PCP4), catenin alpha-like 1 (CTNNAL1), fibroblast growth factor 18 (FGF18), and flavin containing monooxygenase 1 (FMO1). Furthermore, five of our concordant genes presented the opposite direction of expression when compared to known ALS-dependent differentially expressed genes. Our data indicate that epigenetic mechanisms are potential regulators of these key genes in ALS.
Based on the large number of genes identified in the methylation (3,574 genes) and expression (1,182 genes) arrays, relatively few sALS-associated genes presented a concordant direction between methylation and gene expression. The observation that only a small subset of genes is regulated by CpG modification in such a way that hyper-methylation promotes gene silencing and hypo-methylation promotes gene expression has been previously documented [73]. 5mC within promoter regions is associated with repression of gene expression by interfering with transcription factor binding or by providing a binding site for transcriptional repressors [10]. Interestingly, over half (55%) of the 251 common DMGs/DEGs presented the same direction of 5mC and expression. In some cases, 5mC positively regulates gene transcription by promoting transcription factor binding at promoter regions [74] or, more commonly, by modifying intragenic CpG sites, thereby facilitating transcription efficiency and histone conformation and regulating levels of sense and antisense mRNA [75]. Furthermore, 5hmC, a modification highly enriched in brain, correlates with increased gene expression [10]. The HM27K does not differentiate between 5mC and 5hmC; therefore, some of the common epigenes presenting the same direction of methylation and expression may be regulated by 5hmC.
Table 4. Confirmation of microarray differential expression in spinal cord using RT-PCR.
Although the high incidence of same-direction sALS concordant epigenes parallels the high levels of global 5hmC in spinal cord, locus-specific 5hmC modifications associated with sALS remain to be identified. The expression of non-common DEGs (non-DMGs) could be determined, in part, by 5mC-dependent regulation of transcription factors. In addition to STAT5A and C/EBPB, we identified several transcription factors among the concordant genes, such as transcription factor 7 (TCF7), RUNX3, IKAROS family zinc finger 1 (IKZF1), MSX2, and hypoxia inducible factor 3, alpha subunit (HIF3A). Furthermore, regulation of gene expression is a dynamic and complex mechanism, and the interplay of several epigenetic pathways has been reported to modulate adult neurogenesis [76]. Therefore, alterations to epigenetic networks in conjunction with genetic predisposition may result in the development of sALS.
The prospect of identifying sALS epigenetic biomarkers in blood is exciting, as it provides a minimally invasive alternative for sALS diagnostic and prognostic assessments. Although we did not detect significant global 5mC and 5hmC differences in blood, and inflammation-related epigene biomarkers may reflect systemic inflammatory changes rather than neuronal changes, further investigation of individual loci may provide potential epigenetic biomarkers for sALS.
There were several limitations to our study. First, a relatively small number of samples were analyzed, and locus-specific 5hmC analysis is still needed. Nevertheless, this is an initial step towards identifying epigenetic mechanisms altering key pathways leading to sALS, which will be validated in larger cohorts. Second, sALS postmortem tissue reflects the terminal disease stage rather than the pathogenic mechanisms leading to disease onset and progression. As sALS-affected motor neurons deteriorate at the terminal stage, and heterogeneous tissue consisting of both gray and white matter was analyzed, our results may represent epigenetic regulation of the neuronal microenvironment, including microglia activation and the scarce neurons surviving the degenerative process [54,72]. This may explain, in part, the discrepancy in the direction of expression of the common and concordant genes reported here relative to other sALS genome-wide expression profiles, as well as the heavy representation of inflammation-related genes among our concordant epigenes, which are not differentially expressed specifically in sALS motor neurons or ventral horns [68]. Finally, more studies are needed to concretely identify whether or not the genes identified in this study are involved in ALS pathogenesis.
Figure 6. RT-PCR confirmation of concordant epigenes in spinal cord. RNA was extracted from the postmortem human spinal cord tissue that was used for the methylation analysis from sALS (n = 8-11) subjects and controls (n = 8-11) and subjected to RT-PCR. Results were normalized to glyceraldehyde-3-phosphate dehydrogenase (GAPDH), except for STAT5A, which was normalized to TATA box binding protein (TBP), and are presented as fold changes calculated by the 2^−ΔΔCT method. Similar results were obtained when using different housekeeping genes (Fig. S2); *p<0.05, **p<0.01, ***p<0.001 compared to the control group (Ctrl). Data (mean ± SEM) are plotted using box-and-whiskers vertical bars spanning minimum to maximum values. doi:10.1371/journal.pone.0052672.g006
Advances in identifying epigenetic regulators in disease states have led to new therapeutic approaches. Interestingly, demethylating agents have been extensively studied to reverse aberrant epigenetic changes associated with cancer [77] and, more recently, histone deacetylase inhibitors have been shown to have neuroprotective properties in animal models of neurodegenerative diseases [78]. These observations suggest that reversible epigenetic modifications carry the potential for therapeutic treatment in sALS. We contend that environmental life exposures result in a failure to maintain epigenetic homeostasis in the nervous system microenvironment, leading to global and locus-specific aberrant regulation of gene expression in sALS-affected tissue. Ascertaining the role of epigenetic regulation may provide a better understanding of the pathogenesis of sALS and new therapeutic targets.
Subjects and Tissue
Frozen human spinal cord samples from 12 Caucasian sALS subjects and 11 age- and gender-matched, neurologically normal controls were obtained from the National Institute of Child Health and Human Development (NICHD) Brain and Tissue Bank for Developmental Disorders at the University of Maryland, Baltimore, MD (Table 1). Whole blood was collected in EDTA tubes from a different cohort of 11 Caucasian sALS subjects and 11 age- (±6 years) and gender-matched, neurologically normal control subjects at the University of Michigan ALS Consortium (Table 5). Table S1 summarizes the samples used for each assay.
Ethics Statement
The participants donating blood reviewed and signed a written informed consent under a protocol reviewed and approved by the University of Michigan Institutional Review Board.
Table 5. Characteristics of sALS and control subjects used for global 5mC and 5hmC in whole blood.
Nucleotide Extraction and DNA Bisulfite Conversion
Genomic DNA was extracted from 50 mg of frozen postmortem spinal cord tissue (mostly grey matter including the ventral horn, but with some white matter included) using the Promega Maxwell 16 Tissue DNA Purification kit and a Maxwell instrument (Promega Co, Madison, WI). Genomic DNA (1 µg) was bisulfite converted with an EZ DNA Methylation Kit (Zymo Research, Irvine, CA) according to the manufacturer's instructions. Total RNA was extracted from the same tissue used for methylation profiling (Table S1) using the RNeasy kit and treated with RNase-free DNase I according to the manufacturer's instructions (Qiagen, Valencia, CA). Automated genomic DNA extraction from whole blood was performed at the Michigan Institute for Clinical & Health Research (MICHR) at the University of Michigan using an Autogen FlexStar (Autogen, Holliston, MA) and Qiagen Flexigene reagents. Nucleotide concentration was assessed using a Nanodrop 2000 (Thermo Scientific), and RNA integrity was determined by microfluidic electrophoresis with a 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA).
Global 5mC and 5hmC
Differences in genomic DNA global methylation (global 5mC) and hydroxymethylation (global 5hmC) between sALS and control spinal cord or whole blood were determined in duplicate using the colorimetric enzyme-linked immunosorbent assay (ELISA) MethylFlash Methylated or Hydroxymethylated DNA Quantification Kits according to the manufacturer's directions (Epigentek Group Inc., Farmingdale, NY). The absorbance at 450 nm was captured in a Fluoroskan Ascent microplate reader (Labsystems, Vienna, VA). The percentage of global 5mC and 5hmC is expressed as mean ± standard error of the mean (SEM). The two-tailed t-test was used for statistical comparison. Graphs and statistical analyses were produced with GraphPad Prism 5.
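As a purely illustrative sketch (not the authors' code), the group comparison described above can be reproduced with SciPy's equal-variance two-sample t-test; the percentage values below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical percent global 5mC values from the duplicate ELISA measurements
ctrl_5mc = np.array([1.1, 0.9, 1.3, 1.0, 1.2])  # control spinal cord
sals_5mc = np.array([1.6, 1.8, 1.5, 1.9, 1.7])  # sALS spinal cord

t_stat, p_value = stats.ttest_ind(sals_5mc, ctrl_5mc)  # equal-variance t-test
print(f"Ctrl: {ctrl_5mc.mean():.2f} ± {stats.sem(ctrl_5mc):.2f} (mean ± SEM)")
print(f"sALS: {sals_5mc.mean():.2f} ± {stats.sem(sals_5mc):.2f} (mean ± SEM)")
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4g}")
```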
Methylation Profiling and Identification of DMGs
For high-throughput methylation profiling, 200 ng of bisulfite-converted DNA was whole-genome amplified (WGA), enzymatically fragmented, purified, and hybridized to the Infinium HumanMethylation27 DNA BeadChip array (HM27K; Illumina, Inc., San Diego, CA) following the manufacturer's instructions at the University of Michigan Sequencing Core. The HM27K quantitatively determines DNA methylation for 27,578 CpG sites spanning 14,495 genes. DMGs were identified using Illumina's GenomeStudio software [79]. Single-base resolution corresponding to the DNA methylation level of each locus was reported; the methylation level is given by a beta (β) value describing the degree of methylation, ranging from 0 (no methylation) to 1 (complete methylation). Any methylation value with a detection P-value >0.05 was excluded. Differential methylation of the selected CpG target regions on autosomal chromosomes between the sALS and control groups was tested using the Illumina Custom algorithm with multiple-testing corrections applied. A DiffScore (GenomeStudio's statistical significance score for differential methylation) of >13 for hyper-methylation or <−13 for hypo-methylation, equivalent to a False Discovery Rate (FDR) <5%, was used.
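The filtering logic described above can be illustrated with the following hypothetical sketch; the column names and values are assumptions, not GenomeStudio's actual export format:

```python
import pandas as pd

df = pd.DataFrame({
    "cpg_id":     ["cg001", "cg002", "cg003", "cg004"],
    "beta":       [0.82, 0.10, 0.45, 0.30],   # methylation level in [0, 1]
    "detect_p":   [0.001, 0.20, 0.01, 0.04],  # detection p-value per site
    "diff_score": [22.0, -5.0, -18.0, 4.0],   # sALS vs control DiffScore
})

df = df[df["detect_p"] <= 0.05]                        # drop unreliable sites
hyper = df[df["diff_score"] > 13]["cpg_id"].tolist()   # DiffScore > 13
hypo = df[df["diff_score"] < -13]["cpg_id"].tolist()   # DiffScore < -13
print("hyper-methylated:", hyper, "| hypo-methylated:", hypo)
```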
Genome-wide Expression Profiling
Microarray gene expression analysis was performed as previously described in our published protocols [80]. Briefly, RNA samples with an RNA integrity number (RIN) >6.4 were used for further microarray and real-time PCR analysis. Total RNA (75 ng) was amplified and biotin-labeled using the Ovation Biotin-RNA Amplification and Labeling System (NuGEN Technologies, Inc., San Carlos, CA) according to the manufacturer's instructions. Amplification and hybridization were performed at the University of Michigan DNA Sequencing Core Affymetrix and Microarray Core Group (Ann Arbor, MI) using the Affymetrix GeneChip Human Genome U133 Plus 2.0 Array, which measures over 47,000 transcripts representing over 20,000 human genes.
Figure 8. Changes in global 5hmC and 5mC are not detected in ALS whole blood. Genomic DNA extracted from control or sALS human whole blood was analyzed for 5mC (Ctrl n = 11, sALS n = 11; p = 0.94) and 5hmC (Ctrl n = 11, sALS n = 11; p = 0.40). Percent (%) 5mC and 5hmC is presented as mean ± SEM using a two-sample equal variance t-test and graphed using box-and-whiskers vertical bars spanning minimum to maximum values. doi:10.1371/journal.pone.0052672.g008
Affymetrix CEL files were analyzed using a local version of the GenePattern genomic analysis platform from the Broad Institute [81]. Samples were Robust Multi-array Average (RMA) normalized using the BrainArray Custom CDF HGU133Plus2_Hs_ENTREZG version 14 [82]. Microarray quality was assessed as previously published [80]. Briefly, probe-level modeling (PLM) and quality metrics provided by the BioConductor affy package were used to identify low-quality arrays [83][84][85]. Outlier arrays, skewed away from the other arrays and identified by Principal Component Analysis (PCA), were excluded from further analyses. The Intensity-Based Moderated T-statistic (IBMT) [86] was employed to identify DEGs between sALS and control samples with a 10% FDR cut-off.
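The 10% FDR cut-off corresponds to a Benjamini-Hochberg-type correction of the moderated-statistic p-values; a minimal, purely illustrative sketch (IBMT itself is not reimplemented here, and the p-values are made up):

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.0001, 0.004, 0.03, 0.2, 0.6]  # hypothetical per-gene p-values
reject, qvals, _, _ = multipletests(pvals, alpha=0.10, method="fdr_bh")
for p, q, r in zip(pvals, qvals, reject):
    print(f"p={p:<7} q={q:.4f} significant at 10% FDR: {r}")
```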
Identification of Differentially Expressed Genes (DEGs)
Concordant epigenes are those exhibiting significant differential methylation (hyper- or hypo-methylation) and a corresponding change in gene expression (under- or over-expression, respectively) between sALS and control samples. Differentially methylated genes (DMGs) and differentially expressed genes (DEGs) were subjected to bioinformatics analyses.
Bioinformatics Analysis of Concordant Epigenes
Functional enrichment analysis. The Database for Annotation, Visualization and Integrated Discovery (DAVID; http://david.abcc.ncifcrf.gov/) [87,88] was used to identify enriched molecular biological functions and ALS-relevant pathways among the concordant epigenes. A Benjamini-Hochberg corrected P-value of 0.05 was used as the cut-off for statistically significant over-representation.
Literature mining analysis. A literature mining approach was used to obtain a comprehensive list of potential ALS-associated targets (genes/proteins). SciMiner, a web-based literature mining tool [89,90], retrieves and processes documents and identifies potential ALS-associated targets from the ALS-related literature, defined by a PubMed-style query of "Amyotrophic Lateral Sclerosis". The concordant epigenes were compared against the literature-derived ALS-associated targets that were observed in at least 2 papers and whose frequency (in terms of the number of papers) was significantly different from the background. Fisher's exact test (p-value <0.05) was used to determine whether each gene's frequency was significantly different from that in the complete collection of abstracts of over 20 million papers in PubMed. The concordant genes identified by the high-throughput arrays were compared with these literature-derived ALS-related genes to identify which known disease-relevant genes are most highly methylated/expressed and, consequently, likely involved in disease pathogenesis. The resulting genes were designated literature-derived ALS-associated epigenes.
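The per-gene frequency comparison described above amounts to a 2×2 Fisher's exact test; a hypothetical sketch with made-up counts:

```python
from scipy.stats import fisher_exact

# Mentions of one gene in ALS abstracts vs the full PubMed background
als_with_gene, als_total = 15, 2000
bg_with_gene, bg_total = 300, 20_000_000

table = [[als_with_gene, als_total - als_with_gene],
         [bg_with_gene, bg_total - bg_with_gene]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"OR = {odds_ratio:.1f}, p = {p_value:.2e}, enriched: {p_value < 0.05}")
```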
Transcriptional network analysis. To elucidate the functional relationships among the concordant epigenes, we generated transcriptional networks using Genomatix Pathway Systems (GePS; Genomatix Software GmbH, Munich, Germany) with a sentence-level co-citation filter. Two genes co-cited at the sentence level in the literature are linked, resulting in a co-citation network. Additionally, transcriptional regulatory information on predicted transcription factor binding sites (TFBSs) in the promoter regions of genes can be incorporated. The network allows the visualization of concordant epigenes, their potential associations, and their transcriptional regulation of each other. It therefore helps in the identification of key genes that are highly connected to other genes and that potentially play important roles in the pathogenesis of sALS. Potential TFBSs of two highly connected genes in the network, STAT5A and C/EBPB, were searched for among the promoters of the concordant epigenes using MatInspector (Genomatix) [91].
Pyrosequencing
To validate the HM27K arrays, we assessed gene-specific methylation of three selected cytokine genes, based on the fact that the immune response is associated with the pathogenesis of sALS [92], and two transcription factors. Amplicons of the promoter regions of the genes coding for the CKLF-like MARVEL transmembrane domain-containing proteins 2 and 3 (CMTM2 and CMTM3), the chemokine (C-X-C motif) ligand 12 (CXCL12), signal transducer and activator of transcription 5A (STAT5A), and CCAAT/enhancer binding protein beta (C/EBPB) were generated in 30 µl reactions using the PyroMark kit (Qiagen, Valencia, CA) with 4.8 pmol of the forward non-biotinylated primer, 2.4 pmol of the reverse biotinylated primer (Table S3), and 25 ng of bisulfite-converted genomic DNA, as previously described [93]. PCR conditions were: 95°C for 15 min; 50 cycles of [95°C for 30 s, 40-50°C for 30 s, 72°C for 20 s]; and 72°C for 10 min. Ten µl of the amplicon was purified with Streptavidin Sepharose (Amersham Bioscience, Uppsala, Sweden), denatured with 0.2 M NaOH, and pyrosequenced using 0.5 µM of sequencing primer in a PSQ96 HS System (Qiagen) following the manufacturer's protocol. Percent methylation of the analyzed region containing the identified Illumina methylation site, or of individual sites, is presented as mean ± SEM, with a two-sample equal variance t-test performed using GraphPad Prism 5.
RT-PCR
cDNA was generated by reverse transcription from the total RNA isolated for microarray analysis using an iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA). RT-PCR was performed in triplicate using sequence-specific primers (Table S4) with SYBR Green PCR reagents (Bio-Rad, Hercules, CA). The PCR amplification profile was as follows: 95°C for 5 min; 40 cycles of [denaturation at 95°C for 30 s, annealing at 55-60°C for 60 s, and extension at 72°C for 30 s]; and a final phase of 72°C for 5 min. The fluorescence threshold CT value, representing mRNA expression in sALS samples, was calculated by the iCycler iQ system software. mRNA levels were normalized to an endogenous reference (ΔCT) and then expressed relative to the control group (ΔΔCT); a numerical sketch of this calculation is given after the supporting information below. Levels of PCR products are presented as mean ± SEM, and a two-sample equal variance t-test was performed using GraphPad Prism 5 to confirm that mRNA levels were significantly different between sALS and control.
Figure S1 Validation of HM27K arrays using pyrosequencing. Amplicons of the promoter regions identified by HM27K of the cytokines CXCL12, CMTM3, and CMTM2, and of C/EBPB and STAT5A, were generated using bisulfite-converted genomic DNA from human postmortem spinal cord and used as templates for pyrosequencing (sALS n = 11; Ctrl n = 11). Results are presented as the mean percent methylation of all CpG sites within the area tested on each gene (A) or as percent methylation of individual sites for STAT5A; site 2 was identified with the HM27K (B). Results are presented as mean ± SEM, and a two-sample equal variance t-test was used. *p<0.05, **p<0.01 compared to the control group (Ctrl).
(EPS)
Figure S2 RT-PCR confirmation of concordant epigenes in spinal cord. Total RNA was extracted from postmortem human spinal cord tissue used for the methylation analysis from sALS subjects (n = 8-11) and controls (n = 8-11) and subjected to RT-PCR. Results were normalized to housekeeping genes [TATA-box Binding Protein (TBP) for CTSZ, FCER1G, TREM2, NRN1 and NNAT; ribosomal 18S subunit for CHI3L2, H19, PEG10, and LUM], and are presented as fold-changes calculated by the 2^−ΔΔCT method. *p<0.05, **p<0.01, ***p<0.001, ****p<0.0001.
(EPS)
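Returning to the RT-PCR analysis above, the 2^−ΔΔCT fold-change calculation can be illustrated numerically as follows (all Ct values are hypothetical):

```python
# GAPDH is the endogenous reference; the control group is the calibrator.
ct_target_sals, ct_ref_sals = 24.0, 18.0   # mean Ct values, sALS group
ct_target_ctrl, ct_ref_ctrl = 26.5, 18.2   # mean Ct values, control group

d_ct_sals = ct_target_sals - ct_ref_sals   # delta-Ct: normalize to reference
d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
dd_ct = d_ct_sals - d_ct_ctrl              # delta-delta-Ct: relative to control
fold_change = 2.0 ** (-dd_ct)              # 2^-ddCt
print(f"ddCt = {dd_ct:.2f}; fold change vs control = {fold_change:.2f}")
```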
Table S1 Samples used for methylation and expression analyses. (DOC) | 2016-05-31T19:58:12.500Z | 2012-12-26T00:00:00.000 | {
"year": 2012,
"sha1": "b9a13b4d0e5e5dc4b0a1af99832d5b942277e3af",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0052672",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9a13b4d0e5e5dc4b0a1af99832d5b942277e3af",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
204800867 | pes2o/s2orc | v3-fos-license | Tree-gated Deep Mixture-of-Experts For Pose-robust Face Alignment
Face alignment consists of aligning a shape model on a face image. It is an active domain in computer vision as it is a preprocessing for a number of face analysis and synthesis applications. Current state-of-the-art methods already perform well on"easy"datasets, with moderate head pose variations, but may not be robust for"in-the-wild"data with poses up to 90{\deg}. In order to increase robustness to an ensemble of factors of variations (e.g. head pose or occlusions), a given layer (e.g. a regressor or an upstream CNN layer) can be replaced by a Mixture of Experts (MoE) layer that uses an ensemble of experts instead of a single one. The weights of this mixture can be learned as gating functions to jointly learn the experts and the corresponding weights. In this paper, we propose to use tree-structured gates which allows a hierarchical weighting of the experts (Tree-MoE). We investigate the use of Tree-MoE layers in different contexts in the frame of face alignment with cascaded regression, firstly for emphasizing relevant, more specialized feature extractors depending of a high-level semantic information such as head pose (Pose-Tree-MoE), and secondly as an overall more robust regression layer. We perform extensive experiments on several challenging face alignment datasets, demonstrating that our approach outperforms the state-of-the-art methods.
Introduction
Face alignment refers to the process of localizing a number of landmarks on a face image (lip and eye corners, pupils, nose tip). This is an important research field in computer vision, as it is an essential preprocessing step for applications such as face recognition, tracking, and expression analysis, as well as face synthesis.
Recently, regression-based methods appeared among the most successful ones, achieving high accuracies on images with reasonable variations in head pose, illumination as well as occasional facial expressions or partial occlusions. These methods address the face alignment problem by directly learning a mapping between shape-indexed face texture and the landmark positions. This regression is often performed in a cascaded manner: starting from an initial guess, a first stage predicts a displacement for every landmark coordinate. This prediction then gets progressively refined through successive regressors that are trained to improve the predictions of the previous stages. In the frame of such a coarse-to-fine strategy, the first cascade stages usually capture large deformations, while the last stages focus on more subtle variations. The work of Xiong et al. [24] proposes to use successive linear regressions based on SIFT descriptors extracted around each landmark at the current position of the model. Similarly, Ren et al. [15] propose to learn shape-indexed pixel intensity difference features with random forests in order to speed up the feature extraction step.
Deep learning techniques have also been investigated to tackle the face alignment problem. For example, in [19] each cascade stage is modeled using deep convolutional networks (CNNs) in order to jointly learn the representation and regression steps in an end-to-end fashion instead of training the regressor upon handcrafted features. Mnemonic Descent Method [21] improves the feature extraction process by sharing the CNN layers among all cascade stages, and the landmark trajectories through the successive cascade stages are modeled using recurrent neural networks. This results in memory footprint reduction as well as more efficient representation learning and a more optimized cascaded alignment process.
Using deep architectures within the cascaded regression framework allows to achieve high-end alignment precision. However, despite the success of these methods, they are very sensitive to extreme conditions such as large head poses, facial expressions, illumination changes, or occlusions induced by objects in front of the face (e.g. glasses, hands, hair). The appearance of the face can then change drastically and corrupt the input features fed to the displacement regressor in the first stages of the cascade, causing errors that are hard to overcome later on. In order to address these limitations, ensemble methods can be combined with the use of deep learning techniques: using a committee of expert layers instead of just a single, strong layer improves the diversity of possible responses, which leads to an increased overall robustness. Furthermore, an adaptive combination of the outputs of these expert layers can be learned jointly by the use of gates. Using a tree gate structure allows to learn a hierarchical clustering of the expert layers, which leads to a more efficient selection, and therefore specialization, of the experts. These gates can be based on either an extracted representation (e.g. when applied for regression) or high-level semantic information (such as head pose, when applied for learning representation layers). The representation layer, regression layer and the corresponding tree gates can be trained jointly in an end-to-end manner using neural trees. The proposed architecture is summarized in Figure 1. The contributions of this paper are thus threefold: • We show that integrating ensemble methods within a deep architecture is beneficial to the overall robustness of face alignment methods. In particular, we use a committee of expert layers instead of a single, strong layer for the representation and regression layers, respectively. This allows each expert network to be geared towards a specific alignment case.
• We propose an adaptive weighting strategy based on gates that learn to combine the contribution of each expert network. Weights can be estimated from high-level semantic information such as head pose, or directly from an embedding extracted by the representation layer. In addition, tree-structured gates allow to learn a set of hierarchically clustered expert layers. These gates can be learned as neural trees [11] to allow joint end-to-end training of expert networks and gates.
• We propose a real-time face alignment system that outperforms state-of-the-art approaches on several databases and is particularly robust to large poses and occlusions.
Related work
The main current challenge of the face alignment problem is robustness to strong variations, such as large pose or occlusion. A first approach may consist in better conditioning the training in order to increase generalization capability. Landmark localization can be improved by simultaneously learning other related tasks such as attribute detection [27][7]. Indeed, learning to detect attributes such as the presence of glasses on the face can improve the model's robustness to occlusions. However, these techniques require additional data, either unlabelled or annotated with auxiliary attributes.
Other models aim to address robustness to only one source of variation. The architecture and the training procedure are then designed to specifically address this variation.
For example, some models are specialized in occlusion handling and explicitly predict the occluded part of the face [2][6][25][26]. However, learning such models generally requires data with occlusion labels, which is a major requirement. Head pose variations can also drastically change the appearance of the face, and some landmarks can be self-occluded. Taking this information into account can improve the face alignment process. Zhu et al. [31] proposed to integrate head pose information to condition and adapt Convolutional Neural Networks. Kumar et al. [13] proposed a Bayesian formulation in which head pose estimation conditions the heatmaps extracted to estimate landmark localization, with the constraint that the estimated face shape must follow a dendritic structure for effective information sharing. These approaches have demonstrated that head pose information significantly improves localization performance. However, in this case, head pose is treated as a post-hoc multiplicative variable. Conversely, we argue that taking this information into account upstream in the network leads to better representation learning, bringing more robustness, as head pose can dramatically affect face appearance.
Other approaches explicitly seek to build models that are specific to each variation type, and combine them together. Wu et al. [22] propose a global framework, trained in a cascaded manner, which simultaneously performs facial landmark localization, occlusion detection and head pose estimation with separate modules. Relationships between these allow the modules to benefit from each other. However, each module requires additional annotations in the trainset. By contrast, the proposed method only relies on facial landmarks and head pose information that can be inferred from the landmark positions.
In addition, the architecture of our model combines advantages of ensemble methods with those of deep learning techniques. Such models have already been explored in the deep learning literature: a first approach for combining deep learning and ensemble methods is to craft a differentiable ensemble architecture in order to allow end-to-end parameter learning. Kontschieder et al. [11] designed a differentiable deep neural forest, by unifying the divide-and-conquer principle of decision trees (allowing data to be clustered hierarchically) with the representation learning of deep convolutional networks. Each predictor is a binary tree, whose split nodes contain routing functions defining the probability of reaching one of the sub-trees. The probabilistic routing functions are differentiable, allowing these neural trees to be integrated into a fully-differentiable system. The forest corresponds to a set of trees, whose final output is the simple average of the output of each tree. Since then, other models have sought to generalize neural forests [20][8] by integrating upstream convolutional or multi-layer perceptron layers within the routing functions to learn more complex input partitionings, leading to higher performance. Dapogny et al. [4] used neural forests for face alignment, with promising results. However, their model uses handcrafted SIFT descriptors. In addition, to adapt neural forests for regression purposes, the authors use neural trees whose leaves are fixed and correspond to a sampling of the remaining displacements from the training data (with small variations). Their model then seeks to optimally combine fixed leaves. This training procedure can theoretically lead to rigid responses, reducing the expressiveness of the model.
A second approach may be to parallelize a set of small networks instead of a single large network. The idea of using a set of regressors within an end-to-end system was firstly introduced by Jacobs et al. [9] and more recently taken up by Eigen et al. [5]. It shows promising results and is well adapted to our problem. Eigen et al. design a Mixture-of-Experts (MoE) layer, consisting in jointly learning a set of expert subnetworks with gates, allowing to learn to combine a number of experts depending on the input. In the same vein, Shazeer et al. [18] introduce sparsity in MoE in order to save computation and to increase representation capacity.
Framework overview
In this section, we introduce our Pose-Tree-MoE model for face alignment: first, in Section 3.1 we describe the head pose estimation module. We then detail in Section 3.2 the representation layer, which selects relevant experts based on this head pose estimate and extracts features from patches cropped around the current feature point localizations. Then, Section 3.3 shows how we predict landmark coordinate displacements from this pose-specific representation. In Section 3.4 we detail the architecture of the gates used to weight the contribution of each expert network for the representation and regression layers, respectively. The whole architecture is integrated into a cascaded regression framework (Section 3.5). This section also provides implementation and architectural details to ensure reproducibility of the results.
Head pose estimation
Following [16], we use a truncated pre-trained ResNet-50 network to extract head pose from the raw face image I. We note φ(I) the embedding (2048 units) of the last fully-connected layer. A naive approach would consist in using a single deep network Ω(φ(I)) that directly predicts the 3 head pose angles, as was done in [16]. However, as pointed out in [14], sharing all the representation layers may or may not be optimal, depending on the tasks at stake. In our case, we obtained better performance by regressing the yaw γ, pitch β and roll α values with separate networks:

Ω(I) = Ω_γ(φ(I)) || Ω_β(φ(I)) || Ω_α(φ(I)) ∈ [−π, π]^3

with || the concatenation operator. Lastly, we also fine-tune the ResNet-50 backbone to improve the head pose regression accuracy.
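As an illustration only, the following PyTorch sketch mirrors the module described above: a truncated ResNet-50 backbone feeding three separate one-angle heads whose outputs are concatenated. The head sizes and the omission of pretrained weights are assumptions for brevity, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50()  # load pretrained weights in practice
        self.phi = nn.Sequential(*list(backbone.children())[:-1])  # 2048-d
        # one small head per angle (yaw, pitch, roll)
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 1))
             for _ in range(3)]
        )

    def forward(self, img):                      # img: (B, 3, H, W)
        feat = self.phi(img).flatten(1)          # (B, 2048) embedding
        angles = [head(feat) for head in self.heads]
        return torch.cat(angles, dim=1)          # (B, 3): yaw, pitch, roll

pose = PoseNet()(torch.randn(2, 3, 224, 224))    # -> shape (2, 3)
```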
Representation layer
Let us note s ∈ R^2P an initial shape guess (with P landmarks). As is classical in the frame of cascaded regression for face alignment, we extract shape-indexed patches, i.e. patches centered at each landmark's current localization. If we denote by h the output of the feature extraction layer, we thus have:

h = f(I, s)

As is ubiquitous among recent face alignment methods, we use CNNs to model the function f. Furthermore, in order to limit the number of parameters, we share the convolution kernels between the different patches, as was done in [21]. Instead of using one large CNN to extract features around each landmark's current location estimate [1], we use a committee of smaller expert CNNs. The idea is that head poses or partial occlusions can dramatically alter the appearance around specific landmarks (e.g. cheek landmarks in case of large poses) and that we can learn expert CNNs to extract more relevant shape-indexed features in such cases. Specifically, we define L expert CNNs {f_l(I, s)}_{l=1...L}, the output h_l of each expert CNN being:

h_l = f_l(I, s)

with h_l ∈ R^Pn and n the number of features per landmark at the output of the last CNN layer. We denote by H = [h_1, ..., h_L] the responses of all the expert CNNs. More precisely, H is a Pn × L matrix whose columns are the responses of the L expert CNNs. Now that we have extracted expert features relative to the neighborhood of each landmark, we want to aggregate these features: to do so, a naive approach would be to simply sum them, i.e. sum over all the columns of H. However, a better solution is to use a high-level semantic variable, such as head pose, to select the most relevant experts based on the output of a gate function g : Ω ∈ [−π, π]^3 → g(Ω) ∈ [0, 1]^L such that Σ_{l=1}^{L} g_l(Ω) = 1. In such a case, the output h of the representation layer can be written as the sum of the contributions of the L experts, each weighted by the gate value for that expert:

h = Σ_{l=1}^{L} g_l(Ω) h_l
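A minimal PyTorch sketch of this gated combination follows. The expert CNNs are stubbed as linear maps over pre-extracted patch features, and the sizes are illustrative; in the paper the experts are shared-kernel CNNs and the gate is the pose-driven tree gate of Section 3.4:

```python
import torch
import torch.nn as nn

L, feat_dim = 8, 68 * 8            # L experts; P*n features (illustrative)
experts = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in range(L)])

def representation(patch_feats, g):
    """patch_feats: (B, feat_dim) stand-in for shape-indexed patch input;
    g: (B, L) pose-gate weights whose rows sum to 1."""
    H = torch.stack([f(patch_feats) for f in experts], dim=-1)  # (B, feat_dim, L)
    return (H * g.unsqueeze(1)).sum(dim=-1)                     # h = sum_l g_l h_l

g = torch.softmax(torch.randn(4, L), dim=-1)       # e.g. tree gate on head pose
h = representation(torch.randn(4, feat_dim), g)    # -> (4, feat_dim)
```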
Regression layer
Given the extracted feature vector h ∈ R^Pn, we now aim at regressing a displacement δs ∈ R^2P between s and the ground truth landmark localization s*. Such a displacement is usually estimated using a single, large deep fully-connected network, i.e. δs = r(h).
Once again, instead of designing one such large network, we can use a committee of L smaller expert networks {δs_l = r_l(h)}_{l=1...L}. Let us note δS the R^{2P×L} matrix containing all the predictions of these expert regressors:

δS = [δs_1, ..., δs_L]

The columns of δS contain the displacements predicted by each expert regressor. More specifically, the displacement predicted by the expert indexed by l corresponds to the output of a two-layer fully-connected network with ReLU activation:

δs_l = w_1^l · ReLU(w_0^l · h + b_0^l) + b_1^l

with Θ_r^l = {w_0^l, b_0^l, w_1^l, b_1^l} the set of parameters of the l-th expert. Let us suppose we now have access to the output of a gating network g : h ∈ R^Pn → g(h) ∈ [0, 1]^L based on the extracted features h. In such a case, the output of the regression layer can be written as:

δs = Σ_{l=1}^{L} g_l(h) δs_l
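The following sketch assembles this gated regression layer under the same caveats: expert and hidden sizes are illustrative, and a plain softmax gate stands in for the tree gate used in the full model (sketched after Section 3.4):

```python
import torch
import torch.nn as nn

class MoERegressor(nn.Module):
    """L two-layer ReLU experts over h, mixed by a gate computed from h."""
    def __init__(self, feat_dim, num_landmarks, num_experts=64, hidden=128):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                           nn.Linear(hidden, 2 * num_landmarks))
             for _ in range(num_experts)]
        )
        # softmax stand-in; the paper uses a tree gate here
        self.gate = nn.Sequential(nn.Linear(feat_dim, num_experts),
                                  nn.Softmax(dim=-1))

    def forward(self, h):                                    # h: (B, feat_dim)
        g = self.gate(h)                                     # (B, L)
        dS = torch.stack([r(h) for r in self.experts], dim=-1)  # (B, 2P, L)
        return (dS * g.unsqueeze(1)).sum(dim=-1)             # delta_s: (B, 2P)

delta_s = MoERegressor(68 * 8, 68)(torch.randn(4, 68 * 8))   # -> (4, 136)
```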
Gating network
In what follows, we note γ : z ∈ Z → x ∈ X any mapping function, and g : x ∈ X → g(x) ∈ [0, 1]^L a gate function, with L the number of expert networks associated with this gate, such that Σ_{l=1}^{L} g_l(x) = 1. In order to learn an adaptive combination of expert networks, two types of gates can be designed.
Softmax gate
The most straightforward way to design a gating function is to use a softmax activation function:

g_l(x) = exp(w_l · x + b_l) / Σ_{k=1}^{L} exp(w_k · x + b_k)

with Θ_g = {w_l, b_l}_{l∈{1,...,L}} the parameters of the gate function. While this function is very simple, it does not allow learning hierarchical partitions of X.
Tree gate
In order to learn a more potent gating network, we use a single neural tree. A neural tree [11] is composed of subsequent soft, probabilistic routing functions d_n, where d_n(x) represents the probability of reaching the left child of node n. Formally, d_n is defined as a single neuron:

d_n(x) = σ(w_n · x + b_n) = e^{w_n·x + b_n} / (1 + e^{w_n·x + b_n})

with Θ_g = {w_n, b_n}_{n∈N} the parameters to learn. For an input x, the probability µ_l of reaching a leaf l ∈ {1, ..., L} is computed as the product of the successive activations d_n down the whole tree:

µ_l(x) = ∏_{n∈N} d_n(x)^{1[l↙n]} (1 − d_n(x))^{1[l↘n]}

where l↙n is true if l belongs to the left subtree of node n, and l↘n is true if l belongs to the right subtree. We define our tree-gate as the concatenation of the 2^D leaf probabilities of a single neural tree of depth D:

g(x) = [µ_1(x), ..., µ_{2^D}(x)]

For extracting representations with a committee of expert layers, a naive solution would be to use the raw image as the gate input, i.e. setting γ = Identity. However, in this case, the raw image is too low-level and cannot be used directly; it is thus preferable to use high-level semantic information computed from I, such as the head pose estimate, by setting γ = Ω (see Section 3.1). By contrast, in the regression layer, the information extracted by the representation layer is already semantically abstract, so it can directly be used as the gate input, by setting γ = Identity. Last but not least, the feature representation, regression and gate parameters are all optimized jointly for each cascade stage.
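A possible PyTorch implementation of such a soft tree gate is sketched below, computing all routing neurons with one linear layer and propagating probability mass level by level; the depth and input size are illustrative (a 3-dimensional input mimics gating on the head pose estimate):

```python
import torch
import torch.nn as nn

class TreeGate(nn.Module):
    """Soft binary tree: each split node is a sigmoid neuron; the output is
    the vector of 2^depth leaf-reaching probabilities (rows sum to 1)."""
    def __init__(self, in_dim, depth):
        super().__init__()
        self.depth = depth
        self.nodes = nn.Linear(in_dim, 2 ** depth - 1)  # all split nodes

    def forward(self, x):                        # x: (B, in_dim)
        d = torch.sigmoid(self.nodes(x))         # (B, num_nodes): P(go left)
        mu = x.new_ones(x.size(0), 1)            # root reached with prob 1
        for level in range(self.depth):
            begin, end = 2 ** level - 1, 2 ** (level + 1) - 1
            d_level = d[:, begin:end]            # split nodes of this level
            # each parent's mass splits into its (left, right) children
            mu = torch.stack([mu * d_level, mu * (1 - d_level)], dim=-1)
            mu = mu.flatten(1)                   # (B, 2^(level+1))
        return mu                                # leaf probabilities

gate = TreeGate(in_dim=3, depth=3)               # e.g. gate on (yaw, pitch, roll)
g = gate(torch.randn(4, 3))                      # -> (4, 8), rows sum to 1
```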
Architectures
Similarly to other cascaded regression approaches, we learn a cascade of mappings s^(0) + Σ_k δs^(k), where s^(0) ∈ R^2P is an initial guess (usually an average shape computed over the whole train set) and each δs^(k) is a displacement estimated by first computing representations from shape-indexed patches centered at the current landmark estimates (as provided by the displacements applied so far) for each expert CNN, using the head pose estimate to weight the expert CNNs according to their relevance via the gating function. From these representations, we compute the landmark displacement, once again using a gated mixture of experts. Then, the landmark localization is updated and the subsequent representation/regression stages are applied sequentially.
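The overall loop can be summarized by the following runnable toy; the ToyStage module is a deliberately simplified stand-in that consumes the current shape directly instead of shape-indexed patches and omits the gated expert layers:

```python
import torch
import torch.nn as nn

class ToyStage(nn.Module):
    """Stand-in for one cascade stage: real stages use the gated expert
    representation (Sec. 3.2) and regression (Sec. 3.3) layers."""
    def __init__(self, feat_dim, num_landmarks):
        super().__init__()
        self.rep = nn.Linear(2 * num_landmarks + 3, feat_dim)  # (shape, pose)
        self.reg = nn.Linear(feat_dim, 2 * num_landmarks)

    def forward(self, s, pose):
        h = torch.relu(self.rep(torch.cat([s, pose], dim=1)))
        return self.reg(h)                       # predicted displacement

def align(mean_shape, pose, stages):
    s = mean_shape.clone()                       # s^(0)
    for stage in stages:                         # K cascade stages
        s = s + stage(s, pose)                   # s^(k) = s^(k-1) + ds^(k)
    return s

stages = [ToyStage(64, 68) for _ in range(4)]
s = align(torch.zeros(2, 136), torch.zeros(2, 3), stages)  # -> (2, 136)
```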
Regarding the regression layer: we define several architectures whose differences only lie in the architecture of the regression layer. In such a case, we use a single CNN (L = 1, g_l = 1 ∀l) for the representation layer of each cascade stage k, composed of 5 strided convolution layers.

Regarding the representation layer: furthermore, instead of using a single deep CNN for learning representations, we use a committee of L_h = 8 expert CNNs, each composed of 5 strided convolution layers with fewer feature maps but as many features per landmark in the last layer. We use a tree-gate based on the previously computed head pose estimate as the gating network for learning representations, and refer to this model as Pose-Tree-MoE. We can also use a softmax-gate, and refer to this model as Pose-Softmax-MoE.
Implementation details: with this configuration, each model has roughly the same total number of parameters (18 million parameters in total for each cascade stage, including the representation/regression layers), allowing a fair comparison between the models. Training is done by optimizing an L2 loss with 150 × 150 grayscale images and 32 × 32 patches centered around the 68 landmarks. The images are normalized so that they take values in [−1, 1]. We train 4 cascade stages and apply data augmentation as is traditionally done in the literature: for each image we augment the initial mean shape by a random translation factor t ∼ N(0, 10) and a random scaling factor s ∼ N(1, 0.1), and half the time a horizontal flip of the image is performed. The parameters (for the representation and regression layers, as well as the gating functions) are optimized jointly in an end-to-end manner by applying the ADAM optimizer [28] with a learning rate of 0.001.
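As an illustration of the shape-initialization augmentation described above (the choice of scaling about the shape centroid is an assumption, not stated in the paper):

```python
import numpy as np

def augment_init_shape(mean_shape, rng):
    """mean_shape: (P, 2) landmark coordinates in image space."""
    t = rng.normal(0.0, 10.0, size=2)   # random translation, t ~ N(0, 10)
    scale = rng.normal(1.0, 0.1)        # random scaling, s ~ N(1, 0.1)
    center = mean_shape.mean(axis=0)
    return (mean_shape - center) * scale + center + t

init = augment_init_shape(np.full((68, 2), 75.0), np.random.default_rng(0))
```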
Experiments
In this section, we validate our model both qualitatively and quantitatively. First, in Section 4.1 we present the datasets that we use to train or test the proposed approaches. We validate the architectural choices in Section 4.2 on frontal head poses. We then compare our model with state-of-the-art approaches in Section 4.3 for both 2D and 3D face alignment. In Section 4.4, we qualitatively assess the relevance of the proposed approach for both the representation and regression layers. Finally, in Section 4.4.4 we evaluate the runtime of our model.
Datasets
We evaluate the effectiveness of the proposed approach both for 2D and 3D face alignment. In the first case, the ground truth landmarks correspond to projections on the visible part of the face. In the latter case, the ground truth corresponds to the real 2D coordinates (without the depth component) of the landmarks, which are often occluded due to large pose variations.
2D Face alignment
The 300W dataset was introduced by the I-BUG team [17] and is considered the benchmark dataset for training and testing face alignment models, with moderate variations in head pose, facial expression and illumination. It also includes a few occluded images. The 300W dataset consists of four datasets: LFPW (811 images for train / 224 images for test), HELEN (2000 images for train / 330 images for test), AFW (337 images for train) and IBUG (135 images for test). As is classically done in the literature for 2D face alignment, we train our models on a concatenation of the AFW, LFPW and HELEN trainsets, which makes a total of 3148 images for train. For comparison with state-of-the-art methods, we refer to the LFPW and HELEN test sets as the common subset and to I-BUG as the challenging subset of 300W, as is commonly done in the literature.
The COFW dataset [2] is an "in-the-wild" dataset containing only occluded data. It is a benchmark dataset for testing the robustness of models w.r.t. partial occlusions. COFW contains 500 images for train and 507 images for test. Our models are trained with 68 landmarks annotated for each training image; however, COFW only contains images with 29 annotated landmarks. Thus, we use the method proposed in [6] to perform a linear mapping from the predictions made on the 68 landmarks to the 29 landmarks, as is common practice on this dataset.
For 2D face alignment, the evaluation metric used is the normalized mean error (NME), corresponding to the average point-to-point distance between the ground truth and the predicted shape, normalized by the inter-pupil distance, as is classically done in the literature:

NME = (1/N) Σ_{i=1}^{N} [ (1/P) Σ_{p=1}^{P} ||ŝ_{i,p} − s_{i,p}||_2 ] / ||g_{i,l} − g_{i,r}||_2

where ŝ_i is the prediction, s_i the ground truth, and g_{i,l}, g_{i,r} the left and right pupil centers, respectively.
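A possible implementation of this metric, with the normalization term left pluggable since the 3D protocol below swaps in a bounding-box term (the sample values are made up):

```python
import numpy as np

def nme(pred, gt, norm):
    """pred, gt: (P, 2) landmark arrays; norm: scalar normalization term."""
    return np.linalg.norm(pred - gt, axis=1).mean() / norm

rng = np.random.default_rng(0)
gt = rng.uniform(0, 150, size=(68, 2))
pred = gt + rng.normal(0, 2, size=(68, 2))
interocular = 40.0                  # ||g_l - g_r||, inter-pupil distance
print(f"NME = {100 * nme(pred, gt, interocular):.2f}%")
# For AFLW2000-3D, replace the normalization by sqrt(h * w) of the bounding box.
```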
3D Face alignment
The 300W-LP database is a large-pose dataset synthesized from 300W, and contains face images with large variations in pose around the yaw axis, ranging from −90° to +90°. The database contains a total of 61225 images obtained by generating additional views of the images from AFW, LFPW, HELEN and I-BUG, using the algorithm from [30]. As is done in state-of-the-art approaches [31], we train on the augmented images corresponding to the 300W trainset as well as their flipped counterparts, making a total of 101144 images for train.
The AFLW2000-3D dataset consists of fitted 3D faces and large-pose images for the first 2000 images of the AFLW database [12]. As was done in [31], we evaluate the capacity of our method to deal with non-frontal poses by training on 300W-LP and testing on AFLW2000-3D. Following [31], we report accuracy for each pose range separately, as well as the mean across those three pose ranges.
3D face alignment consists in localizing the (x, y) coordinates of the "true" landmarks, as opposed to 2D alignment in which the landmarks are projected onto the visible part of the face (e.g. the cheeks in case of rotations around the yaw axis). In this case, the evaluation metric used is also the normalized mean error, but the normalization term is the size of the ground truth bounding box, as introduced in [31]:

NME = (1/N) Σ_{i=1}^{N} [ (1/P) Σ_{p=1}^{P} ||ŝ_{i,p} − s_{i,p}||_2 ] / sqrt(h_i × w_i)

where h_i, w_i are the height and width of the face bounding box, respectively.

Architectures comparison

Results are reported in Table 1. In particular, the non-gated regressor ensemble is more robust than a single regressor: performance is improved by 3.8% on 300W-Common (LFPW + HELEN). Moreover, adding gates improves performance, especially with tree-gates. Robustness to pose is improved thanks to softmax-gates (8.95 → 8.8 on I-BUG) and significantly improved thanks to tree-gates (8.95 → 8.38 on I-BUG). Robustness to occlusions is slightly improved thanks to softmax-gates (5.87 → 5.84 on COFW) and significantly improved thanks to tree-gates (5.87 → 5.76 on COFW). This shows that using tree-gated ensembles of regressors allows to substantially increase the overall robustness of the model, particularly in the case of partial occlusions.
Furthermore, using head pose to gate the MoE CNN models (Pose-Softmax-MoE and Pose-Tree-MoE) allows to significantly increase the alignment accuracy on I-BUG, which contains several examples of non-frontal head poses. This is however not the case for the Pose-Softmax-MoE model on the COFW database, which contains occluded examples. By contrast, the Pose-Tree-MoE model generalizes better on COFW (5.76 → 5.58) and I-BUG (8.38 → 7.5), without significantly degrading performance on frontal faces (4.01 → 4.03 on average on the LFPW and HELEN testsets).
These results show that using ensembles of experts allows for greater robustness, for modelling both the regression and representation layers. The use of gates also allows each expert to be more specialized for a given representation, leading to greater robustness. Last but not least, the hierarchical aspect of tree-gates further improves the use of the expert regressors. Conditioning the learned representation on the head pose estimate and taking advantage of ensemble methods, all the while learning the gates and expert layers jointly, allows these experts to better co-adapt, leading to maximum robustness and accuracy.

Table 2 shows a comparison between our approach and other recent state-of-the-art methods on both the 300W (common and challenging subsets) and COFW databases. Our model outperforms these approaches on both 300W and COFW. The results on COFW show the robustness of our model to occlusions; the alignment error is similar to the human performance on this dataset (5.60 [2]). PCD-CNN [13] essentially uses head pose estimation as a multiplicative variable in a post-hoc processing fashion. By contrast, in Pose-Tree-MoE, head pose is used to select more relevant specialist CNNs to extract adequate features for each head pose range. As one can see, while PCD-CNN is better on 300W-Common, Pose-Tree-MoE significantly outperforms it on both 300W-Challenging and COFW. Therefore, using an ensemble of tree-gated experts appears to be a more robust way to adapt a face alignment network using head pose information, leading to an overall better robustness to large variations in the data.

3D Face alignment

Table 3 shows a comparison between our approach and other recent state-of-the-art methods for 3D face alignment on AFLW2000-3D. The state of the art is achieved by the extended version of 3DDFA [31], which fits a dense 3D face model before estimating a sparse set of 68 landmarks. Our Pose-Tree-MoE achieves performance similar to 3DDFA [31], all the while substantially outperforming it on large head poses in the [60°, 90°] range. Our model also significantly outperforms all other state-of-the-art approaches on this dataset [15,3,29,2,21,24]. Furthermore, contrary to [31], our approach only aligns a sparse set of landmarks, and thus only requires ground truth landmarks for training, as opposed to the parameters of a morphable model. This shows that using a tree-gated committee of expert CNNs allows to learn relevant experts for each pose range, which produce suitable representations upon which the tree-gated MoE layer can adaptively align the facial landmarks. Conditioning the representation using the head pose estimate significantly improves the results on large poses; this is confirmed by the comparison between Pose-Tree-MoE and Tree-MoE.
In what follows, we propose a number of qualitative experiments to assess that the head pose clustering of the expert CNNs behaves as expected.
Qualitative evaluation
In this section, we conduct experiments to provide insight into how the gated models behave, by visualizing the contributions of the tree-gates:
• Interpretability, through hierarchical clustering visualization in the representation layer. This allows us to study how the model splits the pose space in order to extract the representation. In addition, this ensures consistency between the spatial distribution of poses and the use of expert CNNs.
• Efficiency, through the distribution and use of expert FCs in the regression layer. This allows fewer regressors to be used, whose predictions are accurate.
Figure 5 (bottom): Dispersion in head pose space (red axis: yaw, green axis: pitch, blue axis: roll). For more visibility, the data is normalized by shifting each point by the centroid and making it unit norm. At the first level of the tree, the split is mainly due to the yaw orientation, as illustrated by the comparison of the purple/blue images in the left graph with the orange/brown images in the right graph. For the right sub-tree, the split mainly uses the yaw and roll intensities, as shown on the right graph.
Representation layer
As seen in the previous section, integrating head pose information to extract representations significantly improves the robustness to strong variations in pose, and results in a model that exceeds the state of the art. It is therefore interesting to introspect the model in order to study how it behaves. To do this, we propose to visualize the hierarchical clustering performed by our model on a dataset with maximum pose variability, such as AFLW2000-3D. Figure 5 represents the faces of AFLW2000-3D in the pose space, where each face is colored according to the expert CNN with the most weight in the committee. Since a unique color is given to each expert CNN, we can observe the splitting performed by the gates on the dataset. Figure 5 illustrates the repartition in head pose space from a Pose-Tree-MoE model trained on 300W-LP. We can observe that at the first level of the tree, the red axis representing the yaw separates the data associated with the two subtrees respectively. The same can be said for the second tree level; therefore, the model has learned to split the head pose space primarily according to the yaw orientation. This is consistent with the fact that the model was trained on 300W-LP, whose images are essentially augmented in yaw. Figure 6 shows the average of the cumulative sum of the gate probabilities of the expert CNNs, sorted in descending order, on I-BUG. Notice on the right part that the tree-gated model allows 20% of the regressors to explain more than 90% of the final prediction, while the softmax-gated model needs about 40% of the regressors to explain 90% of the prediction. Thus, tree-gates allow to output a correct alignment using fewer expert regressors: the better repartition of the expert regressors towards specific alignment cases makes it possible to better specialize each expert regressor, and to use fewer of them to obtain a better representation. The results for the representation layer are less clear-cut, the Pose-Tree-MoE model lying marginally above the Pose-Softmax-MoE model. This is likely due to the lower number of experts (8 vs 64 for representation vs prediction), indicating that the difference between tree- and softmax-gated MoE models becomes more conspicuous as the number of experts increases. All in all, tree gates promote a more efficient repartition of the experts, and a better specialization thereof, which, in turn, leads to an overall higher accuracy and robustness.

Visualizations

Figure 7 shows the predictions of our model (5th and 11th columns) for large-pose examples from the AFLW2000-3D database, as compared with the ground truth markup (6th and 12th columns). The predicted and ground truth landmark localizations are also plotted along with the estimated and ground truth head pose. For most examples, the head pose estimate is very close to its ground truth counterpart: this allows to select relevant expert CNNs in the representation layer, which gives rise to high-quality landmark alignment even on examples exhibiting large pose variations.

Furthermore, rows 1 to 4 and 7 to 10 show the displacements outputted by only the one top-scoring regressor (as indicated by the associated tree-gate value), from the current shape at the corresponding cascade iteration. It should be noted that, using a single regressor, our Pose-Tree-MoE model can achieve reasonable alignment accuracy, which justifies investigating the use of a restricted (top-k) number of experts in future work, e.g. using greedy evaluation as in [4].
Runtime evaluation
Last but not least, our method is very fast, as it operates at 17.54 ms per image on an NVIDIA GTX 1080 GPU and can thus run at 57 fps. Furthermore, in [18], MoE layers are used to reduce the computational load by keeping only a small number (top-k) of experts. With hierarchical gates, an interesting direction would be to evaluate the tree-gate in a greedy layerwise fashion as in [4], and keep only the regressor corresponding to the maximum-probability leaf to further reduce the computational cost.
Conclusion
In this paper, we have proposed to integrate ensemble methods within a deep architecture in order to increase the overall robustness of the model to large variations in the data. The use of a committee of expert neural networks instead of a single one allows for greater overall robustness. Furthermore, we showed that using a gate function to weight the responses of each expert network allows each of these networks to become more specialized for a given context. In particular, the use of tree-gates makes it possible to jointly learn a committee of expert networks and a hierarchical clustering of the use of these experts. Additionally, using neural trees to model the tree-gates allows learning both the ensemble and the associated gating network in an end-to-end manner.
As such, we showed that tree-gated MoE models can be used for modelling the regressors as well as the feature representation layers, by using high-level semantic information such as head pose as a proxy variable. These tree-gates allow a more efficient clustering and specialization of the experts, leading to higher performance. Furthermore, through thorough experimental validation, we demonstrated that, when applied to face alignment in the frame of cascaded regression, the proposed approach yields high accuracies, most notably on challenging data in terms of head pose and occlusion, while keeping a reasonable computational cost.
As future work, we will investigate the use of a limited number (top-k) of experts for both the representation and regression layers, e.g. using greedy evaluation [4], in order to further decrease the runtime. Furthermore, the tree-MoE architecture introduced in this paper is very generic and could be applied to a wide range of other computer vision problems, such as image classification, semantic segmentation, or object detection.
Figure 7: Visualisations of the predictions output at each cascade stage with only the top (maximum tree-gate value) regressor. Head pose estimation is also displayed, as well as the ground truth. Images from AFLW2000-3D. | 2019-10-21T15:30:20.000Z | 2019-10-21T00:00:00.000 | {
"year": 2019,
"sha1": "3a0ffdc12b5d3d5f90b321cf3b0785ada0570a36",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1910.09450",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3a0ffdc12b5d3d5f90b321cf3b0785ada0570a36",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
269246083 | pes2o/s2orc | v3-fos-license | Referral practices and treatment of obstructive sleep apnea in pregnancies with obesity
Abstract Objective Obstructive sleep apnea (OSA) affects maternal and neonatal health during pregnancy. This study aimed to identify characteristics and comorbidities associated with sleep clinic referral in high-risk pregnancies with Body Mass Index (BMI) ≥35 kg/m2. Method Retrospective cohort study of individuals in a high-risk pregnancy clinic at a tertiary Australian hospital from 1 January to 31 December 2020 with BMI ≥35 kg/m2. The primary outcome measure was sleep clinic referral. Exposure data included multiple comorbidities and formal tools (Epworth Sleepiness Scale and STOP-BANG). Multivariable analysis was used to identify factors associated with referral. Descriptive data on barriers to diagnosis and treatment were collected. Results Of 161 pregnant individuals, 38.5% were screened using formal tools and 13.7% were referred to a sleep clinic. Having STOP-BANG performed was associated with sleep clinic referral (Odds Ratio: 18.04, 95% Confidence Interval: 4.5-71.7, p < 0.001). No clinical characteristics were associated with the likelihood of performing STOP-BANG. The COVID-19 pandemic was a treatment barrier for three individuals. Conclusions Current screening practices identify pregnant individuals with the highest pre-test probability of having OSA. Future research should evaluate real-world strategies to improve identification and management in this high-risk population.
| INTRODUCTION
Obstructive Sleep Apnea (OSA) is an important comorbidity of pregnancy, as it is associated with increased maternal and neonatal morbidity. [3][4][5] Maternal OSA is also associated with preterm birth and small-for-gestational-age infants. 2 There are also likely longitudinal effects; moderate OSA in high-risk pregnancies is associated with an increased risk of developmental delay in children aged 6-36 months. 6 The prevalence of OSA (defined as an Apnea Hypopnea Index [AHI] ≥5) in pregnant individuals with an elevated Body Mass Index (BMI) appears to be high. The prevalence of OSA was as high as 43.3% measured between 24 and 32 weeks of gestation for those with BMI ≥40 kg/m2. 4 For those with elevated BMI, the pre-test probability of antenatal OSA is high, and accurate identification and timely management are important to potentially prevent maternal and fetal complications.
It is unclear how OSA is currently screened for in real-world high-risk obstetric clinics, and there are currently no guideline recommendations despite the knowledge that OSA is a significant risk to both mother and infant. A survey of obstetric anesthesiologists found that 82.7% did not have departmental guidelines for the assessment and management of OSA in pregnancy. 7 Referral rates increase with a streamlined referral pipeline, but completion rates for sleep studies are still suboptimal, even when individuals are seen in a specialist obstetric sleep clinic. 8 Traditional screening tools such as STOP-BANG perform poorly for OSA in pregnancy. [11][12] These traditional tools incorporate characteristics irrelevant to most pregnant individuals, for example, male gender and age over 50 in STOP-BANG. Other screening tools have been developed specifically for pregnancy, for example, Facco and colleagues' tool, which utilizes frequent snoring, chronic hypertension, age and BMI in its calculation. 13 There is little published on the barriers to OSA diagnosis and treatment in pregnancy. Positive airway pressure (PAP) therapy is the mainstay of treatment and is associated with decreased diastolic blood pressure and risk of pre-eclampsia in high-risk pregnancies. 14 Previously, low suspicion for OSA, inconvenience, and concerns about testing and treatment equipment have been documented as barriers to OSA testing. 8 COVID-19 lockdowns and OSA diagnosis late in pregnancy may also be barriers to initiation and continuation of OSA treatment.
| Aims and hypotheses
This study aimed to identify the individual characteristics and comorbidities associated with referral to a sleep clinic for pregnancies seen in the Bariatric, Multidisciplinary Clinic (BuMP clinic) at an Australian tertiary hospital, using a retrospective cohort study. As part of this clinic, it is anticipated that most pregnant individuals will be screened for sleepiness and OSA, using the ESS and STOP-BANG, respectively, as well as general questioning around sleep and somnolence. It is not clear how formal screening tools are used in making referral decisions and whether clinicians also utilize information such as demographics, symptoms and relevant comorbidities (e.g., hypertension, previous pre-eclampsia, BMI).
Secondarily, this study aimed to identify the barriers to OSA diagnosis and treatment in individuals from the 'BuMP clinic' who attended the sleep clinic. This was collected descriptively from patient records.
| Study setting, design and exclusion criteria
This study collected retrospective data from the medical records of all pregnant individuals seen in the 'BuMP Clinic' for high-risk pregnancies with BMI ≥35 kg/m2 who gave birth between 1 January 2020 and 31 December 2020. The BuMP clinic is an obstetrician-led service in a tertiary hospital in Canberra, Australia, which serves approximately 650,000 people. 15 Individuals in the BuMP clinic are also seen by midwives, diabetes educators and endocrinologists. There are no sleep physicians directly involved in the clinic. As a publicly funded service, patients receive medical consultations and polysomnography with no out-of-pocket costs, but were required to fund their own therapy if OSA was diagnosed.
Individuals were excluded if they gave birth at another hospital.
If a participant had more than one pregnancy during the study period, only the first was included.
| Data collection
Manual file audit and automatic data extraction from the Birthing Outcomes System (BOS) were used by the first author to collect data on multiple demographic characteristics and comorbidities. The primary outcome measure was referral to a sleep clinic (Yes/No).
Other variables are shown in Table 1 and included patient demographics, cardiometabolic and respiratory comorbidities, data on screening outcomes (Epworth Sleepiness Scale 16 and STOP-BANG score) and pregnancy characteristics. For individuals referred to the sleep clinic, data were collected on whether they attended and then completed polysomnography (PSG). If PSG was completed, the Apnea-Hypopnea Index (AHI), Oxygen Desaturation Index (ODI), gestation at both diagnosis and treatment (in weeks) and attendance at post-partum follow-up were recorded.
| Statistical analysis
Normality was tested graphically via histograms. Continuous variables were presented as mean (standard deviation) or median (1st, 3rd quartile) when normality was not met. Categorical variables were presented as frequencies and relative frequencies. To compare continuous variables between the Referred and Not Referred groups, t-tests were used when normal distribution was met. The Mann-Whitney U Test was used to compare continuous variables when the distribution was not normal. Chi-square was used to determine the relationship between categorical variables and referral status. A nested multivariable logistic regression model was performed to determine variables independently associated with referral to a sleep clinic (referral vs. non-referral). The variables were added in blocks of clinical relevance and significance: the first model included age (in years) and booking BMI (in kg/m2); the second model included age, booking BMI, having STOP-BANG performed and indigenous status; and the third model included the variables from the first two models as well as history of ≥2 miscarriages, hypertension after 20 weeks, current smoker status, gestational diabetes in current pregnancy and history of asthma. The STOP-BANG score was not used in the model as it utilizes other variables included in the multivariable model (BMI, age and history of hypertension). Statistical significance was set at alpha = 0.05. Analysis was performed using IBM® SPSS® Statistics Version 28. 17
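The analysis was run in SPSS; purely as an illustration of the nested-blocks design described above, an equivalent model sequence could be fitted as follows (the column names are hypothetical stand-ins for the study variables):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical column names standing in for the study variables.
BLOCKS = [
    ["age", "booking_bmi"],
    ["age", "booking_bmi", "stopbang_performed", "indigenous_status"],
    ["age", "booking_bmi", "stopbang_performed", "indigenous_status",
     "miscarriages_ge2", "htn_after_20wk", "current_smoker",
     "gdm_current", "asthma_history"],
]

def fit_nested_models(df: pd.DataFrame, outcome: str = "referred"):
    """Fit each block of predictors in turn, mirroring the nested
    multivariable logistic regression described in the text."""
    for i, cols in enumerate(BLOCKS, start=1):
        model = sm.Logit(df[outcome], sm.add_constant(df[cols])).fit(disp=0)
        print(f"Model {i}: AIC = {model.aic:.1f}")
        print(model.params.round(3))
```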
| Factors associated with referral to sleep clinic
During 2020, 161 individuals were seen in the BuMP clinic with BMI ≥35 kg/m2, and 22 of those individuals (13.7%) were referred to the sleep clinic. All were singleton pregnancies. Two individuals (1.2%) had pre-pregnancy OSA documented in clinic notes. It was intended that all individuals should have both STOP-BANG and ESS performed, but this was only done for 58 individuals (36%). The STOP-BANG was performed for 62 individuals (38.5%) and the ESS was also performed for 62 individuals (38.5%), but some patients only had one or the other tool administered.
Table 1 shows the demographic and clinical characteristics collected and the differences between the individuals referred and not referred to the sleep clinic on univariate analysis. Individuals referred to the sleep clinic were more likely to have had the ESS and STOP-BANG performed, and their scores on these tools were significantly higher. They were also more likely to have a history of non-gestational diabetes and a history of more miscarriages.
Having a STOP-BANG performed was the only variable significantly associated with referral to sleep clinic in a nested multivariable logistic regression model (see Table 2).
| Factors associated with completion of STOP-BANG screening for OSA
Given the significant effect of STOP-BANG completion on an individual's referral to a sleep clinic, we investigated patient and clinical characteristics that might determine whether a clinician is more likely to complete this tool. No variables predicted the completion of a STOP-BANG in a nested multivariable regression model (Table 3).
| Results for individuals who completed polysomnography (PSG)
Of those referred to the sleep clinic, 18 individuals (81.8%) attended and 14 (63.6%) completed PSG. Results can be seen in Table 4. Of those who completed PSG, 12 individuals (85.7%) had an Apnea-Hypopnea Index (AHI) ≥5, consistent with a diagnosis of OSA.
Facco and colleagues proposed a screening tool based on age, BMI, chronic hypertension and frequent snoring. 13 The score is calculated using the formula [(15 if frequent snoring) + (15 if chronic hypertension) + age + BMI]. A score of 75 or above indicates probable OSA, with a specificity of 74%. Although data on frequent snoring were not available for our retrospective population (STOP-BANG asks about loud snoring), 80/161 (49.7%) individuals included in our study had a score of 75 or above, indicating they likely have OSA based on their age, BMI and presence of chronic hypertension. This number would likely have been higher with the inclusion of snoring data.
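As a minimal sketch, the published formula quoted above translates directly into code (75 is the cut-off used in our comparison):

```python
def pregnancy_osa_score(age_years: float, bmi: float,
                        frequent_snoring: bool, chronic_htn: bool) -> float:
    """Facco-style screening score:
    (15 if frequent snoring) + (15 if chronic hypertension) + age + BMI."""
    return 15 * frequent_snoring + 15 * chronic_htn + age_years + bmi

# A 32-year-old with BMI 40 and chronic hypertension scores 87,
# above the 75 cut-off even without snoring data.
assert pregnancy_osa_score(32, 40, False, True) >= 75
```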
Furthermore, individuals who were ultimately diagnosed with OSA in the current study had high median ESS and STOP-BANG scores (13.5 and 4, respectively), suggesting that only those with the highest pre-test probability of having OSA were identified. There is likely a significant amount of undetected OSA in this group of pregnant individuals with BMI ≥35 kg/m2.
In further exploration of this argument, data from the current study were compared with unpublished data from the Canberra Obesity Management Service (COMS), 19 where patients are systematically screened for OSA and referred to a sleep clinic as appropriate. In the COMS cohort, the mean BMI was higher than in our cohort (standard deviation 9.7) and all individuals were screened for OSA. Thirty-one (37.8%) were referred for a sleep study and 27 of these (87.0%) completed PSG. All those who completed PSG were diagnosed with OSA. Although the COMS population's mean BMI was higher than that of our high-risk pregnant population and the COMS population was studied pre-COVID, the referral rates to sleep clinic were substantially higher (37.8% in COMS compared with 13.7% in the high-risk pregnancy clinic). This is likely attributable to universal screening in the COMS population. Similarly, in a United Kingdom-based bariatric clinic (both males and females) with universal screening for OSA and a mean BMI of 48.7 kg/m2, there was a very high prevalence of OSA of 73%. 20 Therefore, it appears that the screening performed in the 'BuMP' clinic's population is likely only to identify those with the highest pre-test probability of having OSA.
To improve OSA screening and treatment in high-risk pregnancies, multiple changes are recommended to overcome the various barriers observed in clinical practice (see Figure 1). The first and most obvious solution is improved or universal screening in high-risk pregnancy clinics and implementing a streamlined referral pipeline or multidisciplinary clinics. 8 Clinician awareness also matters: in the survey of obstetric anesthesiologists cited earlier, 7 respondents commonly considered OSA in pregnant individuals with obesity and essential hypertension, but were less likely to consider OSA in individuals with pre-eclampsia and gestational diabetes. Furthermore, the optimal timing of screening needs to be considered. This is likely between 12 and 18 weeks, to allow enough time for meaningful treatment, 22 but later screening should still be performed, especially for women at high risk or with significant related comorbidities, for example, signs of right heart failure. Early gestational screening for OSA in individuals with chronic hypertension is also beneficial. 23 Unfortunately, the gestation at which OSA screening occurred was also not recorded in our study.
Another consideration in improving the identification of OSA in high-risk pregnancies is the screening method. As discussed, there is no clinical gold standard for screening of OSA in pregnancy, and this area would benefit from further research. In Australia, the STOP-BANG and ESS are commonly used due to their inclusion in the Medicare Benefits Schedule, that is, patients can qualify for funded PSG prior to sleep physician assessment. However, as demonstrated in many previous studies, the sensitivity and specificity of these tools for OSA in high-risk pregnancies with BMI ≥35 kg/m2 can be poor. 11,12 The use of other pregnancy-specific tools should be considered, as the use of BMI as a continuous rather than categorical variable appears to improve screening sensitivity in pregnant populations. 13 Finally, even with improved screening, there are barriers to diagnosis and treatment that need to be addressed. Those identified as at high risk of OSA during screening need timely access to sleep services. Support for socioeconomic disadvantage needs to be considered for access to PSG and PAP therapy. Further research is also needed on the effectiveness of PAP and other treatments and their impact on pregnancy complications.
This study adds to a very limited body of evidence on the practice of obstetric physicians when it comes to referring individuals with high-risk pregnancies for sleep clinic review. It identifies a significant gap in clinical practice that not only requires further research but would benefit from more specific guidelines from obstetric and midwifery professional bodies. This study was limited by its retrospective design, and some data were incomplete. For example, data on ethnicity were not easily available, and ethnicity likely modifies the effect of BMI on pregnancy-related complications such as OSA. 24 The study represents the practice at one tertiary hospital in Australia, so results may vary depending on how pregnancy and sleep services are delivered in other centers and countries. The data collection period was also during the COVID-19 pandemic, when interactions with the health system were altered. 25 Given the small number of individuals who were diagnosed with and treated for OSA, this study may not have captured all the important barriers to clinical practice in this area.
There is still much to be done when it comes to identifying OSA in high-risk pregnancies in day-to-day clinical practice. Obstetricians in this Australian center are referring a small proportion of individuals seen in high-risk pregnancy clinics for specialist sleep physician review and are primarily using formal screening tools as a basis for this referral. It is likely that referral rates would be much higher with systematic screening using either standard or pregnancy-specific screening tools. 13 Referral rates are probably similar in other tertiary institutions, especially in the absence of formal multidisciplinary clinics or streamlined referral pathways.
This study documents the real-life challenges of OSA identification and management in high-risk pregnancies and the need for more rigorous and effective screening pathways. Future research should evaluate the effectiveness of strategies for improving clinical practice in managing OSA in high-risk pregnancies.
Table 1: Individual characteristics by referral to sleep clinic using univariate analysis (N = 161).

2.4 | Barriers to diagnosis and treatment

Barriers to diagnosis (i.e., completion of a sleep study) were documented for individuals who attended sleep clinics using descriptive information from clinical records. For those who completed polysomnographic testing, barriers to initiation and maintenance of treatment were also documented descriptively. The Australian Capital Territory (ACT) Health Human Research Ethics Committee provided a waiver for this research (ACT Reference 2022.LRE.00109) to proceed as a quality assurance activity.
For 16 of the 18 individuals who attended the sleep clinic, PSG was recommended because of a clinical history consistent with OSA. Two individuals declined the offer of PSG. For 10 of the 12 individuals diagnosed with OSA, the treating clinician recommended CPAP therapy. For the two individuals in whom CPAP therapy was not recommended, the clinicians cited low severity of disease and lack of symptoms in their reasoning. Three individuals had virtual sleep clinic consultations due to COVID-19 lockdown, with emailed CPAP scripts. They did not have any follow-up, so it was unclear whether this treatment was initiated or tolerated. One patient was unable to commence treatment due to late gestation at diagnosis (36 weeks) and development of pre-eclampsia. Another patient did not complete PSG and OSA treatment until post-partum due to late gestation at referral. The remaining five individuals did not have significant barriers to OSA treatment. Of the 14 individuals who attended a sleep clinic antenatally, nine (64%) were followed up post-partum. The individuals who did not attend follow-up either did not have treatment (n = 2) or had their CPAP prescriptions emailed without further follow-up organized (n = 3).

Table 2: Nested multivariable model showing individual demographics and clinical characteristics associated with referral to sleep clinic.

4 | DISCUSSION

This retrospective study found that obstetricians working at this tertiary hospital in Australia are referring a small proportion (13.7%) of individuals seen in a high-risk pregnancy clinic for specialized sleep assessment. These clinicians are primarily using formal tools (STOP-BANG or ESS) as the criterion for sleep clinic referral, performed in 38.5% of individuals. Although it was expected that screening would be performed for all individuals in the clinic, this was not the case in practice. There were no demographic or clinical characteristics that affected the clinicians' decision to refer to a sleep clinic or to complete a formal screening tool. The decision to formally screen for OSA was possibly clinician- or gestation-dependent, but these data were not documented. This study also descriptively documented barriers to diagnosis and treatment of OSA for the small number of individuals who attended sleep clinics and completed PSG. Two individuals were referred to the sleep clinic at a late gestation, which affected their diagnosis and optimal management. Impacts of the COVID-19 pandemic were noted; it likely played a role in access to services, with virtual CPAP prescription and no post-partum follow-up for three individuals. This is consistent with other research showing that the diagnosis and management of OSA was more challenging during the COVID-19 pandemic. 18 Laboratories in Australia were not completely closed down, but management changed significantly, with virtual clinic consultations and a reduction in polysomnography services. Implementation and troubleshooting of PAP therapy was challenging, as it is considered a potentially aerosol-generating procedure. The COVID-19 pandemic likely impacted negatively on referral rates to sleep clinics, rates of polysomnography testing and initiation of OSA treatment in high-risk pregnancies.
a Alpha set at 0.05.

Table 4: Sleep study results for individuals who completed polysomnography.
Table 3: Nested multivariable model showing individual demographics and clinical characteristics associated with completion of the STOP-BANG tool. | 2024-04-21T05:05:55.007Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "3a7324b6dfe719bc30177ff43726df89666e090a",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3a7324b6dfe719bc30177ff43726df89666e090a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
213243652 | pes2o/s2orc | v3-fos-license | About the features of the verification of VLSI class “System on a chip” for complex information systems
The article discusses the hierarchical levels of the verification process for VLSI of the System-on-Chip class, as well as for information systems built on their basis. The increasing complexity of modern information systems creates additional requirements for the composition and complexity of the verification of such systems, including at the early stages of their development. An approach to verification is proposed that combines software tools for modeling digital circuits at the lower levels of abstraction with specialized verification tools developed for a specific information system, implementing verification primarily at the system level. The proposed approach will enhance the performance of modeling VLSI of the System-on-Chip class, which will improve the reliability of verification of complex systems by improving test coverage. It is noted that when mutually asynchronous fragments of a VLSI are connected, modeling must be carried out at a high level, capable of identifying problems of data transmission between individual synchronous subsystems.
Introduction
The increasing complexity of information systems (IS) and of the VLSI that form the basis of their hardware increases the importance of verification at high levels of abstraction. System-on-Chip class VLSI are themselves complex objects to verify, and their integration into an information system adds further levels of verification, determined by the interaction between the components of a complex system and, possibly, by interaction with the "real world" (for example, network activity, physical effects on the system as a whole, and other effects that cannot be considered at the VLSI level). The issues of building complex ICs and the related additional questions are currently the subject of research from the point of view of both digital circuitry [1][2][3][4][5][6][7][8] and system engineering [9,10]. Currently, in the field of information systems, several levels of testing and verification are distinguished, including: unit tests, performed at the level of a VLSI or its components; integration tests, performed to verify the correctness of the connections between the VLSI or IS components and their interaction; and system tests, performed to verify the correctness of IS behavior when interacting with control objects in a real environment. The practical effect of identifying levels of verification is the possibility of applying different approaches and tools at each level; as the level of abstraction rises, the performance of verification or modeling increases, because checks of low-level parts that were verified earlier can be omitted. However, an important task is to identify the level of detail of the model that is sufficient at each level.
Causes of malfunction in VLSI forming information systems
Due to the high complexity of information systems, which include not only VLSI but also peripheral devices that interact with their environment, the sources of operational errors are of different natures. These causes can be considered by grouping them according to the level of system design at which it is possible to analyze them and eliminate their root causes.
At the level of VLSI or their modules, structural defects may occur, caused both by architectural errors and by technological errors in manufacturing. In addition to an improperly designed circuit, a problem may be caused by a combination of temperature, supply voltage and variation of technological parameters in the manufacture of a specific VLSI sample, which causes its behavior to deviate from the expected one. In modern CAD systems, this type of error is subject to monitoring and elimination with the help of tools grouped under the concept of Static Timing Analysis (STA). A separate class of problems arises when signals cross between clock domains: the metastable state. A metastable state is a flip-flop operating mode in which its output has an intermediate voltage level, perceived by different components as either a logical zero or a logical one (Fig. 3). This is an undesirable mode into which flip-flops are forced if the voltage at the data input changes immediately before the arrival of the clock signal. According to the specification, each flip-flop has a setup time and a hold time: the time intervals before and after the clock edge during which the signal level at its input D must not change. Metastability is probabilistic in nature and is explained by the switching behavior of the transistors of which the flip-flop consists, not by any particular design technique of digital electronics. A metastable signal can originate, for example, from a mechanical switch, or from any external chip that is not clocked from the internal clock generator. Any design that uses two clock signals is potentially subject to metastability, which should be eliminated, or at least have its likelihood of occurrence reduced. Even when the nominal values of the clock frequencies are equal or multiples of each other, normal operation is possible only in an ideal case unattainable in practice: when the edges of both clock signals appear at strictly defined points in time.
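As a simple illustration of the setup/hold window just described, the sketch below flags a data transition that falls inside the forbidden interval around a clock edge; the names are illustrative, and real designs rely on STA tools rather than ad hoc checks like this.

```python
def violates_setup_hold(data_edge_ns: float, clk_edge_ns: float,
                        t_setup_ns: float, t_hold_ns: float) -> bool:
    """True if a data transition lands inside the setup/hold window around
    a clock edge, i.e. the flip-flop may enter a metastable state."""
    return clk_edge_ns - t_setup_ns < data_edge_ns < clk_edge_ns + t_hold_ns

# A transition 0.1 ns before a clock edge violates a 0.2 ns setup time.
assert violates_setup_hold(9.9, 10.0, t_setup_ns=0.2, t_hold_ns=0.1)
```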
VLSI Verification Levels in Information Systems
For most synchronous components in a VLSI, performing Static Timing Analysis is sufficient. Since the design constraints set for VLSI clock circuits (create_clock) are analyzed by modern CAD systems with a sufficiently high level of adequacy, positive STA results can serve as the basis for conclusions about the reliable operation of the synchronous node as a whole.
However, it should be borne in mind that, even at the level of individual VLSI, there is currently a tendency to separate clock domains in order to reduce the complexity of routing clock circuits over a large die area.
A common mistake is to attempt to simulate a design in order to prove that there are no metastability problems. The simulation performed by a VLSI CAD system is not able to reveal metastability, since this process is probabilistic in nature and, moreover, is not purely digital. Even detecting that the setup or hold time conditions are violated for some flip-flop does not answer the question of what state that flip-flop will settle into, given the temperature, supply voltage and technological variation of parameters.
A practical way to address metastability is correct resynchronization of signals transmitted from one clock domain to another. Where possible, dual-port memory should be used, with each port connected to the appropriate clock signal. A reliable way to transmit a one-bit signal is to use a chain of 2 or 3 flip-flops, while preventing the synthesizer from optimizing this circuit away. It is recommended to describe resynchronization nodes as separate modules and to check the details of their implementation after synthesis.
Based on the above, verification of the correct operation of a resynchronization scheme cannot be performed at the VLSI CAD level. Verification of such a design should therefore be performed at the integration or system level, abstracting from the behavior of specific circuits and treating the resynchronization circuit as a 'black box' or 'gray box'. Such a chain can be modeled by the latency it introduces, measured in receiver clock cycles, under the assumption that the metastable state occurs regularly. For a dual-port memory, it is necessary to simulate the appearance of the "data received" flag with additional latency that excludes premature data reception, covering the case where the metastable state did not arise for the ready flag but did arise for the received data.
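A high-level model of such a resynchronization chain can thus be reduced to the latency it adds. The following sketch makes the pessimistic assumption that metastability occurs on every crossing; all names and parameters are illustrative.

```python
def resynchronized(events, sync_stages=2, extra_cycle_on_metastable=True):
    """Model a 2-3 flip-flop synchronizer as pure latency in receiver
    clock cycles: each event is delayed by `sync_stages` cycles, plus one
    more cycle to cover late resolution of a metastable state.

    `events` is an iterable of (receiver_cycle, value) pairs.
    """
    penalty = sync_stages + (1 if extra_cycle_on_metastable else 0)
    return [(cycle + penalty, value) for cycle, value in events]

# A flag raised at receiver cycle 100 is only trusted from cycle 103.
assert resynchronized([(100, 1)]) == [(103, 1)]
```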
The next level of abstraction may include such processes as the reception of packets in communication systems or the appearance of the information-bearing components of signals in measuring systems. Since these processes belong to the information environment of the IS and cannot be adequately modeled at the CAD level, simulating them requires developing special software that takes the characteristics of the domain into account in order to correctly reproduce behavior patterns external to the IS.
Levels of abstraction in the verification of VLSI
The verification problems considered above for FPGA-based designs indicate the increasing complexity of the verification process and the associated separation of levels of abstraction. These levels imply the use of the following types of verification (see table). For correct verification, it is necessary to ensure an appropriate level of test coverage. While for static timing analysis the test coverage is mainly determined by the CAD system (including analysis of the effect of temperature, supply voltage and technological variation), reproducing situations arising from the interaction of the synchronous components of a VLSI, and between the VLSI and its environment, is specifically a task for purpose-built software.
It is possible to specify obvious cases that call for developing software designed for a specific IS or class of ISs.
1. Network systems, for which the quality parameters are high-level characteristics such as average exchange rate, percentage of lost packets, etc., whose analysis does not reduce to examining individual VLSI signals. In addition, the formal correctness of the VLSI interconnection does not answer questions about the interaction of network infrastructure elements at the higher levels of the OSI model (i.e., above the MAC level, which is still subject to verification at the level of static timing analysis).
2. Measuring systems, including those based on digital signal processing, for which metrological characteristics are relevant. Although these characteristics can in principle be obtained from the results of static timing analysis, such VLSI-level modeling is too time-consuming for building a frequency response, and should therefore be replaced by modeling at a higher level of abstraction.
For the examples given, the performance check at the system level is decisive and allows a decision to be made about the feasibility of continuing development before the low-level details of the design are implemented.
Conclusion
When developing information systems based on digital VLSI, separating the levels of abstraction in modeling and verification increases development productivity and helps identify possible system malfunctions in the early design stages, including at the system modeling level, without involving specialists in the field of digital circuitry. | 2019-11-22T00:55:07.498Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "5642164aea1e5928ac47519e8a443d37de15f0a8",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1333/2/022011/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0bcaafb0f4ffc0822e613178775318dcc094d3e5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14293288 | pes2o/s2orc | v3-fos-license | Modulation of microglia by Wolfberry on the survival of retinal ganglion cells in a rat ocular hypertension model
The active component of Wolfberry (Lycium barbarum), lycium barbarum polysaccharides (LBP), has been shown to be neuroprotective to retinal ganglion cells (RGCs) against ocular hypertension (OH). To study whether this neuroprotection is mediated via the modulation of immune cells in the retina, we used multiphoton confocal microscopy to investigate morphological changes of microglia in whole-mounted retinas. Retinas under OH displayed slightly activated microglia. LBP at 1 to 100 mg/kg exerted the best neuroprotection and elicited moderately activated microglia in the inner retina, with a ramified appearance but thicker and focally enlarged processes. Intravitreous injection of bacterial endotoxin lipopolysaccharide (LPS) decreased the survival of RGCs at 4 weeks, and the activated microglia exhibited an amoeboid appearance, the fully activated phenotype. When activation of microglia was attenuated by intravitreous injection of macrophage/microglia inhibitory factor, the protective effect of 10 mg/kg LBP was attenuated. These results imply that the neuroprotective effects of LBP are partly due to modulation of the activation of microglia.
Introduction
Wolfberry (fruit of Lycium barbarum Linn., family Solanaceae) is known as Fructus Lycii and L. barbarum in the West and as Gouqizi or Kei Tze in Asia. In the traditional Chinese medicine literature, it has been known for more than 2,500 years for balancing "Yin" and "Yang" in the body, nourishing the liver and kidney, and improving visual acuity [1]. Wolfberry contains about 40% polysaccharides [lycium barbarum polysaccharides (LBP)]; therefore, research on Wolfberry often focuses on these water-soluble fractions. LBP as a food supplement enhances the body's defense system by restoring the atrophied thymus in aged subjects and regulates the proliferation and immune activity of splenocytes and T cells [2][3][4][5][6]. It has been shown that LBP can increase the phagocytic activity of macrophages and the immune activity of cytotoxic T cells and natural killer (NK) cells in cyclophosphamide-treated and S180-bearing mice [7][8][9]. LBP increases interleukin-2 (IL-2) receptors on isolated human peripheral lymphocytes [10]. LBP can be purified into different fractions; the glycoconjugate LBP 3P can increase the messenger RNA and protein levels of IL-2 and tumor necrosis factor-α (TNF-α) in human peripheral blood mononuclear cells [2] and increase phagocytosis by macrophages, antibodies secreted by spleen cells, spleen lymphocyte proliferation, and cytotoxic T cell activity in S180-bearing mice [9]. Our previous study reported the neuroprotective effects of LBP on RGCs in an experimental model of glaucoma [11]. However, it is unclear whether this neuroprotection is mediated via modulating immune cells in the retina, which is our aim in this study.
Increasing lines of evidence obtained from clinical and experimental studies strongly suggest an aberrant activity of the immune system in glaucoma [12,13]. Microglial cells are the major immunocompetent cells in the central nervous system (CNS). It has been reported that microglia have diverse phenotypes, which secrete beneficial or destructive factors [14]. Activated microglia have been considered endogenous malefactors in the CNS; they induce neuronal death by releasing excess cytotoxic factors such as superoxide [15], nitric oxide, and TNF-α [16][17][18]. However, increasing lines of evidence have shown that the protective effects of microglia can be accomplished by the release of trophic and anti-inflammatory factors [19][20][21][22][23][24]. Whether microglia exhibit neuroprotective or neurodestructive effects depends on the disease state and the type of stimulus. There is increasing in vitro evidence showing that it is possible to manipulate the activation state of microglia so that their activation becomes beneficial, i.e., protecting rather than destroying neurons [25]. However, it is difficult to achieve this goal in vivo, especially in a chronic neurodegenerative model.
A primary objective of this study was to evaluate the modulation of retinal microglia by LBP and its relationship to the neuroprotective effect on the survival of RGCs in a chronic ocular hypertension (OH) model. We studied the morphology of microglia in OH retinas from rats fed with different doses of LBP. In addition, the effect on the survival of RGCs after administration of either a microglia activation inhibitor, macrophage/microglia inhibitory factor (MIF), or a microglia activation stimulator, bacterial endotoxin lipopolysaccharide (LPS), was evaluated in OH rats.
Preparation of LBP
The Wolfberry originated from NingXia Huizu Autonomous Region, the People's Republic of China. The simplified extraction scheme of LBP from Wolfberry [26] has been reported by our group. Briefly, the dried wolfberries (10 kg) were ground to a fine powder and defatted by refluxing with 95% ethanol. The insoluble residue was filtered, air-dried, and extracted successively with hot water at 70 °C. The concentrated extract was incubated with trichloroacetic acid, extensively dialyzed against running distilled water, concentrated, and then precipitated using 95% ethanol. After centrifugation and several rinses with absolute ethanol and acetone, the resulting precipitate was vacuum-dried at 40 °C to yield a brown powder Wolfberry extract, LBP (2 g).
Animal grouping
Sixty-six adult female Sprague-Dawley rats (250-280 g) were obtained from the Laboratory Animal Unit of the LKS Faculty of Medicine of the University of Hong Kong and were maintained in a temperature-controlled room with a 12-h light/dark cycle throughout the observation period. The animals were handled according to the protocol for the use of animals in research approved by the Committee on the Use of Live Animals in Teaching and Research of the University of Hong Kong and the Association for Research in Vision and Ophthalmology (ARVO, USA) statements for the use of animals in ophthalmic and vision research. Prior to measuring intraocular pressure (IOP) or any other operation, the rats were anesthetized with an intraperitoneal injection of a ketamine/xylazine mixture (ketamine 80 mg/kg and xylazine 8 mg/kg; Alfasan, Woerden, Holland). Prior to every ocular manipulation (including IOP measurement, laser treatment, and intravitreous injection), one drop of proparacaine hydrochloride (0.5% Alcaine, Alcon-Couvreur, Belgium) was applied to the eyes as a topical anesthetic. After every ocular manipulation, ophthalmic Tobrex ointment (3% tobramycin, Alcon-Couvreur, Belgium) was applied topically to the eyes to prevent infection. All operations were performed under an operating microscope (Olympus OME, Tokyo, Japan).
The animals were divided into 11 groups, and every group consisted of six rats (Table 1). The LBP powder was dissolved in 0.01 M sterilized phosphate-buffered saline (PBS; pH 7.4). Animals were fed daily through a nasogastric tube with 1 ml of either PBS or one of several dosages of LBP: 1, 10, 100, or 1,000 mg/kg. Daily feeding (groups 2-10) started 7 days before the first laser treatment and continued until euthanization of the rats. IOP was measured before the first laser treatment (as baseline) and before killing (postoperative). A total of two laser photocoagulation treatments were performed at a 1-week interval.
Ocular hypertension model
OH was induced in the right eye of each animal using laser photocoagulation, according to our previous publications [11,27,28]. Briefly, the limbal vein and the three radial episcleral aqueous humor drainage veins (superior nasal, superior temporal, and inferior temporal) were photocoagulated using an Argon laser (Ultima 2000SE Argon Laser, Coherent, USA). About 60 laser spots (power, 1,000 mW; spot size, 50-100 µm; duration, 0.1 s) were applied around the limbal vein (except in the nasal area), and 15-20 laser spots were applied to each episcleral aqueous humor drainage vein. To maintain a high IOP, a second laser treatment at the same settings was applied 7 days later.
Measurement of IOP
IOP was measured with a Tonopen XL tonometer (Mentor®, Norwell, USA) before the first laser treatment and every subsequent week until the rats were killed. To avoid diurnal variation and effect of anesthesia, all IOP measurements were taken at 10 A.M. and within 15-30 min after administration of ketamine and xylazine mixture (i.p.). An average of ten measurements was used to determine the IOP of the eye.
Intravitreous injection
Tuftsin fragment 1-3 acetate salt, also known as MIF, was purchased from Sigma (St Louis, MO, USA); bacterial endotoxin lipopolysaccharide (LPS) derived from Escherichia coli O111:B4 was purchased from Calbiochem (La Jolla, CA, USA). To study the influence of MIF on the survival of RGCs and microglia activation, immediately after the first laser treatment, 2 µl of 0.01 M PBS (group 7) or 172 ng of MIF (2.5 mM) in PBS (group 8) was injected into the vitreous cavity of the right eye [29] of the 10 mg/kg LBP-fed group. To demonstrate fully activated microglia in the retina, 5 µg of LPS in 2 µl PBS was injected into the vitreous cavity of the right eye (group 11) immediately after the first laser treatment. Animals with cataract, intraocular bleeding, retinal detachment, or non-elevated IOP were excluded from this study (∼15% of experimental animals).
Retrograde labeling of RGCs
To evaluate the drug effects on RGCs, they were retrogradely labeled by applying FG to the surface of the superior colliculus (SC) 4 days prior to euthanization [30]. Briefly, the rat scalp was cut open at the mid-line, and a small hole was drilled in the skull on each side of the sagittal suture. The four edges of the SC can be observed directly under the operating microscope after removing the overlying cerebral cortex. Then, a thin layer of gelatin sponge (UpJohn, Kalamazoo, MI, USA) pre-soaked with 6% FG (Fluorochrome, Denver, CO, USA) was placed on the surface of the SC (FG is taken up by the axon terminals of RGCs and bilaterally transported retrogradely to their somata in the retina). Then, the scalp was sutured, and an analgesic, buprenorphine (100 mg/kg), was orally administered for 5 days for pain relief.
Counting of RGCs and statistical analysis
Table 1 (fragment): No laser, 6 rats (group 1), normal control; laser treatment + fed with 0.01 M PBS, 6 (group 2) and 6 (group 9), solvent control for drug treatment; laser treatment + fed with 1 mg/kg LBP, 6 (group 3) and 6 (group 10), LBP feeding dose-response study; laser treatment + fed with 10 mg/kg LBP, 6 (group 4); laser treatment + fed with 100 mg/kg LBP, 6 (group 5); laser treatment + fed with 1000 mg/kg LBP, 6 (group 6); laser treatment + fed with 10 mg/kg LBP + i.

At different time points, the rats were killed with an overdose of a mixture of ketamine/xylazine after measuring the IOP. Both eyes were enucleated and post-fixed in 4% paraformaldehyde for 60 min, then cut horizontally into superior and inferior eyecups. The superior eyecups with intact optic nerves were fixed overnight and processed to make paraffin blocks. Retinas from the inferior eyecups were dissected from the underlying sclera, and two cuts were made to divide the retina into three (nasal, inferior, and temporal) quadrants. The dissected retinas were then flattened with the vitreal side up and mounted using fluorescent mounting medium (Dako, Carpentaria, CA, USA). The FG-labeled RGCs (FG particles in the cytoplasm) were visualized at ×40 magnification using a fluorescent microscope with a UV-385 filter (Nikon, Kawasaki, Japan). Photos of RGCs were taken (200×200 µm2/microscope field) at 500-µm separations along the median line of each quadrant (eight microscopic fields/quadrant), starting from the optic disc to the peripheral border of the retina. After counting of the RGCs with the aid of computer software developed by us (manuscript in preparation), the results were manually double-checked by a person who was blinded to the grouping. The average density of RGCs was calculated for the entire retina.
To evaluate the different effects of the various treatments, changes in the density of FG-labeled RGCs were expressed as a percentage loss of FG-labeled RGCs, comparing the laser-treated right eye and the normal control eye:

Percentage loss of RGCs = (Density of RGCs in the normal eye - Density of RGCs in the right eye with OH) / (Density of RGCs in the normal eye) x 100%

The percentage loss of FG-labeled RGCs in different treatment groups was compared using one-way analysis of variance followed by a post hoc Tukey multiple comparison test (SigmaStat®; statistical significance is noted as p<0.05).
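The percentage-loss metric above is straightforward to compute; this small sketch uses illustrative densities, not the study's data:

```python
def rgc_loss_percent(density_normal_eye: float, density_oh_eye: float) -> float:
    """Percentage loss of FG-labeled RGCs, as defined above:
    (normal - OH) / normal x 100%."""
    return (density_normal_eye - density_oh_eye) / density_normal_eye * 100.0

# A drop from 2000 to 1660 cells per field corresponds to a 17% loss.
assert round(rgc_loss_percent(2000, 1660)) == 17
```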
Immunohistochemistry of microglia in retinal sections
Retinal sections from different groups were handled at the same time for each primary antibody to avoid bench-to-bench variation. Four-micrometer cross-retinal sections with intact optic nerves were used to detect OX42 immunoreactivity. The sections were deparaffinized and boiled in citric acid buffer (0.01 M, pH 6.0, 15 min). Catalytic enhancement was performed by incubation in 1% H2O2 for 15 min. Following washing and blocking, retinal sections were incubated with mouse anti-rat monoclonal OX42 primary antibody (1:25, Pharmingen, California, USA) overnight at 4 °C. After further washing, retinal sections were incubated with biotinylated goat anti-mouse secondary antibody (Molecular Probe) at room temperature for 1 h. Then, the sections were washed with TBS and incubated with avidin-biotin complex (Vector Lab, USA) for 1 h. The slides were washed twice in TBS and once in imidazole-acetate buffer (175 mM acetate, 10 mM imidazole, pH 7.2), for 10 min each. The sections were further incubated with 3,3-diaminobenzidine (DAB; 10 mM NiSO4, 125 mM acetate, 10 mM imidazole, 0.03% DAB, 0.003% H2O2, pH 7.2) for 10 min. The sections were mounted with Permount medium. OX42-immunoreactive signals were observed under a light microscope. Images were captured using the Spot-Advance Digital system (Spot RT; Diagnostic Instruments, Sciscope Instrument Companies, USA). The specificity of the antibody was tested by omission of the primary antibody.
Immunohistochemistry of microglia in flat-mounted retina
In the 4-week study (groups 9, 10, and 11), the flat-mounted retinas were carefully removed from the slides and rehydrated in 0.1 M PB using a 48-well plate. There were six rats in each group, and three retinas from each group were used for the flat-mounted immunohistochemical study. The retinas were washed in 0.1 M PB with constant shaking at 4 °C overnight to wash out the fluorescent mounting medium. They were then blocked with 10% normal goat serum in 0.1 M PB containing 1% Triton X-100 for about 2 h at 4 °C. After three washes in ice-cold 0.1 M PB, the retinas were incubated with ionized calcium-binding adaptor molecule 1 (iba-1) primary antibody (1:800; Wako Chemicals USA, Richmond, USA) for 3 days at 4 °C. To visualize microglia, the retinas were incubated with Alexa-594 fluorescent-conjugated secondary antibody (1:800; Molecular Probe, USA) for 1 h at room temperature. To visualize the different layers of the retina, they were further incubated with 0.2% 4',6-diamidino-2-phenylindole (DAPI) for 1 h at room temperature. Between incubation steps, there were three rounds of 0.1 M PB washing for 10 min each. Finally, the retinas were flat-mounted using fluorescent mounting medium with the vitreal side facing upward.
Under the fluorescent microscope, the iba-1 signal was checked throughout the retina, and no regional difference was observed. Therefore, one representative retinal area of 230×230 µm2 at about 2,500 µm from the optic disc of each retina was scanned at ×40 magnification using an LSM-510Meta multiphoton confocal microscope (Carl Zeiss, Jena, Germany). All images were taken under exactly the same excitation attenuation to avoid bias in the judgment of immunoreactivity. The gain level was identical for all groups in order to demonstrate the best resolution of the microglia in the normal control group.
Guided by the morphology of the DAPI-stained nuclei, scanning started from the surface of the inner limiting membrane and proceeded to the outer nuclear layer. The Z interval was 1 µm between focal planes and, depending on the retinal thickness, about 80 planes were scanned for each retina. On average, the scanning time for each retina was around 1 h. Vertical views configured with the LSM software illustrated that most of the microglia were in the inner retina (from the inner limiting membrane to the outermost layer of the inner nuclear layer). Stacked images of different focal planes in the inner retina were created using the LSM software in order to display the entire microglial cell bodies and their processes. To further demonstrate the morphology of the different states of microglia, representative single cells were chosen from the ×40 magnification region and then re-scanned under ×63 magnification. The scan started from the upper border of the processes, to the soma, and then to the lower border of the processes; the Z interval was also 1 µm.
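The stacked inner-retina images were produced inside the LSM software; an equivalent operation, shown purely for illustration, is a maximum-intensity projection over the inner-retina planes of the Z-stack:

```python
import numpy as np

def project_inner_retina(z_stack: np.ndarray, top_plane: int, bottom_plane: int) -> np.ndarray:
    """Collapse the inner-retina planes of a confocal Z-stack, shaped
    (z, y, x), into a single image by maximum-intensity projection."""
    return z_stack[top_plane:bottom_plane].max(axis=0)

# e.g. an 80-plane stack scanned at 1 um intervals, projecting planes 0-40.
stack = np.zeros((80, 512, 512), dtype=np.uint16)
image = project_inner_retina(stack, 0, 40)
```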
Results
Laser photocoagulation increased the IOP of the right (OH) eyes to about 1.7 times that of the contralateral control eyes (23.4 vs. 13.9 mmHg). Oral feeding of PBS or various doses of LBP did not alter the level of IOP in any of the animals.
Monoclonal OX42 antibody, which recognizes the expression of complement receptor 3 on microglia, was used to detect all states of microglia (from resting to fully activated) in the retinal sections. In normal retinas (group 1), only process-like OX42 signals without identifiable perikarya or cell bodies were observed (Fig. 1a). At 2 weeks after the first laser-induced OH, there was a loss of 17% of RGCs in the PBS-fed rats (group 2) [11]. OX42-positive microglia were observed in the inner retina [including the ganglion cell layer (GCL), inner plexiform layer, and inner nuclear layer (INL)] with ramified morphology (Fig. 1b). These cells were slightly activated and exhibited small perikarya with long thin branching processes that were detectable in 4-μm sections. We repeated the LBP-fed doses that previously showed significant protection of the survival of RGCs (groups 3, 4, and 5). The loss of RGCs was 1% at 1 mg/kg, 0% at 10 mg/kg, and 2.4% at 100 mg/kg [11]. In LBP-fed rats, both the number and the intensity of OX42-positive microglia increased in parallel with the increase in LBP feeding dosage. In animals administered 1 to 100 mg/kg LBP, the majority of microglia in the retinas of groups 3-5 showed ramified morphology (Fig 1c-e). However, the processes became thicker, and the branching increased compared with the PBS group. The maximum radial extent of the cell (soma axis + longest process) was about 50 μm. We defined this as a moderate activation status. In the 1,000 mg/kg group (group 6), there was much less protection of RGCs, as shown by us previously [11]. In the retinas of this group of animals, most of the OX42-immunoreactive microglia were intensely stained, showing the fully activated status with amoeboid shape (Fig. 1f). The perikarya were enlarged, showing a coarse and swollen appearance. The processes were shortened and thickened with few branches. The maximum radial extent of these cells (soma axis + longest process) was less than 30 μm. In addition, the entire area of the OX42-immunoreactive cells, including the processes, was markedly reduced. Thus, there seems to be a dose-related correlation of microglia activation status with the neuroprotective effect of LBP. A moderate activation of microglia may be correlated with the neuroprotective effect of LBP in the 1-100 mg/kg groups. However, in the 1,000 mg/kg group, less neuroprotection is linked to a fully activated status of microglia.

Figure 1: In the OH retina from the PBS feeding group (b), ramified microglia can be detected. In the OH retina of animals receiving different doses of the Wolfberry extract, 1 mg/kg (c), 10 mg/kg (d) and 100 mg/kg (e), there was an increase both in the number and immune intensity of OX42 microglia. In the 1000 mg/kg group (f), an increased number of fully activated microglia was detected. They contained coarse and swollen perikarya connected with thick processes. Scale bar is 50 μm.
In order to further illustrate the detailed morphology of the various states of microglia, a multiphoton laser-scanning microscope was used to reconstruct entire microglia in whole-mounted retinas. For this purpose, polyclonal ionized calcium-binding adaptor molecule 1 (iba-1) primary antibody was used. The iba-1 protein is specifically localized in microglia and is not found in neurons, astrocytes, or oligodendroglia [31,32]. Expression of iba-1 is enhanced when microglia are activated [33]. The retinas were first flat-mounted for counting of RGCs and then refloated to go through the immunohistochemical staining. The morphology of microglia was investigated using a multiphoton laser-scanning microscope (LSM-510Meta, Zeiss).
At 4 weeks after the first laser photocoagulation, confocal images made by stacking all Z layers in the inner retina (from the nerve fiber layer to the INL) showed slightly activated microglia in the PBS-fed group (group 9); they displayed small perikarya and two or more thin branching processes that were longer than the soma diameter (Fig 2a). In 1 mg/kg LBP-fed rats, microglia displayed increased iba-1 immunoreactivity in both the soma and the processes. There was enlargement of the soma and regional thickening of the processes (Fig 2b). At 4 weeks, LBP significantly reduced the loss of FG-labeled cells from 21.1% (PBS fed) to 6.6% (as previously reported [11]). Therefore, consistent with the 2-week observations, moderately activated microglia after administration of LBP were linked to neuroprotection of RGCs in OH rats.

Figure 2: Morphology of iba-1 immunoreactive microglia in flat-mounted retinas at four weeks after the first laser photocoagulation. Confocal images made by stacking all Z layers in the inner retina (from the nerve fiber layer to the inner nuclear layer) showed that the resting microglia in the PBS-fed group displayed small perikarya and two or more thin branching processes, which are longer than the soma diameter (a). In LBP-fed rats, the microglia displayed increased iba-1 immunoreactivity in both the soma and processes. There was enlargement of the soma and regional thickening of the processes (b). Scale bar is 20 μm.

To test whether the moderately activated microglia have neuroprotective effects, activation of microglia was attenuated using intravitreous injection of MIF. Compared with intravitreous injection of PBS (1.8±2.5%), there was a greater loss of RGCs (9.7±1.1%, p=0.026) after intravitreous injection of MIF (Fig. 5b) in the LBP-fed rats. Intravitreous injection of PBS did not affect the survival of RGCs in LBP-fed rats following induction of OH. The elevated IOP was not altered by intravitreous injection of MIF or PBS (Fig. 5a).
Will the neuroprotective effect be affected if microglia are fully activated? To test the effect of fully activated microglia in the retina under OH, bacterial endotoxin LPS was intravitreously injected immediately after the first laser treatment. The effect on the survival of RGCs was observed at the 4-week time point. Intravitreous injection of LPS significantly increased the loss of RGCs from 21.1±1.5% (PBS fed) to 28.1±1.9% (p<0.05, Fig. 3b). Neither the neuroprotective effect of LBP nor the neurodestructive effect of LPS on the survival of RGCs was linked to any change in IOP after the laser treatment (Fig. 3a).
The morphology of one representative microglial cell at resting, moderately activated, and fully activated status was scanned at ×63 magnification. In normal retina, resting microglia (diameter, ∼50 μm) exhibited a ramified shape with small nuclei and long thin processes, and were located in the inner retina with almost no overlapping of processes (Fig. 4a). At 4 weeks of OH, microglia in the LBP-fed rats (Fig 4b) showed a moderately activated morphology with increased iba-1 immunoreactivity in the soma and processes. The processes were shortened with focal enlargement compared with resting microglia. Intravitreous injection of LPS at 5 μg greatly altered microglia from a resting state to a fully activated state in the hypertensive eyes; immunoreactivity of iba-1 in the microglia was dramatically increased. They displayed enlarged nuclei and significantly thicker and shorter processes (Fig. 4c).

Figure 3: There was no significant difference in IOP among the PBS-fed, LBP-fed, and LPS intraocular injection groups. Compared with the PBS-fed control (b), 1 mg/kg LBP daily feeding significantly reduced the loss of RGCs at four weeks (**p<0.001), while LPS intraocular injection increased the loss of RGCs (*p<0.05). Error bars represent SEM. The LBP result was reported previously [11] and is reproduced here for comparison.
Discussion
Our results suggest that the neuroprotective effects of LBP are partly due to modulation of the activation status of microglia. Concomitant with the neuroprotective effect of LBP (1-100 mg/kg), microglia in the retina are moderately activated. Confocal images of the moderately activated microglia in the inner retina exhibit a ramified morphology with thicker and focally enlarged processes, distinguishing them from resting microglia. This morphology is also different from what we observed when using LPS to elicit fully activated microglia. The appearance of this form of microglia is correlated with the neuroprotection of LBP, because the use of MIF to inhibit activation of microglia can attenuate the neuroprotective effect of LBP.
Microglial cells are considered to be the resident immune cells of the CNS. In the normal mature brain, "resting" microglia constantly extend their processes, with extension and retraction of the processes as well as motile filopodium-like protrusions [34,35]. They are appropriately named "surveillance" microglia because they actively search for and detect signals in their neighboring environment [36]. Activation of microglia has been considered a stepwise transformation from a resting state (ramified morphology) to an activated state (amoeboid morphology) in response to pathological stimuli. Based on new findings from in vivo imaging [34,35], activation of microglia should no longer be considered an all-or-none or one-step event. As pointed out by Hanisch and Kettenmann, the transition between resting and activated states should be considered a change in function [36]. Activation of microglial cells can result in different morphologies with diverse functions. Engagement of microglia can be either neuroprotective or neurotoxic, resulting in containment or aggravation of disease progression [14,36].

Fig. 3 There was no significant difference in the eyes among PBS-fed, LBP-fed, and LPS intraocular injection groups. Compared with PBS-fed control (b), 1 mg/kg LBP daily feeding significantly reduced the loss of RGCs at four weeks (**p<0.001), while LPS intraocular injection increased the loss of RGCs (*p<0.05). Error bar represents SEM. The LBP result was reported previously [11] and reproduced here for comparison.
Our study provides in vivo evidence that a moderately activated microglial morphology correlates with the survival of neurons in the ocular hypertensive retina. The important role of moderately activated microglia was further supported by the use of MIF in our experiment. Both in vitro and in vivo experiments have previously shown that MIF directly inhibits the activation of microglia/macrophages [37][38][39] and has no direct effect on neuron viability [40]. In our experiment, the use of MIF attenuated the neuroprotective effect of LBP. These results suggest that activation of microglia at least partially contributed to the neuroprotective effect of LBP.
LPS has long been considered a potent stimulus for microglia. Intravitreous injection of LPS decreased the survival of RGCs at 4 weeks and stimulated microglia to an amoeboid morphology, which is consistent with the morphology of activated microglia described previously (Fig. 4c). The increased death of RGCs should not be due to a direct neurotoxic effect of LPS, as it has been shown that LPS does not exert direct neurotoxicity on neurons [41]. Therefore, fully activated microglia are neurodestructive rather than neuroprotective.
The modulation of body immunity is often one of the first indicators used to assess how a Chinese medicine improves overall body health [42][43][44]. It has been shown that LBP can increase the phagocytic activity of macrophages, antibody secretion by spleen cells, lymphocyte proliferation in the spleen, and the activity of cytotoxic T cells [7][8][9]. In vitro, LBP has been demonstrated to induce maturation of murine bone marrow-derived dendritic cells to secrete IL-12 p40 and to increase the expression of the membrane markers I-A/I-E and CD11c [45]. This study provides the first in vivo evidence that the neuroprotective effects of LBP in a rat glaucoma model may be partly due to the modulation of microglia in the retina.
There is also a possibility that LBP provides direct protection of the RGCs against OH. LBP has been shown to improve cognitive functions by enhancing the spontaneous electrical activity of the hippocampus in vivo [46]. In line with its direct cytoprotective and anti-aging effects, LBP has been shown to counteract β-amyloid peptide toxicity in primary neuronal cell culture [26,[47][48][49]. In addition to its anti-oxidant effects [50,51], LBP can inhibit two key pro-apoptotic signaling pathways (JNK and PKR) in Aβ peptide neurotoxicity [47,48,52,53]. Recently, a new arabinogalactan protein (LBP-III) isolated from LBP was reported to attenuate the Aβ peptide-triggered caspase-3-like activity and the phosphorylation of PKR [26]. Therefore, pro-apoptotic signaling pathways, including PKR, JNK and caspase-3-like activity, should also be evaluated in LBP neuroprotection against apoptotic RGC death in experimental glaucoma.
According to the theory of Chinese medicine, Wolfberry (L. barbarum) may modulate the energy flow known as "Qi" in our body, meaning that it can modulate one organ indirectly by affecting other organs. For example, 10 mg/kg LBP can significantly reduce blood glucose, nitric oxide, and malondialdehyde levels in streptozotocin-induced diabetic rats [54]. Therefore, in our previous study reporting neuroprotective effects of LBP on the survival of RGCs against OH, vital organs were also collected to comprehensively study LBP biological mechanisms in other organ systems. We hope to use this experimental glaucoma model as an example to illustrate both the "direct" and possible "indirect" effects of Wolfberry as proposed by Chang and So [1]. These results may guide us on how to make use of Wolfberry for therapeutic intervention of glaucoma in the future. | 2014-10-01T00:00:00.000Z | 2009-09-01T00:00:00.000 | {
"year": 2009,
"sha1": "9dea92686cdbc42a1c1383967f08200afc64fbe6",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12177-009-9023-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "CiteSeerX",
"pdf_hash": "9dea92686cdbc42a1c1383967f08200afc64fbe6",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236275932 | pes2o/s2orc | v3-fos-license | EURADOS Intercomparison on the Usage of the ICRP/ICRU Adult Reference Computational Phantoms
The European Radiation Dosimetry Group, EURADOS, has organised an intercomparison study on the usage of the ICRP/ICRU voxel reference computational phantoms together with radiation transport codes. Volunteer participants were invited to solve specific tasks and provide solutions to the organisers before a set deadline. The tasks to be solved are of practical interest in occupational, environmental and medical dosimetry. The aims of this training activity were to investigate whether the phantoms had been correctly implemented in the radiation transport codes and to give the participants the opportunity to check their own calculations against quality-assured master solutions and improve their approach, if needed.
Introduction
EURADOS, the European Radiation Dosimetry Group, is a network of more than 75 European institutions and 600 scientists, coordinated in working groups, that - among other activities - organises scientific research meetings and training activities as well as intercomparison and benchmark studies.
Since most radiation transport codes are rather complex, many -especially novice -users are applying them as "black boxes", sometimes failing to realise whether the parameters chosen are indeed suitable for the tasks to be solved. This is one of the reasons why EURADOS aims at improving this situation by organising intercomparison studies (Broggio et al., 2012;Gómez-Ros et al., 2008;Gualdrini et al., 2005;Price et al., 2006;Siebert et al., 2006;Tanner et al., 2004;Vrba et al., 2015;Vrba et al., 2014), in which participants are invited to solve proposed computational tasks and check their results against both quality-assured so-called "master solutions" provided by EURADOS and the solutions of other participants.
EURADOS Working Group 6 "Computational Dosimetry" recently organised an intercomparison study on the usage of the ICRP/ICRU adult reference computational phantoms (ICRP, 2009) that aimed to investigate whether participants were able to correctly combine the phantoms with the radiation transport codes used, and if they were able to correctly apply ICRP guidance on the evaluation of specific dosimetric quantities such as organ absorbed and/or equivalent dose (in particular for the red bone marrow) (ICRP, 2010) and effective dose (ICRP, 2007). The purpose of this article is to summarise the general aspects of the intercomparison exercise.
Phantoms
Two phantoms of the human body were to be used in the intercomparison exercise. These are the male and female adult reference computational phantoms as described in ICRP Publication 110 (ICRP, 2009). The phantoms are based on the voxel models "Golem" (Zankl and Wittmann, 2001) and "Laura", which are in turn based on medical image data of real people whose body height and mass resembled the reference anatomical and physiological parameters for both male and female subjects given in Publication 89 (ICRP, 2002). For construction of the reference phantoms, several modification steps were applied to the segmented phantoms Golem and Laura. These were: voxel scaling to match reference height and reference skeleton mass; inclusion of further anatomical details, such as a greater amount of blood vessels, bronchi, and lymphatic nodes; sub-segmentation of the skeleton; matching the organ masses of both models to the ICRP data on the adult Reference Male and Reference Female without compromising their anatomic realism; and adjusting the whole-body masses to 73 and 60 kg for the male and female reference computational phantoms, respectively, by "wrapping" the body with additional layers of adipose tissue.
The adult male and female reference computational phantoms are shown in Figure 1, and their main characteristics are summarised in Table 1. The phantom data are given as an ASCII file consisting of an array of organ identification numbers listed slice by slice; within each slice, row by row; and within each row, column by column (ICRP, 2009). The elemental composition and density for the material assigned to every organ ID are also provided. The data are publicly available as an online supplement to ICRP Publication 110.
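Because the organ IDs are listed slice by slice, row by row and column by column, the ASCII file maps directly onto a three-dimensional array. The sketch below shows one way this could be read with NumPy; the file name, the whitespace-separated layout and the grid dimensions are assumptions made for illustration, and the authoritative format description and dimensions are those given in the ICRP Publication 110 supplement.

```python
import numpy as np

# Assumed grid dimensions (columns, rows, slices) for the adult male
# phantom; verify against the ICRP 110 supplement before use.
NX, NY, NZ = 254, 127, 222

# Flat read of whitespace-separated organ IDs, then reshape following
# the stated ordering: slice by slice, row by row, column by column.
ids = np.loadtxt("AM.dat", dtype=np.int16).ravel()  # hypothetical file name
phantom = ids.reshape((NZ, NY, NX))                 # index: [slice, row, column]

# Example use: number of voxels carrying a given organ ID.
organ_id = 95  # hypothetical ID; see the ICRP 110 organ list
print(np.count_nonzero(phantom == organ_id))
```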
Skills to be tested

Participants were invited to attempt to solve the tasks and submit their results to the organisers. The exercise tested several skills of the participants: in particular, whether they understood the correct application of a variety of normalisation quantities (e.g., air kerma free in air, kerma-area product, activity concentration), and whether they were able to correctly apply the methods for red bone marrow and endosteum dosimetry as recommended in ICRP Publication 116 (ICRP, 2010). Further aims of the intercomparison exercise were to provide an opportunity for the participants to improve their computational procedures via feedback and to identify common pitfalls.
Tasks to be solved

The tasks to be solved considered a variety of exposure scenarios (occupational, environmental and medical) and radiation types (photons, electrons, neutrons). More specifically, these were:

A photon point source in front of the phantoms at 125 cm from the bottom of the feet and 100 cm from the chest: the aim was to calculate organ absorbed doses for both reference computational phantoms, and the effective dose, for a Co-60 source with an activity of 1 GBq and a ten-minute exposure time.
A neutron point source in front of the phantoms at 125 cm from the bottom of the feet and 100 cm from the chest: The aim was to calculate organ absorbed doses for both reference computational phantoms and the effective dose for a 1 minute exposure to a 1 GBq source of 10 keV neutrons.
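For both point-source tasks, Monte Carlo tallies are typically reported per source particle and must be scaled by the number of particles emitted during the exposure. A minimal sketch for the Co-60 case follows; the tally value is a placeholder, and the assumption of two photons per decay (the 1.17 and 1.33 MeV lines, neglecting minor emissions) is made here for illustration rather than being part of the task specification.

```python
# Sketch: converting a Monte Carlo organ-dose tally, assumed to be
# reported in Gy per source photon, into the absorbed dose for the
# Co-60 photon task (1 GBq, ten minutes).
activity_bq = 1.0e9            # source activity, decays per second
exposure_s = 10.0 * 60.0       # ten-minute exposure
photons_per_decay = 2.0        # assumed: the 1.17 and 1.33 MeV lines only

n_source_photons = activity_bq * exposure_s * photons_per_decay  # 1.2e12

dose_per_photon_gy = 3.5e-16   # placeholder tally value for one organ
organ_dose_gy = dose_per_photon_gy * n_source_photons
print(f"organ absorbed dose: {organ_dose_gy:.3e} Gy")
```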
Am-241 ground contamination: The contamination is assumed to be contained within a disc of radius 2 m, with the anthropomorphic phantom standing at its center, and is deposited on the surface of a concrete floor; a uniform ground contamination is assumed, with an emission rate of 1 photon per cm 2 per second. The aim was to calculate organ absorbed dose rates for both reference computational phantoms, as well as the effective dose rate.
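For this task the natural normalisation is per emitted photon, with the total emission rate following from the stated surface emission rate and the disc area. A sketch, again with a placeholder tally value:

```python
import math

# Sketch: dose-rate normalisation for the Am-241 ground-contamination
# task. The stated source term is 1 photon per cm^2 per second over a
# disc of radius 2 m; the tally value below is a placeholder.
emission_rate_per_cm2 = 1.0
disc_area_cm2 = math.pi * 200.0 ** 2                        # 2 m = 200 cm
photons_per_second = emission_rate_per_cm2 * disc_area_cm2  # ~1.257e5

dose_per_photon_gy = 1.0e-18                                # placeholder tally value
print(f"organ dose rate: {dose_per_photon_gy * photons_per_second:.3e} Gy/s")
```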
Immersion in a radionuclide source homogeneously distributed inside a room: The ICRP/ICRU adult male reference phantom is located at the center of a confined room filled with N-16 contaminated air. The aim was to calculate organ equivalent dose rates per activity concentration.
Typical x-ray examinations: The aim of this exercise was to calculate organ absorbed dose conversion coefficients per air kerma and per kerma-area product for the male and female reference computational phantom for two typical x-ray examinations (chest PA and abdomen AP).
Internal dosimetry: The aims of this exercise were to evaluate (1) absorbed fractions and specific absorbed fractions of energy in specified "target" organs for (1a) monoenergetic photons and (1b) monoenergetic electrons distributed homogeneously in specific "source" organs of both phantoms, and (2) S-values for the same source and target organ combinations for specific radionuclides.
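For part (2), S-values can be assembled from the specific absorbed fractions of part (1) together with the radionuclide's decay scheme, using the relation S = sum_i y_i E_i SAF(E_i). The sketch below illustrates this for an invented two-line photon emitter; the energies, yields and SAF values are placeholders, not data from the exercise.

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def s_value(emissions, saf):
    """
    S value (Gy per nuclear transformation) for one source->target pair:
    S = sum_i y_i * E_i * SAF(E_i), with E_i in MeV, yield y_i per decay,
    and SAF in kg^-1.
    """
    return sum(y * e * MEV_TO_J * saf[e] for e, y in emissions)

# Invented two-line photon emitter: (energy in MeV, yield per decay),
# with SAFs (kg^-1) taken from the task (1) results in a real workflow.
emissions = [(0.5, 0.9), (1.0, 0.1)]
saf = {0.5: 2.1e-2, 1.0: 1.8e-2}
print(f"S = {s_value(emissions, saf):.3e} Gy per decay")
```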
Approach chosen
Each of the tasks was supervised by two or three members of EURADOS WG6. One person was responsible for providing a master solution, the correctness of which had to be ascertained by second/third calculations by the other members supporting the task.
A collection of the task specifications was announced on the EURADOS website (http://www.eurados.org/) and distributed to various mailing lists for recruiting potential participants in May 2018. Each interested participant was free to solve one or several of the problems, according to his/her knowledge, interest, and time to be devoted to the participation.
The participants had to provide their solutions to the person responsible for each specific task by a specified deadline. Microsoft Excel templates for entering the participants' solutions in a pre-defined format were provided in order to ease evaluation by the responsible persons. The templates also contained a general part asking for personal and affiliation details, as well as information about the transport code used and its version, the cross-section libraries, the cutoff values chosen, the potential use of the kerma approximation, and the method of bone dosimetry applied. In case the latter deviated from the ICRP 116 method (ICRP, 2010), participants were requested to explain their method in detail. The solutions were evaluated, and feedback to the participants was provided in spring and summer 2019; the aim was to resolve potential mistakes through direct contact between the responsible persons and the participants. The final deadline for revised solutions was at the end of May 2020.
Although the results are presented anonymously, all participants were invited to co-author the manuscripts that contain the detailed analyses of the results of the tasks to which they contributed, and some have accepted the offer. The respective articles are part of the present Special Issue of Radiation Measurements.
Response and general findings
The Intercomparison Exercise was well-received by the computational dosimetry community. 32 participants from 17 countries submitted solutions to at least one of the proposed tasks; some participants solved several or even all tasks. The agreement of the submitted solutions with the master solutions was very variable -ranging from excellent agreement to discrepancies of several orders of magnitude in single cases. Several participants were found to have had problems in correctly applying the ICRP recommended method of red bone marrow dosimetry using dose response functions (ICRP, 2010). This is the reason why there is a specific article in this Special Issue that describes this method in more detail (Zankl et al., 2021).
Many problems in the initial submissions could be solved by feedback between the participants and the persons responsible for each task. In most cases, the participants then resubmitted a revised set of results. Some initial errors were attributable to simple carelessness, such as copy-and-paste errors or mis-arranging the results in the given template. Sometimes there was a misunderstanding concerning the normalisation quantity, e.g., normalising to the correct quantity but at a different distance from the source than was asked for. These errors were mostly easy to find. There were, however, cases where the participants did not disclose how they changed their computational procedure to obtain a revised solution; in these cases, unfortunately, no knowledge about the nature of the initial mis-comprehensions could be gained.
One general finding was that some participants provided results that were obviously wrong, although simple plausibility checks, or comparison with literature data for similar exposure conditions, would likely have revealed the errors.
Conclusion
The tasks that were set in the EURADOS intercomparison exercise are of practical interest in the fields of medical physics as well as occupational and environmental radiation protection. A correct simulation of the proposed tasks with computer codes requires an appropriate knowledge of the physical quantities involved and the ability to combine the ICRP/ICRU reference computational phantoms correctly with radiation transport codes.
The main scope of the intercomparison exercise was to offer an open forum for discussion and training in the field of computational dosimetry. In many cases, initial errors made by the participants were easy to find and eliminate. In some other cases, however, no knowledge about potential miscomprehensions could be gained due to the participants not disclosing how they improved their computational procedure.
One general conclusion is also that there was sometimes a lack of awareness of the necessity to quality-assure computational results, for example with the help of plausibility checks or comparison with literature data for similar exposure conditions.
The present intercomparison exercise demonstrated once more that these types of study are beneficial to the field of computational dosimetry. Besides training the participants directly by improving their computational procedures via feedback with the task organisers, they lead also to the availability of representative dose values for various exposure conditions that may aid future novice users in the quality assurance of their methods. | 2021-07-26T00:05:06.076Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "6d31ffd7aea037227d6f1a53d44e5d5d0a641dc5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2112.03831",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1e6c3517512deaefe9459a0f8898f151a901b2af",
"s2fieldsofstudy": [
"Medicine",
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
3450147 | pes2o/s2orc | v3-fos-license | CAR/FoxP3-engineered T regulatory cells target the CNS and suppress EAE upon intranasal delivery
Background Multiple sclerosis (MS) is an autoimmune disease of the central nervous system (CNS). In the murine experimental autoimmune encephalomyelitis (EAE) model of MS, T regulatory (Treg) cell therapy has proved to be beneficial, but generation of stable CNS-targeting Tregs needs further development. Here, we propose gene engineering to achieve CNS-targeting Tregs from naïve CD4 cells and demonstrate their efficacy in the EAE model. Methods CD4+ T cells were modified utilizing a lentiviral vector system to express a chimeric antigen receptor (CAR) targeting myelin oligodendrocyte glycoprotein (MOG) in trans with the murine FoxP3 gene that drives Treg differentiation. The cells were evaluated in vitro for suppressive capacity and in C57BL/6 mice to treat EAE. Cells were administered by intranasal (i.n.) cell delivery. Results The engineered Tregs demonstrated suppressive capacity in vitro and could efficiently access various regions in the brain via i.n cell delivery. Clinical score 3 EAE mice were treated and the engineered Tregs suppressed ongoing encephalomyelitis as demonstrated by reduced disease symptoms as well as decreased IL-12 and IFNgamma mRNAs in brain tissue. Immunohistochemical markers for myelination (MBP) and reactive astrogliosis (GFAP) confirmed recovery in mice treated with engineered Tregs compared to controls. Symptom-free mice were rechallenged with a second EAE-inducing inoculum but remained healthy, demonstrating the sustained effect of engineered Tregs. Conclusion CNS-targeting Tregs delivered i.n. localized to the CNS and efficiently suppressed ongoing inflammation leading to diminished disease symptoms.
Background
Multiple sclerosis (MS) is an autoimmune disease of the central nervous system (CNS) involving autoreactive T cells recognizing myelin epitopes. Activated T cells invade the CNS, recruit peripheral mononuclear phagocytes and cause demyelination in brain and spinal cord tissue, ultimately leading to impaired neuronal transmission [1]. T regulatory cells (Tregs) have the capacity to regulate ongoing immune reactions and are important in the control of autoimmunity [2,3]. Tregs exert their immunosuppressive functions via secretion of inhibitory cytokines, by interfering with the metabolism of T cells and/or in an undetermined contact-dependent manner. Furthermore, Tregs block T cell activation indirectly via their interaction with antigen-presenting cells (APCs), preventing APC maturation and consequently downregulating their expression of costimulatory molecules and their cytokine secretion [4,5]. Many studies have investigated the role of Tregs and their suppressive function in MS patients and, despite some contradicting results, likely due to the multiple definitions of Treg subclasses, it has been concluded that their suppressive activities are impaired during disease progression [6][7][8]. MS patients may therefore benefit from Treg cell therapy to restore this insufficient immunosuppressive capacity.
It has also been demonstrated that Tregs play a critical role in the protection and recovery of the animal model of MS, experimental autoimmune encephalomyelitis (EAE). Depletion of Tregs inhibits natural recovery from EAE whereas transfer of Tregs to recipient mice reduces disease severity [9,10]. Transfer of antigen-specific Tregs derived from TCR transgenic mice was more effective than polyclonal Tregs in controlling murine models of both autoimmune gastritis [11] and MS [12]. However, adequate numbers of antigen-specific Tregs are difficult to achieve for adoptive transfer. A chimeric antigen receptor (CAR) was used to redirect Tregs to a desired antigen in a colitis model [13,14]. Cultured Tregs may change their suppressive phenotype posttreatment, and this might be detrimental for patients. For example, Xu and coworkers have reported that Tregs in the absence of TGFβ can differentiate into Th17 cells, which are considered an integral cause of autoimmune manifestations in MS [15]. Since FoxP3 is fundamental for the differentiation and maintenance of Tregs, a stable expression of FoxP3 by genetic engineering may block Treg conversion into effector cells and thereby provide a safer option for patients. In a model of arthritis, FoxP3 was coexpressed with an antigen-specific TCR to achieve multiple stable targeting Tregs [16]. By co-expressing FoxP3 with a chimeric antigen receptor (CAR) [17] targeting myelin oligodendrocyte glycoprotein (MOG) in naive CD4 + T cells we can generate sufficient numbers of stable Tregs that localize to the CNS. The role of the CARαMOG receptor is to attach the Treg to the vicinity of MOG + oligodendrocytes to prevent immune attacks against these cells.
In the present study, engineered Tregs were analyzed for their suppressive function, capacity to localize to the CNS upon intranasal (i.n) or intraperitoneal (i.p) cell delivery, and for their therapeutic capacity in EAE symptomatic mice.
Antibody production, purification and immunohistochemistry
Hybridoma cell line 8-18 C5 [18] was cultured in RPMI 1640 medium supplemented with 10% fetal calf serum. Antibodies were purified using protein A affinity chromatography (HiTrap MabSelect, GE Healthcare, Little Chalfont, UK) following addition of 0.5 M trisodium citrate (Sigma-Aldrich Corp., St Louis, MO, USA) to the clarified supernatant. The column was washed with 500 mM sodium citrate pH 8.5 and the antibody fraction was eluted with 0.1 M glycine (Sigma-Aldrich) at pH 2.7. The eluate was neutralized using Tris-HCl (Sigma) at pH 8 and concentrated using a JumboSep ultrafiltration device and 10kD cutoff filter (Pall Gellman, WWR International, Stockholm). Specificity of the antibody was confirmed through Western blotting analyses of whole mouse myelin and recombinant MOG.
Chimeric antigen receptor (CAR) construct
The CARαMOG-FoxP3 vector ( Figure 1A) was constructed as follows: a single chain variable fragment (scFv) was cloned from hybridoma (8.18 C5) producing anti-rat myelin oligodendrocyte glycoprotein (MOG) antibodies. The scFv was linked via an antibody hinge region to the transmembrane and intracellular part of a CD3ζ chain, which was in turn fused to an intracellular CD28 domain. The murine FoxP3 gene was inserted into the construct and separated from the CAR gene by a 2A peptide (described in reference [19]). The final CARαMOG-FoxP3 construct was inserted into the lentivector pRRL-CMV (kind gift from R Houeben, Leiden University Medical Center, Netherlands). Lentiviruses (Lenti-CARαMOG-Foxp3 and Lenti-Mock, Lenti-GFP) were produced by co-transfecting 293FT cells with pLP1, pLP2 and pLP/VSVG (Invitrogen, Paisley, UK). Virus supernatants were harvested on days 2 and 3 and concentrated by ultracentrifugation. The amino acid sequence for the CARαMOG receptor is given in Additional file 1: Figure S1.
Genetic engineering of T cells
Murine naive CD4 cells were sorted using the MACS bead system (Miltenyi, Bergisch Gladbach, Germany) and prestimulated with an initial dose of 1 μg each of immobilized anti-CD3 and anti-CD28 antibodies (BD Biosciences, San Diego, CA, USA) as well as IL-2 (R&D Systems Inc., Minneapolis, MN, USA) for three days prior to viral transduction, since efficient viral gene transduction of T cells requires cycling cells. 50 μl of viral supernatant was added to 5 × 10^5 stimulated CD4+ T cells in 100 μl RPMI-1640 medium supplemented with 1% sodium pyruvate, 1% nonessential amino acids, 10% fetal bovine serum, 1% penicillin/streptomycin (all from Invitrogen, Paisley, Scotland) and 8 μg/ml Polybrene (Sigma-Aldrich Corp., Saint Louis, MO, USA). Cells were incubated for four hours at 37°C, 5% CO2, followed by addition of 300 μl of media (as above) supplemented with 100 U IL-2. The following day, the media was replaced with fresh media supplemented with 80 U IL-2. Cells were cultured for seven days with addition of 80 U of IL-2 every second day. Transduction efficiency was analyzed three to six days post-transduction. Transduced cells were incubated for 10 minutes at 4°C with a FITC-conjugated mAb specific for the IgG-kappa in the scFv (BD Biosciences, San Diego, CA, USA), washed with PBS and resuspended in 1% paraformaldehyde (PFA) in PBS. Samples were analyzed for surface expression of CAR or intracellular green fluorescent protein (GFP) expression using a FACSCanto (BD Biosciences, San Diego, CA, USA).
EAE induction and Treg cell administration
Female C57BL/6 mice were purchased from Taconic, Lille Skensved, Denmark. Mice were housed in the Department of Animal Resources facilities at Uppsala University and used at five to eight weeks of age. Studies were approved by the regional animal ethics committee in Uppsala (C28/10). EAE was induced by subcutaneous (s.c.) immunization in both hind and front limbs with 200 μg MOG35-55 peptide emulsified in complete Freund's adjuvant (CFA) (Difco Laboratories, Detroit, MI, USA) containing 5 mg/ml Mycobacterium tuberculosis. Pertussis toxin (100 ng i.p.) (Sigma-Aldrich Corp., Saint Louis, MO, USA) was given at the time of immunization and a second dose two days later. Disease severity was monitored according to the following scale: 0, no disease; 1, flaccid tail; 2, hind limb weakness; 3, hind limb paralysis; 4, fore limb weakness; 5, moribund. When the mean score value was 3 (usually at day 15), mice were treated using cell therapy. Cells (1 × 10^5 CAR- or Mock-transduced Tregs diluted in 10 μL PBS) or phosphate-buffered saline (PBS) were administered i.n. in 5 μL PBS per nostril using a plastic catheter connected to a pipette (polyethylene tube, Becton Dickinson, Franklin Lakes, NJ, USA) inserted 3 mm into both nostrils during anesthesia (0.05 to 0.1 mg ketamine-xylazine mixture/10 g body weight; ketamine 50 mg/ml, Pfizer AB, Sollentuna, Sweden; xylazine 20 mg/ml, Bayer AG Animal Health, Business Group, Leverkusen, Germany). For i.p. cell therapy, 1 × 10^5 cells (CAR- or Mock-transduced Tregs) diluted in 100 μL PBS were injected. Mice were sacrificed with gaseous CO2, and brains were excised and fixed either in ice-cold 4% phosphate-buffered formaldehyde (pH 7.4) or in isopentane with dry ice for paraffin-embedding or frozen-sectioning, respectively. Tissues embedded in low-melting paraffin after graded alcohol dehydration and xylene treatment were sectioned in the sagittal plane (4 μm) through the brain, mounted on gelatine-coated glass and used for immunohistochemistry.

Figure 1 (A) The CARαMOG-FoxP3 vector contains a scFv cloned from the 8.18 C5 hybridoma. The scFv is linked via an antibody hinge region to the transmembrane and intracellular part of a CD3 zeta chain. The zeta chain is further fused to an intracellular CD28 domain. The murine FoxP3 gene was inserted into the construct after a 2A peptide sequence. Upon translation, the whole expression cassette is translated into a CAR-FoxP3 fusion protein that is self-cleaved at the 2A site to produce the two separate proteins CARαMOG and FoxP3. (B) CARαMOG and FoxP3 are transported to the cell surface and nucleus, respectively. At the cell surface, CARαMOG can bind to MOG+ cells to attach the Treg to those cells and prevent immune attacks on MOG+ cells such as oligodendrocytes in the CNS. FoxP3 drives the Treg phenotype by regulating gene transcription in the nucleus.
Tissue localization of engineered Treg cells in naïve mice

1 × 10^4 GFP/CARαMOG-FoxP3-engineered CD4+ T cells diluted in 5 μl PBS, Mock-transduced Tregs or PBS were administered i.n. into the right nostril of naïve animals as described above. Horizontal cryosections of the brain (10 μm) were air-dried and kept at −80°C. Tissue sections at the same level were selected following a quick staining with toluidine blue. Sections washed in cold PBS were quenched with 0.3% H2O2 in methanol, blocked with 2.5% normal horse serum for one hour, and then stained with anti-GFP primary antibody (1:300) ab390 (Abcam, Cambridge, UK) overnight at 4°C. Thereafter, an Alexa Fluor 488 anti-rabbit (1:200) secondary antibody was applied. Specificity controls for immunostaining included sections stained in the absence of primary antibody and staining of sections from a vehicle-treated mouse not receiving GFP+ cells. For detection of DNA/nuclei, sections were overlain with Vectashield Mounting Medium containing 4′,6′-diamidino-2-phenylindole dihydrochloride (DAPI) (Vector Laboratories, Burlingame, CA, USA). Immunofluorescence images were captured using a Leica DMRBE fluorescence microscope, a digital camera (Nikon DXM 1200F, Nikon Corp., Tokyo, Japan) and Nikon ACT-1 version 2.62 software. All images were processed in Adobe Photoshop and Illustrator CS4, and green (GFP) and blue (DAPI) channel images were merged using Photoshop software.
Immunohistochemistry for nerve damage and repair
For myelin basic protein (MBP) and glial fibrillary acidic protein (GFAP) detection the avidin biotin complex (ABC) method and 3,3′ diaminobenzidine (DAB) as chromogen were used. Deparaffinized and rehydrated sagittal sections were rinsed with PBS and PBS-T. For GFAP antigen, demasking was performed in a microwave using 10 mM sodium citrate buffer. Endogenous peroxidase activity was blocked with 1 to 3% H 2 O 2 in PBS-T and nonspecific background staining was blocked with 4% BSA in PBS. Sections were incubated overnight with the primary antibodies (MBP 1:200 Abcam, Cambridge, UK; GFAP 1:400 Millipore, Billerica, MA. USA). After washing, the sections were incubated with a biotinylated secondary antibody and then with ABC complex (both from Vector Laboratories, Burlingame, CA, USA). Immunoreactions were visualized with DAB (Sigma-Aldrich Corp., St.Louis, MO, USA). Sections were counterstained with hematoxylin. Finally, the tissue sections were rinsed gradually through a graded alcohol series and finally in xylene, and mounted immediately after with Pertex (Histolab, Göteborg, Sweden). The tissue sections were analyzed using an Olympus microscope (Olympus, Tokyo, Japan) and images were captured using a digital camera as described above. Results were analyzed in a blinded mode scoring the level of staining as weak, moderate or strong. Digital images were collected at the same time using identical settings with respect to image exposure time and image compensation setting. Images were processed using Adobe Photoshop and Illustrator CS4.
Treg suppression assay
For in vitro suppression assays, 3 × 10^4 CARαMOG-FoxP3- or Mock-transduced CD4+ T cells irradiated at 25 Gy were mixed in different ratios with αCD3/IL-2-stimulated splenocytes derived from a naïve healthy mouse in a total volume of 200 μl/well. The RPMI-1640 medium was supplemented with 0.1% sodium pyruvate, 1% nonessential amino acids, 1% HEPES buffer, 1% β-mercaptoethanol, 10% fetal bovine serum and 1% penicillin/streptomycin (all from Invitrogen, Paisley, UK). Cells were seeded in 96-well round-bottom tissue culture-treated plates (Sarstedt, Newton, NC, USA) and incubated for 48 hours, after which 1 μCi of 3H-thymidine (PerkinElmer, Waltham, MA, USA) was added per well. Cells were incubated for an additional eight hours before being harvested onto filters. The incorporated 3H-thymidine was measured using a β-counter (PerkinElmer Life Science, Turku, Finland). In some experiments, 2.5 × 10^4 murine macrophages or 2.5 × 10^4 MOG+ cells were added to cultures with CARαMOG-FoxP3-transduced CD4+ T cells in a 1:1 ratio. Activated macrophages were obtained via plastic adherence of splenocytes. Monocytes were activated by 1 μg lipopolysaccharide (LPS) and matured during one week in RPMI-1640 medium supplemented with 1% sodium pyruvate, 1% nonessential amino acids, 10% fetal bovine serum and 1% penicillin/streptomycin. MOG+ cells were generated via lentiviral gene transfer of murine MOG into 293T cells. MOG expression was confirmed by histochemistry using αMOG antibodies (clone 8.18 C5) as described above.
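The incorporated 3H-thymidine counts are usually converted into a suppression measure. The sketch below shows one common convention, expressing proliferation of effectors co-cultured with Tregs relative to stimulated effectors alone; it is an illustrative assumption, not necessarily the formula used in this study.

```python
# Sketch of one common convention for expressing suppression from
# 3H-thymidine incorporation; inputs are hypothetical counter readings.
def percent_suppression(cpm_cocultured: float, cpm_stimulated_alone: float) -> float:
    """Reduction in effector proliferation relative to effectors alone."""
    return (1.0 - cpm_cocultured / cpm_stimulated_alone) * 100.0

print(percent_suppression(12000.0, 40000.0))  # 70.0 (% suppression)
```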
Quantitative PCR
Brain biopsies from EAE mice treated i.n. with CAR- or Mock-transduced Tregs or PBS, respectively, were treated with tissue lysis buffer ATL (Qiagen, Hilden, Germany) at 60°C for three hours, followed by DNA purification using the High Pure Viral Nucleic Acid kit (Roche, Basel, Switzerland). cDNA was obtained using the Superscript II Reverse Transcriptase kit (Invitrogen, Paisley, UK). Quantitative PCR was performed using a real-time system (iCycler, Bio-Rad Laboratories Inc., Hercules, CA, USA). The reaction was performed with SYBR green mix (Bio-Rad). Primer pairs for β-actin were designed as follows: forward 5′-TTCCTTCCCAGAGTTCTTCCAC, reverse 5′-CCAGGATGGCCCATCGGATAAG (Cybergene AB, Huddinge, Sweden). Primers to detect IL-12 and IFNγ were designed as described previously [19,20]. In order to correct for variable DNA content between samples, all copy numbers were normalized to β-actin. The mRNA copy number in 2 μL cDNA was evaluated in the experiments.
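The β-actin correction can be implemented as a simple rescaling of each sample's target copy number to a common housekeeping-gene level. A minimal sketch follows; the reference level and example copy numbers are arbitrary illustrative choices.

```python
# Sketch: rescaling each sample's target copy number (IL-12 or IFN-gamma)
# to a common beta-actin level; reference level and inputs are arbitrary.
def normalized_copies(target_copies: float, actin_copies: float,
                      actin_reference: float = 1.0e5) -> float:
    return target_copies * (actin_reference / actin_copies)

print(normalized_copies(target_copies=850.0, actin_copies=2.0e5))  # 425.0
```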
Statistics
Significant differences between groups were calculated using GraphPad Software (La Jolla, CA, USA). The method for each individual calculation is stated in the Figure Legends. *P < 0.05, **P < 0.01, ***P < 0.001.
Phenotype and function of engineered Tregs
Antibodies produced and purified from the 8.18 C5 hybridoma were tested for cross-reactivity to murine MOG. The selected MOG antibody detected murine MOG in the brains of naive mice (Figure 2A). A scFv from the 8.18 C5 hybridoma was generated, cloned into a murine CAR receptor and then inserted in tandem with murine FoxP3 into a lentiviral system to produce CARαMOG-FoxP3 viruses. Prior to stimulation and transduction, CD4+ T cells were sorted from splenocytes; sorted and preactivated naive CD4+ T cells were successfully transduced with the CARαMOG-FoxP3 lentivectors (CAR Tregs), and transduced and expanded cells remained CD4-positive. Post-gene transfer, the scFv of the CARαMOG receptor could be detected on the cell surface of approximately 10 to 15% of cells (Figure 2B), and the FoxP3 mRNA levels in the engineered cells were two-fold greater than those of Mock-transduced T cells (CD4 Mock), which include a population of naturally occurring Tregs (Figure 2C). Natural Tregs suppress activated T cells in a non-TCR-restricted manner by contact-dependent and -independent mechanisms. CAR Tregs suppressed polyclonally stimulated T cells (P < 0.05) at a 1:2 ratio (Figure 2D), demonstrating the gained suppressive function. The role of the CARαMOG receptor is to attach the Treg to the vicinity of MOG+ oligodendrocytes to prevent immune attacks against these cells (Figure 1B). To verify that the CAR Tregs retained their suppressive function upon binding to MOG+ cells, the suppression of polyclonally stimulated T cells was analyzed in cocultures with MOG+ cells. Figure 2D demonstrates that the CAR Tregs continued to suppress T cell proliferation in the presence of MOG+ cells. Further, activated murine macrophages may produce cytokines or other factors that block the function of the CAR Tregs. Activated macrophages are part of the MS pathology, and CAR Tregs were therefore cultured with such cells to determine whether they were still suppressive against T cells. In our assay, CAR Tregs were still able to suppress effector T cell proliferation in the presence of activated macrophages (Figure 2D; P < 0.05).

In vivo localization of Tregs to the brain

CAR Tregs co-expressing GFP and CARαMOG-FoxP3 were used to evaluate in vivo targeting upon i.n. cell delivery in naïve mice. The overall localization of GFP immunofluorescence is illustrated in the schematic drawing in Figure 3. The green fluorescence was mainly localized in clusters of cells in the granular layer and the external plexiform layer of the olfactory bulb (Figure 3B, C), in the lateral septal nucleus (Figure 3E), in the central medial thalamic nucleus (Figure 3F), in the ectorhinal cortex (Figure 3H), in the medial geniculate nucleus (Figure 3I) and in the Purkinje cell layer and white matter of the cerebellum (Figure 3K, L). In addition, green immunofluorescence was observed in the anterior olfactory nucleus and anterior orbital cortex (data not included). The green immunofluorescence was only observed in the soma and was preferentially present in the perinuclear region (Figure 3F, I). Although a unilateral dose of cells was given, immunofluorescence was found on both the ipsilateral and contralateral sides of the brain. In the vehicle control animal, no, or extremely weak, green background immunofluorescence could be detected (Figure 3A, D, G, J). The localization of immunofluorescence is summarized in Table 1.
CNS-targeting CAR Tregs suppress active EAE
At the peak of EAE inflammation, 1 × 10^5 CAR Tregs or Mock CD4+ T cells, or PBS alone, were administered i.n. or i.p. to 10 mice per group. EAE mice responded well to cell therapy regardless of administration route (Figure 4A to C). The EAE scores were initially reduced upon i.n. delivery of either CAR Tregs or Mock CD4+ T cells. From seven days post-treatment, only the CAR Treg group exhibited a continuous reduction of clinical disease symptoms, and at day 25 all mice (n = 10) were symptom-free (Figure 4A). At the same time point, only five mice (n = 5) in the Mock control group, which contained a normal proportion of naturally occurring Tregs (see Figure 2C), were classified as healthy; the remaining mice exhibited symptoms corresponding to a clinical score of 3. By day 30, the Mock control group also performed well and differed significantly from PBS controls (P < 0.05). At this time point, healthy mice from each group, except the PBS group in which no mice were cured, were re-challenged with an additional EAE-inducing inoculum using CFA and pertussis toxin. In the CD4+ Mock group, all mice developed EAE symptoms by day two. In the CAR Treg group, only one mouse developed weak EAE (score 1) (Figure 4B).
Examination of immunohistochemical markers for myelination (MBP) and reactive astrogliosis (GFAP) confirmed recovery in the CAR Treg group 15 days post-treatment (Figure 5). Reactive astrogliosis was evaluated in the olfactory bulb (Figure 5A-C), corpus callosum (Figure 5D-F), cerebellum (Figure 5G-I) and hippocampus (data not included). Increased GFAP staining was detected in both CAR Treg- and CD4+ Mock-treated EAE mice compared to PBS-treated EAE mice. The level of staining was higher in CAR Treg-treated EAE mice than in CD4+ Mock-treated EAE mice in all areas except the olfactory bulb, where the level of staining was lower (Figure 5B, C). Myelination was evaluated in the brain stem (Figure 5J-L), hippocampus (Figure 5M-O), cerebellum (Figure 5P-R), corpus callosum and olfactory bulb (data not included). MBP staining was slightly stronger in PBS-treated EAE mice than in CAR Treg-treated mice in the brain stem and cerebellum (Figure 5J and P), whereas staining was weak or absent in the other areas. MBP staining in the brains of CD4+ Mock-treated EAE mice was weaker than that in CAR Treg-treated mice.
In addition to the noted markers of recovery and myelination, the levels of Th1-associated cytokines were measured by quantitative PCR analysis of tissues from the same brains. These results revealed lower levels of the T cell-associated IFNγ mRNA in mice treated with CAR Tregs compared to control brain tissue from mice treated with PBS or Mock-transduced cells (Figure 6). IL-12, on the other hand, was only detected in PBS-treated mice, indicating that DC maturation may be compromised in both the CAR Treg and Mock groups.
Conclusions
Tregs are developed in the thymus (natural Tregs) or in the periphery in response to cytokines such as TGFβ. In autoimmunity, patients may have genetic variations leading to reduced numbers of Tregs or nonfunctional Tregs. They may also have deficits in other signalling pathways that affect Treg suppression mechanisms. In severe systemic autoimmunity, such as immune dysregulation polyendocrinopathy enteropathy X-linked (IPEX) syndrome, FoxP3 mutations block Treg differentiation. It is difficult to dissect whether natural or peripherally derived Tregs are the most important for controlling autoimmunity, but since natural Tregs commonly have affinity for self-antigens, these Tregs likely have a predominant role in blocking emerging autoimmunity [2,3,5]. In the current investigation, CD4+ T cells were modified utilizing a lentiviral vector system to express a chimeric antigen receptor (CAR) targeting myelin oligodendrocyte glycoprotein (MOG) in trans with the murine FoxP3 gene that drives Treg differentiation. In that sense, the gene-engineered Tregs are peripherally derived cells. However, they express FoxP3 and have a strong affinity to a self-antigen via the CAR receptor, so they are also similar to natural Tregs. The genetically engineered Tregs demonstrated suppressive capacity in vitro and reduced disease symptoms in mice with active EAE in vivo. The suppressive effects of Tregs in murine models of autoimmune pathology have provoked an interest in clinical translation. Transfer of Tregs into animals with autoimmunity provides protection but, to date, there are no records of clinical trials using adoptive transfer of Tregs in humans with autoimmune diseases. Tregs have, however, been used in the clinic to induce transplantation tolerance [21]. One problem with Tregs is sorting and expanding the population, since FoxP3, the most reliable marker, is only present intranuclearly. It is also a major challenge to produce antigen-specific Tregs for cell therapy, since that would require antigen stimulation for an extended time. However, Tregs can be generated from naive CD4+ T cells by gene transfer of FoxP3 [22]. Using retroviral gene transfer of murine FoxP3 into CD4+CD25− T cells, Chai and coworkers generated Tregs and used them in an animal model for transplantation with promising results [23], and this was later confirmed using a lentiviral vector system [24]. Genetically engineered and cultured Tregs have been evaluated for treatment efficacy in a wide variety of adoptive transfer models of autoimmune diseases [9,23,[25][26][27]. However, systemically delivered Tregs may result in recipient failure to respond to infectious disease, or they may not accumulate in sufficient amounts at the correct location. Mekala and co-workers redirected murine CD4+CD25+ Tregs by using a chimeric antigen-MHC/ζ receptor targeting myelin basic protein (MBP) in the EAE model. In their study, MBP-specific Tregs were able to suppress EAE inflammation [28]. Hombach and coworkers have redirected human CD4+CD25+ Tregs by using retroviral transfer of a recombinant anti-CEA immunoreceptor to target the inflamed intestine, with promising results [29]. In the present study, we have used chimeric antigen T cell receptors, so-called CARs, to direct the T cells to MOG present in the CNS [17]. The construct also contained murine FoxP3 to drive the transduced CD4+ T cells toward a stable Treg phenotype.

Table 1 Summary of GFP+ cell distribution in mouse naive brain.
The engineered Tregs expressed both CAR and FoxP3, and in assays testing their function they significantly decreased T cell proliferation even in the presence of LPS-stimulated macrophages, which are thought to take part in the transformation of Tregs into Th17 effector cells due to their production of activating cytokines [30,31]. Binding to MOG+ cells via their CAR did not change their suppressive function either, as indicated by co-culturing CAR Tregs, MOG+ cells and stimulated T cells. Activated macrophages are part of the pathology of MS, and the CAR Tregs were challenged with such cells in the suppression assays to exclude the possibility that they lose their function in the presence of macrophages. A somewhat decreased suppressive capacity was noted in these groups, but it did not reach significance.

Figure 4 (A) Ten mice in three groups were given 1 × 10^5 CAR Tregs, CD4+ Mock T cells or PBS alone by i.n. administration at the peak of EAE inflammation (15 days post-EAE immunization) and thereafter were monitored for EAE symptoms. Ten days post cell treatment, all EAE mice in the CAR Treg group were cured (P < 0.001). At end point (15 days post cell treatment), four out of ten EAE mice in the Mock-treated group still exhibited EAE symptoms. The experiment was repeated three times with similar results. (B) Symptom-free mice from each treatment group were given a second dose of EAE-inducing inoculum and monitored for EAE symptoms. CAR Treg-treated mice were able to resist EAE inflammation to a greater extent than CD4+ Mock-treated mice (P < 0.001). Pooled data from six EAE mice (three per group from two separate experiments) are shown in the figure; scores for individual mice are shown separately. (C) Ten EAE mice in three groups were administered 1 × 10^5 CAR Tregs, CD4+ Mock T cells or PBS alone by i.p. injection at the peak of EAE inflammation (day 15). At end point (day 15 post-treatment), all mice in the CAR Treg group were cured, but six out of ten mice in the Mock CD4+ T cell group still exhibited EAE symptoms. Statistics were analyzed with the Mann-Whitney test using GraphPad Prism software. *P < 0.05, **P < 0.01, ***P < 0.001.

Figure 5 Astrogliosis and remyelination after intranasal administration of CNS-targeting Tregs. Mice in three groups were given 1 × 10^5 CAR Tregs, CD4+ Mock T cells or PBS alone by intranasal administration at the peak of EAE inflammation (15 days post-EAE immunization). Fifteen days post cell treatment, the mice were killed and brain sections from each group (CAR Tregs, Mock CD4+ T cells and PBS) were analyzed for reactive astrogliosis using glial fibrillary acidic protein (GFAP) (A-I) and for myelination using myelin basic protein (MBP) (J-R). GFAP was evaluated in the olfactory bulb (A-C), corpus callosum (D-F), and cerebellum (G-I) of sagittal brain sections from PBS-, Mock CD4+ T cell- and CAR Treg-treated EAE mice. In the olfactory bulb of mice treated with Mock CD4+ T cells there was strong staining for GFAP, whereas this area in PBS-treated mice (A) and CAR Treg-treated mice (C) exhibited weak and moderate staining, respectively. For the corpus callosum and cerebellum, there is extremely weak staining in PBS-treated EAE mice, moderate staining in Mock CD4+ T cell-treated EAE mice and strong staining in CAR Treg-treated EAE mice. MBP was evaluated in the brain stem (J-L), hippocampus (M-O) and cerebellum (P-R) of sagittal brain sections from PBS-, Mock CD4+ T cell- and CAR Treg-treated EAE mice. In the brain stem and cerebellum of mice treated with CAR Tregs there was moderate staining for MBP (L, R), whereas the brain stem of Mock-treated mice (K, Q) and PBS-treated mice (J, P) exhibited weak and strong staining, respectively. For the hippocampus, there is extremely weak staining in PBS-treated EAE mice, moderate staining in Mock CD4+ T cell-treated EAE mice and strong staining in CAR Treg-treated EAE mice. Original magnification 10×.
GFP-expressing CAR Tregs were then used to track immunofluorescence in the brain 24 hours after i.n. cell administration. Analysis of GFP-positive immunofluorescence in the brains of naïve mice revealed clusters of fluorescent cells in various brain areas. Immunofluorescence was localized, for instance, in the olfactory bulb and the orbital and ectorhinal cortex, but also in the Purkinje cells and white matter of the cerebellum. The selective GFP immunofluorescence of the Purkinje cells, and/or of other cells that do not match the features of Tregs, was detected in the cerebellum and may be related to vesicular transfer and uptake of the GFP protein. Further investigations are needed to establish whether the observed immunofluorescence is due to GFP-expressing CAR Tregs, fusion with other cells or cell debris that has been taken up by other cells.
A previous study has described that myelin-specific Tregs accumulate in the CNS but fail to control autoimmune inflammation [32]. This depended on the resistance of local effector T cells to suppression, partly due to IL-6 and TNF production. However, McGeachy and coworkers described that transfer of low numbers of CD4 + CD25 + cells from the CNS of recovering mice before EAE reinduction reduces disease severity in recipients [33], demonstrating the potential of Treg therapy in EAE. Our present results clearly demonstrated that Tregs (engineered or not) could reduce disease symptoms in mice with active EAE upon both i.n. and i.p. delivery. Hence the effector cell resistance to Treg suppression demonstrated by Korn and coworkers [34] may not occur using engineered Tregs. However, the mock-transduced CD4 + T cells containing a mixture of natural FoxP3 + Tregs and naïve T cells did not completely cure EAE. When delivered i.n. mice treated with CD4 + Mock T cells could recover from EAE nearly as well as with CNS-targeted CAR Tregs, but the effect was not optimal, since only CAR Tregs could generate mice resistant to an additional EAE challenge 30 days post-treatment. If delivered i.p. the difference between targeted and nontargeted Tregs became more evident.
Immunohistochemical evaluation of recovery (GFAP) and myelination (MBP) of axons in the brain confirmed recovery and revealed decreased damage to axons in mouse brains treated with CAR Tregs compared to control groups. Furthermore, mice treated with CAR Tregs had reduced levels of effector cytokines (IL-12 and IFNγ) in brain tissue compared to both mice treated with crude T cells and PBS, thus indicating the different qualities of the targeted and non-targeted Tregs in suppressing inflammation.
A problem with adoptive transfer of Tregs is the inadequate number of cells reaching the target. Cell numbers decrease during migration, and the risk of therapeutic cells ending up in vital and/or reproductive organs must be taken into consideration. The olfactory pathways have been extensively investigated as a potential entry route for pharmaceutical drugs into the brain [35,36]. Recently, i.n. delivery has been examined as a potential route of administration for transplantation of cells into the brain, with the advantage of reducing the cell doses required for therapeutic efficacy while, at the same time, decreasing systemic exposure [32,37,38]. In this study, we further demonstrated that engineered GFP+ Treg cells carrying a MOG-targeting receptor can be delivered via the nostrils and access the brain.
Migration of engineered Tregs along the olfactory pathways to the brain may occur via extracellular channels formed by olfactory ensheathing cells surrounding the olfactory neurons, or via perivascular spaces from the nose to the brain. In addition, migration into the general blood circulation cannot be excluded, since the nasal mucosa is highly vascularized. A previous report demonstrated migration of cells from the nasal mucosa through the cribriform plate along the olfactory neural pathway into the brain and cerebrospinal fluid (CSF) following i.n. cell administration [32]. In that study, only a low number of cells were tracked up to one hour post-instillation. In the present study, we observed clusters of immunofluorescent cells in the brain 24 hours post-instillation in naïve mice. Because EAE mice treated

Figure 6 Decreased expression of effector cytokines in CNS-targeting CAR Treg-treated brain. Mice in three groups were given 1 × 10^5 CAR Tregs, CD4+ Mock T cells or PBS alone by i.n. administration at the peak of EAE inflammation (15 days post-EAE immunization). Fifteen days post cell treatment, brain biopsies from five EAE mice per group (CAR Tregs, Mock CD4+ T cells and PBS) were analyzed for expression of effector cytokines (IL-12 and IFNγ) by quantitative RT-PCR. Error bars represent standard error of the mean (SEM). | 2016-05-12T22:15:10.714Z | 2012-05-30T00:00:00.000 | {
"year": 2012,
"sha1": "014c0229d7b366e78ac4e644a4a468359fd29f08",
"oa_license": "CCBY",
"oa_url": "https://jneuroinflammation.biomedcentral.com/track/pdf/10.1186/1742-2094-9-112",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "014c0229d7b366e78ac4e644a4a468359fd29f08",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
209445795 | pes2o/s2orc | v3-fos-license | Nutrient germination improves DNA recovery from industrial Bacillus subtilis endospores during qPCR enumeration assays
Growth-independent microbial enumeration methods such as quantitative PCR require the efficient extraction of genomic DNA from targeted cells. Bacillus endospores are popular inclusions in commercial products due to their hardiness and metabolic dormancy; however, this hardiness is known to render Bacillus endospores resistant to traditional DNA isolation techniques. Metagenomic studies have sought to address this resistance through nutrient-based germination of bacterial endospores in environmental samples. In the present study, we sought to apply this technique to the enumeration of microbial products using an industrial strain of Bacillus subtilis as a model organism. Germination was induced through incubation of axenic spore suspensions in an AGFK-based rich medium. Total spore count, dipicolinic acid release and OD600 absorbance were monitored over time to track the progression of spore populations through the stages of germination and outgrowth. Aerobic plate counts and flow cytometry were used to monitor cell populations for proliferation during the incubation period. Finally, quantitative PCR with taxon-specific primers was used to examine DNA recovery as a function of time. Results show that customized germination protocols, once appropriately validated for the species and product matrix under consideration, can result in more efficient DNA extraction and thus lower limits of detection for qPCR assays targeting industrial Bacillus endospores in microbial products.
Introduction
Due to their metabolic dormancy and resistance to environmental and chemical stressors, Bacillus endospores are popular inclusions in microbially-based products (Cutting, 2011). Global labeling and regulatory mandates require the accurate enumeration of Bacillus endospores in such products, and thus they are routinely subjected to enumeration assays. Growth-based plate counting methods such as the aerobic plate count (APC) are the industry standard for enumerating microbial products; however, even under optimal conditions these assays tend to underestimate microbial concentration (Davis, 2014; Sutton, 2012) and standard iterations are sometimes inappropriate for the enumeration of Bacillus-based products (Gorsuch et al., 2019a). Growth-independent enumeration methods such as quantitative polymerase chain reaction (qPCR), digital polymerase chain reaction (dPCR) and flow cytometry (FC) address many of the APC assay's limitations and thus represent attractive alternatives to plate counting (Davis, 2014; Gorsuch et al., 2019b).
In the case of qPCR and dPCR, microbial enumeration cannot be conducted until genomic DNA is extracted and isolated from the cells of interest. This presents a challenge for Bacillus-based products, as the same hardiness which makes endospores so attractive to manufacturers also renders many of them resistant to traditional DNA isolation techniques (Lara-Reyna et al., 2000; Filippidou et al., 2015). In an interesting reversal of Staley and Konopka's "great plate count anomaly" (Staley and Konopka, 1985), the resistance of endospores to DNA extraction contributes to the underrepresentation of endospore-forming members of the phylum Firmicutes in metagenomic studies (Filippidou et al., 2015). In the presence of nutrient and non-nutrient germinants, an endospore will rapidly resume metabolic activity and thus lose its trademark resistance properties (Setlow, 2014). In order to facilitate endospore detection, the authors of some metagenomic studies have exploited the spores' germination response to render them more amenable to DNA extraction and subsequent detection (Lara-Reyna et al., 2000).
This approach is not without limitations for quantitative microbial detection assays. Upon the completion of germination, Bacillus cells undergo the process of outgrowth, in which the cell conducts protein synthesis, nucleotide synthesis and, eventually, DNA replication (Paidhungat and Setlow, 2002). In an assay where recovered DNA is used to quantify populations of Bacillus cells, the onset of DNA replication among an appreciable subset of the population could lead to misleadingly high DNA yields from a given quantity of cells. Therefore, it may be advantageous to target DNA extraction when a majority of the Bacillus population is early in the process of outgrowth, as the endospores have lost their trademark durability but have yet to engage in DNA replication.
In order to design a germination protocol which meets these criteria for populations of a given Bacillus species, such as the population found in a sample of commercial product, an intimate understanding of how germination unfolds in populations of the targeted organism is essential. Fortunately, though some gaps in our knowledge remain, the process of endospore germination is quite well understood and has been summarized eloquently in textbooks (Paidhungat and Setlow, 2002) and in review articles (Setlow, 2014). Common microbiological methods can be used to assess the progress of an endospore population through the stages of germination. Endospores lose their trademark heat stability early in the germination process, and growth-based Total Spore Count (TSC) assays use pasteurization prior to plate counting to quantify this shift within the population. Endospore cores also contain large reserves of pyridine-2,6-dicarboxylic acid, also known as dipicolinic acid or DPA, which is released upon germination (Setlow, 2014) and can be measured colorimetrically in solution (Janssen, 1958). Decreases in OD600 absorbance for a spore population can be monitored to track the excretion of spore components and core rehydration (Paidhungat and Setlow, 2002), as well as the shift in the refractive index of the endospores during germination (Zhang et al., 2015).
A germination protocol which renders the majority of an endospore population amenable to DNA extraction by inducing germination, while avoiding the onset of appreciable DNA replication within the cell population, may have the potential to improve DNA recovery from Bacillus-based products, resulting in lower limits of detection (LOD) for qPCR assays. In the present study, we characterized the onset and progression of germination in an axenic population of industrial Bacillus subtilis (BS) and monitored improvements in DNA recovery as germination proceeded using qPCR with taxon-specific primers. We assessed the release of DPA, shifts in OD600 absorbance, and the loss of heat stability as a function of time to track the progression of spore populations through the stages of germination. Cell concentration was monitored using APC and FC assays to rule out the onset of cell proliferation. Although nutrient germination protocols are likely to be highly specific to individual Bacillus strains and product matrices, nutrient germination may have the potential to bring the same type of benefit to quality control and regulatory enumeration assays that it has brought to metagenomic studies.
Preparation of Bacillus subtilis endospore suspensions
An industrial strain of Bacillus subtilis (BS) used in commercial products (United States patent US 10398156B2) was selected as a model organism for this study. Axenic BS endospore suspensions with concentrations of 1.0 × 10^10 CFU/mL were obtained from an industrial fermentation company. Spore suspensions were produced by the supplier using proprietary methods which were not made available to the authors. Prior to use in experiments, BS endospore suspensions were pelleted, rinsed and resuspended in sterile phosphate-buffered saline (PBS, Thomas Scientific, Swedesboro NJ) to remove any background DPA and to dilute germination inhibitors added by the supplier. Centrifugation at 5,000 RCF was carried out at ambient temperature. Supernatant was decanted, replaced with sterile PBS, and the cell pellet was resuspended by aspiration with a sterile serological pipette. Once the pellet was completely resuspended, the process was repeated. Rinsed and resuspended endospore suspensions were stored at 4 °C until needed.
Preparation of germination medium and batch reactor flasks
Customized germination medium was prepared which included the BS germinant mixture of L-asparagine, D-glucose, D-fructose and potassium (AGFK; Paidhungat and Setlow, 2002). Tryptic Soy Broth (TSB, Carolina Biological Supply, Burlington NC) containing D-glucose (2.5 g/L) and dipotassium phosphate (2.5 g/L) was augmented with L-asparagine (Millipore Sigma, Burlington MA) at 3.0 g/L and D-fructose (Millipore Sigma, Burlington MA) at 2.5 g/L. Medium was dispensed in 194 mL aliquots into 500 mL Erlenmeyer flasks, which were capped with aluminum foil and sterilized in an autoclave at 121 °C and 15 psi for 15 min. Flasks were allowed to equilibrate to room temperature prior to use. Sterile PBS was used as a negative control, and PBS flasks were prepared in the same manner as described for flasks of germination medium.
Dosing, incubation and sampling of batch reactor flasks
Flasks of germination medium and of PBS (n = 3 replicates per treatment) were dosed aseptically with 6 mL of the appropriate rinsed endospore suspension (prepared as described above) for a final flask volume of 200 mL. After the collection of 0-minute samples (described below), flasks were transferred to an incubator/shaker and held at 37 °C and 200 RPM for the duration of the sampling period. Batch reactor flasks were sampled at 15-minute intervals for 105 min. Incubation time was limited to 105 min to avoid the onset of detectable cell proliferation: preliminary experiments showed that detectable cell proliferation (defined as an increase of 20% in culturable cell populations using an APC assay) begins for this species under the tested conditions after 150 min of incubation (data not shown). Six separate samples were collected aseptically at each time point: 1.0 mL for the APC assay, 1.0 mL for the TSC assay, 1.0 mL for OD600 testing, 5.0 mL for the DPA assay, 1.0 mL for FC analysis and 1.0 mL for DNA extraction and qPCR. Samples not intended for immediate testing were immediately transferred to an ice bath at 4 °C, where they were held until processing.
Aerobic plate count assays
Samples of batch reactor flask medium (1.0 mL) were collected from each replicate flask (n = 3 per treatment) at 15-minute intervals over 105 min for APC assays, which were conducted as described in previous work (Gorsuch et al., 2019a) following a spread-plate technique. Culture medium was Tryptic Soy Agar (TSA) augmented with 0.075 g/L bile salts (Millipore Sigma, Burlington, MA) and 0.025 g/L Congo red dye (Carolina Biological Supply, Burlington NC). Serial dilutions were conducted beneath a laminar flow hood using sterile 0.1% peptone blanks as the diluent. Due to the reactor flasks' nominal cell concentration of 3.0 × 10^8 CFU/mL, APC assays had a targeted reading frame of 10^−5, 10^−6 and 10^−7. The 100 μL plating inoculum constituted the final tenfold dilution of each sample and was spread across the surface of each plate with a freshly autoclaved glass spreader. Plates were allowed to sit face-up to absorb the liquid inoculum for 15 min before being inverted and transferred to a plate incubator, where they were held at 37 °C for 24 h before counting.
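The spread-plate arithmetic behind these counts can be made explicit. The Python sketch below is not part of the original protocol; the function name, the conventional 25-250 countable range and the example colony counts are illustrative assumptions.

def cfu_per_ml(colony_counts, dilution, plated_volume_ml=0.1):
    # CFU/mL = mean colonies / (dilution x volume plated); only plates in
    # the conventional countable range of 25-250 colonies are used.
    countable = [c for c in colony_counts if 25 <= c <= 250]
    if not countable:
        raise ValueError("no plates in the countable range")
    mean_count = sum(countable) / len(countable)
    return mean_count / (dilution * plated_volume_ml)

# Example: 30 and 34 colonies on duplicate plates at the 10^-6 dilution with
# a 100 uL (0.1 mL) inoculum gives ~3.2 x 10^8 CFU/mL, close to the nominal
# reactor flask concentration.
print(cfu_per_ml([30, 34], dilution=1e-6))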
Total spore count assays
TSC assays were conducted in the same manner as described above for APC assays, with the exception that targeted dilution bottles were pasteurized at 80 °C for 15 min in a hot water bath before plating aliquots were collected and spread on agar plates.
Flow cytometry assays
FC analysis was conducted by Bioform Solutions (San Diego, CA) as described in previous research (Gorsuch et al., 2019b). Samples were stored in an ice bath prior to shipping and were shipped overnight packed in ice to the Bioform Solutions laboratory, where they were immediately refrigerated at 4 °C until testing. For each treatment, n = 3 reactor flasks were used, and replicate samples were tested in triplicate such that each time point for each replicate flask represents a geomean of three instrument readings.
Colorimetric determination of dipicolinic acid
Samples of batch reactor flask medium (5.0 mL) were collected from each replicate flask (n = 3 per treatment) at 15-minute intervals over 105 min for colorimetric determination of extracellular DPA following the protocol of Janssen (1958). Samples were spun in a centrifuge at 5,000 RCF at ambient temperature. Following centrifugation, supernatant was passed through a 0.2 μm syringe filter (Thomas Scientific, Swedesboro NJ), at which point 4.0 mL was collected using a serological pipette and dispensed into a 20 mL scintillation vial (Thomas Scientific, Swedesboro NJ) for colorimetric analysis. Color change was induced by adding 1.0 mL of freshly prepared Janssen's reagent (0.5 M acetate buffer containing 1% w/v each of Fe(NH4)2(SO4)2·6H2O and L-ascorbic acid, each sourced from Millipore Sigma, Burlington MA) and measuring OD440 absorbance with an Agilent Cary UV-Vis spectrophotometer (Agilent Technologies, Santa Clara CA). The instrument was blanked before OD440 readings using a solution of 4.0 mL sterile germination medium or PBS (as appropriate) and 1.0 mL Janssen's reagent. For each treatment, n = 3 reactor flasks were used, and replicate samples were read in triplicate such that each time point for each replicate flask was a geomean of three instrument readings. DPA concentration was calculated using a standard curve developed with serial dilutions of pyridine-2,6-dicarboxylic acid (Millipore Sigma, Burlington MA). The standard curve produced an equation of y = 0.0029x + 0.0677 with a correlation coefficient (R^2) of 0.9998.
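As a worked illustration of the standard curve, the sketch below inverts y = 0.0029x + 0.0677 to estimate DPA concentration from a blanked OD440 reading. It assumes y is OD440 and x is DPA concentration in the units of the standard dilutions, which the text does not specify; the example readings are invented.

# Inverting the reported standard curve (R^2 = 0.9998).
SLOPE, INTERCEPT = 0.0029, 0.0677

def dpa_from_od440(od440):
    return (od440 - INTERCEPT) / SLOPE

def geomean(readings):
    # Each reported value is a geometric mean of three instrument readings.
    product = 1.0
    for r in readings:
        product *= r
    return product ** (1.0 / len(readings))

# Example triplicate OD440 readings -> ~97 concentration units of DPA.
print(dpa_from_od440(geomean([0.35, 0.36, 0.34])))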
OD600 absorbance monitoring
Samples of batch reactor flask medium (1.0 mL) were collected from each replicate flask (n = 3 per treatment) at 15-minute intervals over 105 min for OD600 analysis. Samples were diluted tenfold in 9 mL of DI H2O to ensure that all readings had an absorbance of ≤0.2. Absorbance at OD600 was assessed using an Agilent Cary UV-Vis spectrophotometer (Agilent Technologies, Santa Clara CA) after the instrument was blanked using a tenfold dilution of the appropriate sterile medium in DI H2O. For each treatment, n = 3 reactor flasks were used, and replicate samples were read in triplicate such that each time point for each replicate flask was a geomean of three instrument readings. Each average value was then multiplied by 10 to account for the initial tenfold dilution of the sample.
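The per-time-point OD600 processing described above amounts to a geometric mean of the triplicate readings followed by a dilution correction; a minimal Python sketch follows, with invented readings for illustration.

import math

def od600_timepoint(readings, dilution_factor=10):
    # Geometric mean of the triplicate readings of the tenfold-diluted
    # sample, multiplied back up to recover the undiluted value.
    log_mean = sum(math.log(r) for r in readings) / len(readings)
    return math.exp(log_mean) * dilution_factor

# Example: readings of a 1:10 dilution -> undiluted OD600 of ~1.5.
print(od600_timepoint([0.151, 0.149, 0.150]))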
Genomic DNA extraction and qPCR
Samples of batch reactor flask medium (1.0 mL) were collected from each replicate flask (n = 3 per treatment) at 15-minute intervals over 105 min for genomic DNA extraction and qPCR analysis. Genomic DNA was isolated using a QIAGEN DNeasy PowerLyzer PowerSoil kit (QIAGEN, Inc., Germantown MD) following the manufacturer's instructions. Samples of purified genomic DNA were immediately subjected to qPCR following the protocol described in previous research (Gorsuch et al., 2019b). Taxon-specific BS probes (Life Technologies Corporation, Carlsbad CA) and primers (Eurofins Genomics LLC, Louisville KY) were designed by the Center for Applications in Biotechnology (California Polytechnic State University, San Luis Obispo CA) as detailed in previous research (Gorsuch et al., 2019b). The forward primer sequence was CCAACATATAAGACCTCTAC, the reverse primer sequence was TTATTTCATCCCATCCTGAC and the customized TaqMan probe sequence was CCCAACCAGCGATCCATAC. qPCR reactions were conducted using a Bio-Rad CFX 96 Deep Well C1000 Touch thermal cycler (Bio-Rad Laboratories, Hercules CA). Reaction conditions were 10 min at 95 °C followed by 40 cycles of 95 °C for 15 s, 55 °C for 30 s, and 60 °C for 60 s. Recovery of genomic DNA from batch reactor flask samples was assessed through a comparison of quantitation cycle (Cq) values reported by the thermal cycler.
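The paper compares raw Cq values; for illustration only, a drop in Cq can be converted into an approximate fold improvement in DNA recovery under the assumption of roughly 100% amplification efficiency, so that each Cq unit corresponds to a twofold difference in recovered template. That efficiency assumption, and the example numbers, are ours.

def fold_improvement(cq_before, cq_after, efficiency=1.0):
    # At 100% efficiency the amplification base is 2.0, so a decrease of
    # one Cq unit corresponds to roughly twice as much recovered template.
    amplification_base = 1.0 + efficiency
    return amplification_base ** (cq_before - cq_after)

# Example: a Cq decrease from 24.0 to 20.7 is ~a 10-fold gain in recovery,
# consistent with a ~one order of magnitude improvement in LOD.
print(fold_improvement(24.0, 20.7))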
Cell counting assays (APC, FC and TSC)
APC assays showed a static CFU count over the entire incubation period for both treatments, a trend confirmed by static total cell counts as measured by FC (Figure 1). TSC assays showed a static concentration of heat-stable endospores for the duration of the incubation period in control treatments; however, in reactor flasks of germination medium a steady decline in the concentration of heat-stable endospores began after 15 min of incubation and continued throughout the incubation period (Figure 1). Data points represent the geomean of three replicates per time point, and error bars represent three standard deviations above and below the geomean.
Spectrophotometric assays (DPA and OD600 absorbance)
OD600 absorbance values for BS endospore suspensions in PBS remained stable throughout the incubation period; however, in germination treatments OD600 values began to decrease after 15 min of incubation, with a maximum loss of OD600 absorbance occurring after 60 min of incubation (Figure 2). Colorimetric assays for extracellular DPA showed no detectable release of DPA in control treatments during the incubation period; however, in germination treatments an increase in extracellular DPA was detectable after 15 min of incubation, with maximum DPA release occurring at 90 min of incubation (Figure 2).
qPCR assays
Recovery of genomic DNA from batch reactor flask samples was static throughout the incubation period for PBS treatments, while germination treatments showed increased DNA recovery as a function of time before stabilizing at the 75-, 90- and 105-minute time points (Figure 3).
Discussion
Growth-independent enumeration methods such as qPCR are desirable alternatives to the industry-standard APC assay for the enumeration of Bacillus-based microbial products; however, the same durability which makes Bacillus endospores so appealing to manufacturers can also render them resistant to traditional DNA isolation techniques (Lara-Reyna et al., 2000; Filippidou et al., 2015). In the present study, we examined DNA recovery from a model industrial strain of Bacillus subtilis as a function of incubation time in germinant-free PBS and in an AGFK-based germination medium. Germination medium reactor flasks showed decreasing TSC counts, a decrease in OD600 absorbance (with maximum loss observed at 45 min, Figure 2b) and an increase in extracellular DPA (with release plateauing around 45 min, Figure 2a), data which compare favorably with expectations of an endospore population undergoing germination. PBS reactor flasks showed stable TSC counts and OD600 absorbance as well as undetectable levels of extracellular DPA for the duration of the incubation period (Figure 2), data which do not support the onset of germination. qPCR analysis showed that DNA recovery improved as a function of time and was more consistent across replicate flasks of germination medium relative to flasks of PBS (Figure 3), with maximum recovery achieved between 75 and 105 min. No increase in cell count was detected in either treatment by APC assay or by FC (Figure 1), ruling out the possibility that the improvements in DNA recovery observed between 75 and 105 min in flasks of germination medium were attributable to cell proliferation. It must be noted that none of the methods used here can empirically rule out the replication of genomic DNA as a lurking variable contributing to improved DNA recovery; however, our data suggest that this interpretation is unlikely. Data from germination reactor flasks show a maximum loss of OD600 absorbance and a maximum release of DPA occurring for BS endospore populations after 45 min of incubation (Figure 2), suggesting that a majority of BS endospores had not become metabolically active until this point. Continuous increases in OD600 absorbance after the occurrence of this minimum, paired with a lack of cell proliferation (Figure 1), suggest the ongoing transition of freshly germinated BS endospores into larger vegetative cells. Furthermore, nucleotide biosynthesis does not generally commence until 10-20 min into outgrowth (Paidhungat and Setlow, 2002). Assuming that a majority of the BS population entered outgrowth at 45 min of incubation, nucleotide biosynthesis would not be expected to commence until 55-65 min. As the largest decreases in Cq value for BS populations in germination medium occur between 0-15 min and between 60-75 min, respectively, the appreciable onset of genomic DNA replication seems an unlikely explanation for the trends observed here.
The impact of germination upon DNA recovery from BS endospores suggests that standard curves developed using a germination protocol could have limits of detection (LOD) nearly one order of magnitude lower than standard curves generated without a germination step. Such improvements may allow for the application of qPCR-based enumeration methods to low-activity microbial products. Concentrated endospore preparations such as the BS material described here are often blended into animal feed additives, and then further diluted when such additives are blended into animal feeds, presenting the risk of final Bacillus cell counts below the lower LOD of a qPCR assay. Improvements in DNA extraction efficiency may help to alleviate such concerns, allowing the application of PCR-based enumeration methods to a wider variety of microbial products.
For a variety of reasons, we do not propose that the germination medium and timeframe described here represent a broadly applicable approach for improving DNA extraction from industrial Bacillus endospores. Resistance to DNA extraction and the germination profiles of industrial Bacillus endospores are likely to vary considerably from strain to strain, and finished product matrices are likely to be equally diverse. This approach would also require extensive validation for mixed-species Bacillus assemblages, as compounds such as DPA released by a germinating endospore can act as non-nutrient germinants for neighboring spores (Paidhungat et al., 2001), raising the possibility that a strain's germination profile may differ when the organism is blended into a mixed-species assemblage. Therefore, it is likely that any germination-based protocol for improving DNA recovery will require extensive validation and optimization for the strain and matrix under consideration. However, our data show that an overall strategy of "germinate to enumerate" could join the well-known "germinate to exterminate" as a means for mitigating the trademark resistance properties of the bacterial endospore, and that such methods may be worthy of consideration during the development of next-generation enumeration methods for Bacillus-based products.
Author contribution statement
John P. Gorsuch: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Peyton Woodruff: Performed the experiments; Analyzed and interpreted the data.
Funding statement
This work was supported by BiOWiSH Technologies.
"year": 2019,
"sha1": "38c3791a3b9fcf66c7ac368b2dfef8ee4df0f603",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844019365764/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "604f29d07ad5d1c9fc7ae7f2b751faf21658fa62",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Photocatalytic removal of naphthalene (C10H8) from aqueous environments using sulfur and nitrogen doped titanium dioxide (TiO2-N-S) coated on glass microbullets in presence of sunlight
Background and aims: Due to their toxicity and carcinogenic effects, polycyclic aromatic hydrocarbons (PAHs) such as naphthalene (C10H8) are regarded as hazardous compounds for both humans and the environment, and it is essential to remove these contaminants from the environment. The present study aimed to remove naphthalene from a synthetic aqueous environment using sulfur and nitrogen doped titanium dioxide (TiO2-N-S) nanoparticles (NPs) immobilized on glass microbullets under sunlight. Methods: In this experimental study, TiO2-N-S NPs were synthesized using the sol-gel process. The structure of the NPs was investigated using X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray (EDX) spectroscopy, and diffuse reflectance spectroscopy (DRS). In addition, the effects of parameters such as the initial concentration of naphthalene, pH, and contact time on naphthalene removal were investigated statistically, and the optimal conditions were determined. Results: XRD patterns and SEM images of the samples confirmed the nanometer size of the synthesized particles. The EDX and DRS spectra showed the presence of the two dopant elements (sulfur and nitrogen) and photocatalytic activity in the visible region, respectively. The maximum naphthalene removal in the presence of sunlight was about 93.55%, obtained using 0.25 g of thiourea in the photocatalyst immobilized on glass microbullets, at pH 5 and a contact time of 90 minutes. Conclusion: The rate of naphthalene removal using TiO2-N-S immobilized on glass microbullets was 93.55% under optimal conditions. Therefore, this method has effective potential for naphthalene removal and can be used to remove naphthalene from industrial wastewater.
Introduction
Water is a vital substance for human life, and all people are aware of the importance of access to safe drinking water. Among water pollutants are polycyclic aromatic hydrocarbons (PAHs). These compounds comprise a large group of organic compounds with two or more aromatic rings that enter the environment through natural and industrial activities (1). Due to their toxicity, mutagenicity, and carcinogenicity, these compounds have been ranked among the leading pollutants by the US Environmental Protection Agency (EPA); the agency has declared a concentration of 0.5 mg/L of phenolic compounds in industrial effluent as the permissible amount, and lists even lower concentrations of these compounds as the permissible limits for aquatic and soil organisms (2). Naphthalene (C10H8) is the simplest member of this group of hydrocarbons and, as a common contaminant in water, has attracted the attention of many researchers in recent years. Naphthalene is able to accumulate in water and soil for a long time (3). Exposure to naphthalene at high levels can cause hemolytic anemia and red blood cell breakdown (4), genetic mutations, fetal damage, congenital disorders, and kidney damage (5). Due to the extremely hazardous effects of naphthalene on human health, it is necessary to evaluate the likelihood of water pollution with this compound (6). Aerobic and anaerobic biological treatment, chemical oxidation, electrochemical oxidation, photochemical oxidation, coagulation, ion exchange, filtration, adsorption onto activated carbon, flotation, biological degradation, and ozonation are important methods for the treatment of pollutants (7). In practice, there are limitations to the application of these methods, including the need for specific reactants, high cost, failure to remove small amounts of pollutants, production of by-products, and sludge production. Therefore, in recent decades, researchers in this field have prioritized the use of new and more appropriate methods of wastewater treatment to maximize compliance with environmental regulations. These methods include those based on chemical degradation, namely, advanced oxidation processes (AOPs) (8). AOPs involve the effect of ultraviolet (UV) light on materials and the production of intermediates or active agents with oxidizing or reducing properties. AOPs themselves are subdivided into several large groups, the most important of which are photocatalytic oxidation processes.
Photocatalytic removal is carried out by UV radiation reaching the surface of a semiconductor such as ZnO or TiO2, and oxidation is accomplished through the activity of hydroxyl radicals, which are highly reactive species (9). In the anatase crystal form, heterogeneous photocatalysts such as TiO2 are the most popular photocatalysts due to their environmental friendliness, high optical activity, low cost, low toxicity, and high chemical and thermal stability (10). Over the past 30 years, TiO2 nanoparticles (NPs) have been widely used as powders, but due to certain problems, such as the need for continuous mixing during operation, the high cost of filtration and centrifugation for powder recovery, the dispersion of NPs in solution, and light blocking, efforts to immobilize photocatalysts on supports such as glass pellets, glass fibers (9,11), silica (12), activated carbon, zeolites and nanotubes are increasing. Optical degradation of pollutants using a thin film fixed on a stationary bed has two major benefits: avoidance of the high cost of catalyst separation (10,13,14) and the absence of the hazardous by-products that can form when advanced oxidation is combined with other oxidants such as halogens (15). Besides, the wide band gap of TiO2 makes it an efficient photocatalyst only in the UV region. However, sunlight naturally contains only about 4% UV, and the photocatalytic removal of naphthalene from aqueous solutions using UV lamps increases the cost of purification and consumption, with practical limitations such as the low lifetime of UV lamps and the consumption of electricity; UV lamps themselves are also considered a serious threat to the environment (10). Hence, methods such as doping with noble metals, metal ions and anions (C, N, S, F) are used to modify this inherent property of TiO2 and produce a new photocatalyst capable of retaining photocatalytic activity under visible sunlight (16,17). Muruganandham et al and Li et al doped N and La into the TiO2 lattice to reduce the energy gap and increase the optical activity of TiO2 in the visible light range (18,19). In this study, titanium dioxide (TiO2) was first synthesized using the sol-gel method, which is an ideal method for preparing homogeneous, high-purity products in the production of metal oxides. Moreover, the absorption of TiO2 was shifted into the visible range by doping with nitrogen (N) and sulfur (S). Furthermore, with TiO2-N-S immobilized on a bed of glass microbullets, the removal of naphthalene at concentrations of 5, 10, 15, 20, 25, and 40 ppm in the presence of sunlight was investigated, taking into account the amount of total daily radiation and the optimal installation slope of the pilot setup.
Photocatalyst synthesis
All chemicals were procured from Merck Co. (Germany), deionized (DI) distilled water from Zolal Company (Iran), and glass microbullets from Glass Seeds Company (Iran). In this study, TiO2-N-S NPs were first synthesized by the sol-gel method (20). TiO2 sol was prepared by hydrolysis of tetrabutyl orthotitanate (TBOT) in acidic solution. For this purpose, 2.5 mL of TBOT, 10 mL of ethanol, and 2.5 mL of acetylacetone were first mixed. After 30 minutes, a clear yellow solution was obtained. Then, 2 mL of deionized water was added to the solution, and the resulting solution was stirred for 10 minutes on the stirrer. Concentrated hydrochloric acid and sodium hydroxide were used to adjust the pH of the sol to approximately 1.8. Next, 0.25 g of thiourea, as a source of nitrogen and sulfur, was added to the synthesized sol. The addition of thiourea results in the transfer of TiO2 photocatalytic activity to the visible region. After 2 hours, a stable yellow sol was obtained. In order to immobilize the prepared sol, 40 g of glass microbullets with diameters of 450 to 600 microns were used in each round of synthesis. The glass microbullets were first rinsed with chloroform solution, then washed several times with deionized water and dried in an oven at 105 °C for 1 hour. The glass microbullets were then coated by immersion in the sol solution for 10 minutes on a magnetic stirrer. They were then left in an oven at 60 °C for 4 hours to evaporate the ethanol. Finally, the film fixed on the microbullets was calcined in a furnace at 500 °C for 1 hour and left in an ultrasonic bath for 15 minutes to eliminate possible contamination. One of the important considerations for heterogeneous photocatalysts is the selection of an appropriate bed for the immobilization of the TiO2 NPs. Important criteria for selecting an appropriate bed include a high surface area, strong bonding between the bed and the photocatalyst, stability of the bed over a long period of time, no change in catalyst activity during and after removal of contamination, and stability under irradiation with resistance to the radicals produced (21).
Investigation of photocatalyst properties
The band gap of the samples was obtained using UV-visible spectroscopy (Avaspec 2048 Tech, AVANTES, USA) according to equation (1),

Eg = 1240/λ (1)

where λ (nm) represents the wavelength of the absorption edge in the spectrum and Eg (eV) the band gap energy.
Moreover, the average crystallite size was calculated from the Scherrer equation (2),

S = kλ/(β cos θ) (2)

where S represents the average crystal dimension, k the crystal particle shape constant (0.89), θ the diffraction angle at the maximum peak, λ the X-ray wavelength, and β the peak width at half of its maximum intensity (22).
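For illustration, the two relations can be evaluated numerically as in the Python sketch below. The Cu K-alpha wavelength and the FWHM value in the Scherrer example are our assumptions, chosen to reproduce the ~11.5 nm crystallite size reported later; they are not values stated in the text.

import math

def band_gap_ev(edge_nm):
    # Equation (1): Eg = 1240/lambda, lambda in nm, Eg in eV.
    return 1240.0 / edge_nm

def scherrer_size_nm(wavelength_nm, beta_rad, two_theta_deg, k=0.89):
    # Equation (2): S = k*lambda / (beta * cos(theta)), theta = 2theta/2.
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta_rad * math.cos(theta))

print(band_gap_ev(378))  # pure TiO2 absorption edge -> ~3.28 eV
print(band_gap_ev(416))  # TiO2-N-S absorption edge  -> ~2.98 eV
# Assumed Cu K-alpha (0.15406 nm) and an assumed FWHM of 0.0122 rad at the
# anatase (101) peak (2-theta = 25.35 deg) -> ~11.5 nm crystallites.
print(scherrer_size_nm(0.15406, 0.0122, 25.35))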
The surface morphology of the TiO2 film was investigated by scanning electron microscopy (SEM, Tescan MIRA3, Tescan Co., Czech Republic) equipped with energy-dispersive X-ray (EDX) spectroscopy for elemental analysis.
Examination of photocatalytic optical activity
Optical degradation of naphthalene by the synthesized photocatalyst was investigated under visible light and sunlight. The experiments were performed using the photocatalyst synthesized with 0.25 g of thiourea, coated on 18 g of glass microbullets. The glass microbullets were distributed in four quartz glass tubes with a diameter of 8 mm and a height of 200 mm, placed on a flat mirror. Figure 1 illustrates the reactor used for the photocatalytic experiments under sunlight. The volume of the tested naphthalene solution in each run was 500 mL, circulated at a flow rate of 20 mL/min using a peristaltic pump in a closed system.
Photocatalytic experiments were carried out with initial naphthalene concentrations of 5, 10, 15, 20, 25, and 40 mg/L. The best naphthalene removal efficiency was obtained at pH 5. Residual concentrations of naphthalene were measured at the absorption maximum (λmax = 276 nm) with a spectrophotometer (PerkinElmer UV-Vis, USA), and naphthalene degradation efficiency was calculated using equation (3),

Removal efficiency (%) = ((C0 − Ct)/C0) × 100 (3)

where C0 represents the initial concentration and Ct the residual naphthalene concentration in solution.
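A worked example of equation (3) is given below, assuming the concentrations have already been obtained from the OD276 readings (the calibration step is not detailed in the text); the residual value is chosen for illustration.

def removal_efficiency(c0, ct):
    # Equation (3): percentage of naphthalene removed from solution.
    return (c0 - ct) / c0 * 100.0

# Example: 40 mg/L initial, 2.58 mg/L residual -> ~93.55% removal, which
# reproduces the maximum removal reported in the abstract.
print(removal_efficiency(40.0, 2.58))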
Statistical analysis
Data analysis was performed in SPSS version 18 (SPSS Inc., Chicago, IL). The statistical significance of intergroup differences was investigated by one-way ANOVA, and Dunnett's post hoc test was used to determine which time points differed from the others. Finally, Tukey's test was used to compare the experimental groups. The results were expressed as mean ± standard deviation, and P < 0.05 was considered statistically significant.
Synthesis, morphology, and structural properties of photocatalysts
The phase composition of the prepared TiO2-N-S NPs was determined by XRD. In order to investigate the formation of the anatase phase in the synthesized thin layers, the analysis was performed on the samples (23). The positions of the peaks in the XRD pattern of the anatase phase were assigned according to the Joint Committee on Powder Diffraction Standards (JCPDS) Card No. 0447-04 (23). Figure 2 illustrates the XRD patterns of the thin nanolayers of pure TiO2 and thiourea-doped TiO2. The peak at 2θ = 25.4° is the main peak of the anatase phase (24), while the main peaks of rutile and brookite, at 27.4° and 30.8° respectively, were not observed in any of the layers (25). Therefore, all layers contain the anatase phase and no rutile or brookite phases. In Figure 2d, the anatase phase of the powder sample was observed at the (101) peak (2θ = 25.346°), and the particle size was calculated to be 14.5 nm. In Figure 2c, the anatase phase of the TiO2-N-S powder sample was observed at the (101) peak (2θ = 25.348°), and the particle size was calculated to be 11.5 nm. The size of the NPs coated on the glass microbullets, according to Figure 2b, was calculated to be the same as that of the powder sample, 11.5 nm.
An example of an uncoated glass microbullet, for comparison of phase formation, is illustrated in Figure 2a; no peaks appear on it, confirming that the NPs fixed on the glass microbullet bed have an anatase phase structure with photocatalytic properties. Figure 3 illustrates the SEM images obtained from the NPs of the TiO2 thin layer and the TiO2-N-S coating on glass microbullets. The samples were examined in terms of the appearance and morphology of the sample surface. Figures 3a and 3c illustrate that the TiO2 and TiO2-N-S NPs are distributed as nanoblocks with diameters of 15.58 and 11.8 nm, respectively, on a uniform surface. Figures 3b and 3d illustrate the thickness of the TiO2 thin layer and the TiO2-N-S coating on glass microbullets, respectively. Accordingly, the thicknesses of the thin layers of pure TiO2 and TiO2-N-S are 809.93 and 693.68 nm, respectively. Figure 4 illustrates the energy-dispersive X-ray (EDX) analysis of TiO2 and TiO2-N-S on glass microbullets. In the doped sample, due to the presence of thiourea, the two elements sulfur and nitrogen are observed in the film. The EDX analysis for the thin TiO2 layer coating the glass microbullets is illustrated in Figure 4a, and the EDX analysis for the TiO2-N-S thin layer coated on glass microbullets is illustrated in Figure 4b. The elemental and atomic analysis of the thin TiO2 layer coated on the glass microbullets reflects the addition of thiourea (CH4N2S) to the orthotitanate sol solution: the percentages of the sulfur and nitrogen elements were obtained as 6.58% and 0.57% of the weight of the studied samples, respectively.
The diffuse reflectance spectroscopy (DRS) UV-vis spectra of pure TiO2 and of TiO2-N-S doped with sulfur and nitrogen are illustrated in Figure 5. As can be seen, the absorption of the TiO2-N-S spectrum in the visible region is stronger than that of pure TiO2. To estimate the optical band gap energy of the nanostructures, the Tauc plot (based on the Kubelka-Munk model) was used (26). The absorption edge obtained for the TiO2 thin layer was 378 nm and for the TiO2-N-S thin layer 416 nm, equivalent to band gap energies of 3.28 eV for TiO2 and 2.98 eV for the TiO2-N-S thin layer, respectively.
Removal of naphthalene with synthetic photocatalyst
The effect of pH on naphthalene degradation
The pH of the environment has different effects on the rate of oxidation reactions. One of the most important determinants of removal efficiency is the electric charge of the photocatalyst surface. The pH of zero charge (pHpzc) indicates the pH at which the catalyst surface has zero or neutral electric charge (27). Figure 6 reports the point of zero charge (PZC) of TiO2 at pH 6.25. Therefore, the TiO2 surface will have a positive charge under acidic conditions, while under weakly alkaline conditions, i.e. pH > 6.25, the surface charge of the TiO2 particles will be negative (18). Zhao et al reported that a positive charge on the TiO2 surface at pH less than 6 caused better migration of light-generated electrons and prevented the recombination of electrons and holes, thus increasing the efficiency of the photocatalytic process (28). To determine the optimal pH, solutions with a constant initial concentration of 5 ppm of naphthalene were first prepared at pH 3, 5, 7, 9, and 10, and the naphthalene removal efficiency was then investigated in the presence of the TiO2-N-S catalyst. Sampling was performed at intervals of 0, 30, 60, 90, 120, 150, 180, 210, and 240 minutes, and absorbance was read according to the standard method at a wavelength of 276 nm with a spectrophotometer; the percentage of naphthalene removal was calculated using equation (3). According to the results shown in Table 1, the highest naphthalene removal efficiency, 79.33 ± 0.41%, was obtained at pH 5 and 90 minutes, with a statistically significant difference from the other pH values during this period (P < 0.05).
Investigating the Effect of Naphthalene Initial Concentration
After determining the optimal pH, naphthalene solutions with concentrations of 5, 10, 15, 20, 25, and 40 ppm were prepared at a constant pH of 5, and each concentration was tested using the TiO2-N-S photocatalyst under sunlight. Table 2 shows the optical degradation of naphthalene at different initial concentrations in the presence of the TiO2-N-S photocatalyst under sunlight. Accordingly, the removal efficiency of naphthalene increased with increasing concentration and contact time; this increasing trend was observed at all concentrations up to 90 minutes of contact time, and no significant changes in naphthalene removal efficiency were observed after 90 minutes.
Investigating the Effect of Solar Radiation on the Optical Removal of Naphthalene
In order to investigate the effect of total solar radiation (TSR) on naphthalene removal, data on total and instantaneous radiation from the Shahrekord synoptic meteorological station, located at a distance of 3640.77 m from the pilot installation, were examined. Because the optimal conditions for removing naphthalene were obtained within 90 minutes, the total and instantaneous radiation during the 90 minutes of contact of the naphthalene solution with the photocatalyst were extracted from the Shahrekord meteorological solar radiation records (Table 3). Accordingly, the efficiency of naphthalene removal increased with increasing radiation. According to the isotropic model (29), the best slopes for installing the panels containing mirrors and glass tubes against sunlight in spring, summer, autumn, and winter are 2.66°, 11.33°, 53.63°, and 48.33°, respectively. In general, the optimal annual installation angle is 29 degrees, which is close to the latitude of Shahrekord, with a difference of less than 10% (30). In this study, the panel containing the glass tubes was installed at an angle of 53.6 degrees to the horizon.
Intermediate compounds resulting from naphthalene decomposition
To identify the intermediate compounds formed during the photocatalytic degradation of naphthalene with the TiO2-N-S photocatalyst, gas chromatography-mass spectrometry (GC-MS) was used, with an Agilent Technologies 7890A gas chromatograph and an Agilent Technologies 5975C mass spectrometer (USA). Based on the retention times as well as the instrument's library for mass spectrum interpretation, the intermediate compounds include phthalic acid, 2-formylcinnamaldehyde, 2-carboxycinnamaldehyde, and ethanoic acid. The proposed mechanism for naphthalene removal is oxidation in the presence of OH and HO2 radicals. It should be noted that previous studies have also reported intermediate compounds resulting from the photocatalytic degradation of naphthalene and proposed degradation mechanisms, in satisfactory agreement with the results of this study (31,32). The GC-MS results for the solution after irradiation at the 120th minute indicate the removal of the intermediates and, finally, the mineralization of the intermediate organic compounds and their conversion to CO2 gas and H2O.
Discussion
In this study, the TiO2-N-S photocatalyst was synthesized and immobilized on glass microbullets using the sol-gel method to remove naphthalene from aqueous media. The morphological and structural characterization of the photocatalyst by XRD, SEM, EDX, and DRS analyses showed that the photocatalyst was well synthesized and firmly fixed on the support surface (glass microbullets). According to Figures 2c and 2b, the XRD analysis of the samples showed the anatase phase in both the powder samples and the samples fixed on the glass microbullet bed, and the diameter of the NPs, both immobilized on the surface of the microbullets and in the very fine and homogeneous powder, was 11.5 nm. Photocatalysts in the anatase phase exhibit optical activity and catalytic properties under UV radiation (25).
Table 1 notes: Data are presented as mean removal efficiency ± standard deviation. * Significant difference among the five pH groups within each time group (P < 0.05). ** Significant difference among the five pH groups and the eight time groups (P < 0.05). Table 2 notes: Data are presented as mean removal efficiency ± standard deviation. Among the six concentration groups and 11 time groups, all groups differed significantly from the control (P < 0.001). * Significant difference among the six concentration groups within each time group. ** Significant difference among the six concentration groups and the 11 time groups (P < 0.05).

According to the research by Zhuang et al, an increase in thiourea leads to a higher accumulation of nanometer-sized particles in some areas (33), which is clearly seen in the SEM images of the samples (Figures 3a and 3c). In general, nanometer-sized photocatalyst particles tend to agglomerate due to the van der Waals forces between the particle surfaces (34). In the studies by Brindha et al and Sathish et al (34,35), the diameter of TiO2-N-S NPs calcined in the temperature range of 500 °C was reported to be 15 nm, which is close to the 14.1 nm of pure TiO2 NPs. According to the EDX and elemental analysis of TiO2-N-S in Figure 4b, using thiourea in the catalyst structure yields weight percentages of the sulfur and nitrogen elements of 6.85% and 0.57%, respectively. Moreover, in comparison with the case where thiourea is not used (Figure 4a), i.e. pure TiO2, more titanium is observed along with the sulfur and nitrogen elements. Doping TiO2 with the non-metals sulfur and nitrogen, in accordance with the views of Zhuang et al and Hong et al, has a significant effect on the removal of organic and mineral pollutants from aquatic environments (33,36).
Comparing the DRS spectrum of pure TiO2 (Figure 5a) with that of TiO2 doped with sulfur and nitrogen (Figure 5b), the addition of the two non-metallic elements sulfur and nitrogen to the crystal structure of the TiO2 powder leads to a narrower energy gap and a shift of the photocatalytic activity of TiO2 into the visible region (37). According to researchers such as Zhang et al and Masoudipour et al, this narrowing of the energy gap arises from mixing of the non-metal p orbitals with the O 2p orbitals of TiO2 (38,39). Shifu et al used ammonia gas to dope nitrogen into a TiO2 photocatalyst and immobilized it on the surface of hollow glass beads. They reported that the addition of nitrogen not only did not adversely affect the transformation of the TiO2 phase from rutile to anatase, but also shifted the TiO2 absorption by about 60 nm toward the visible region (40). In the present study, the absorption edge was shifted by 38 nm toward the visible region, which is in agreement with the results of Brindha et al and Rokhmat et al (34,41).
The results of this study showed that the rate of naphthalene removal is strongly dependent on the pH and the contact time of the solution, such that the naphthalene removal efficiency increases with decreasing pH. According to the results in Table 1, at a concentration of 5 ppm and a contact time of 90 minutes, the removal efficiency increased from 7.28 ± 2.69% at pH 10 to 79.31 ± 0.41% at pH 5, a statistically significant difference among the five pH groups and eight time groups (P < 0.05). Avisar et al investigated the effect of solution pH on the simultaneous purification and elimination of sulfamethoxazole (SMX), oxytetracycline (OTC), and ciprofloxacin (CIP) in the presence of UV radiation. Their results showed that when these compounds were examined separately, increasing the pH of the solution from 5 to 7 led to a decrease in the SMX degradation rate and an increase in the OTC and CIP degradation rates; when the compounds were used as a mixture, the best result was obtained at pH 5, with the SMX degradation rate reaching 99% and the OTC and CIP degradation rates increasing from 54% and 26% at pH 7 to 91% and 96% at pH 5, respectively, which is consistent with the results of our study (42). Therefore, the pH of the solution plays a key role in the removal of aromatic compounds in aqueous environments. Moreover, as Table 3 shows, starting from an initial pH of approximately 5, the pH of the naphthalene solutions increased over the 90 minutes of contact with the photocatalyst and reached a maximum of 5.765; according to Figure 6, the surface charge of the photocatalyst in this range is positive, so better electron migration takes place on the photocatalyst surface and the degradation power toward naphthalene increases (18). The initial concentration of the solution was another important factor in the naphthalene removal efficiency. According to Table 2, the statistical results showed that increasing the concentration of the naphthalene solution strongly affects the TiO2-N-S photocatalytic removal process, and the maximum naphthalene removal efficiency was obtained under acidic conditions (pH 5) at a naphthalene concentration of 40 ppm and a contact time of 90 minutes. Based on the results, the removal efficiency of naphthalene with TiO2-N-S NPs increased with increasing contact time and decreasing pH. Although naphthalene removal increases with time up to 90 minutes, the removal power of the photocatalyst decreases with further contact time, and after 90 minutes the changes in removal are very small. Although naphthalene degradation is based on oxidation, naphthalene must first reach the surface of the photocatalyst for oxidation to occur; the decrease in naphthalene removal efficiency with increasing contact time can therefore be attributed to saturation of the adsorption sites on the NP surface (43).
Karimi et al, in a study of the advanced oxidation removal of naphthalene from aqueous media by the H2O2/UV/TiO2 process, found that using 20 ppm H2O2 at pH 3 and a radiation intensity of 5.6 W/cm2 at a wavelength of 254 nm, conditions for the mineralization of naphthalene at a concentration of 15 ppm were achieved after 100 minutes of contact time, with a reported removal efficiency of 73% (44). In the current study, the removal efficiency of naphthalene at the same concentration reached 73% after 64.6 minutes of contact with the photocatalyst, which demonstrates the high power of this photocatalyst in removing naphthalene without using disposable UV lamps or the oxidizing compound H2O2. Over a fixed period, the naphthalene removal efficiency increases with increasing solution concentration; when the solution concentration was increased from 5 to 40 ppm at a contact time of 90 minutes, the removal efficiency increased from 72.5% to 93.26%. This is due to the increased driving force of the concentration gradient at higher naphthalene concentrations. At lower concentrations, the ratio of the initial number of naphthalene moles to the available adsorption sites is low, and as a result some of the initial adsorption is independent of the initial concentration (45).
Muthukumar et al coated Fe-ZnO NPs on the Amaranthus dubius plant, synthesized the catalyst (Fe-ZnO-NP), and used it under UV radiation to remove naphthalene. The optimal conditions for naphthalene removal were reported as an initial concentration of 40 ppm and pH 4, using 60 mg/L Fe-ZnO-NP, with a removal of 92.33% at 240 minutes under exposure to a 16-watt UV lamp (46). In comparison, the results of the current study showed that the 40 ppm naphthalene solution exhibited a higher removal efficiency at a shorter contact time (90 min) with the TiO2-N-S photocatalyst exposed to sunlight.
Masoudipour et al used TiO2-N-S NPs immobilized on glass microbullets to remove cyanide from aqueous media in the presence of sunlight, and reported a removal efficiency of 100% for the cyanide solution under the optimal conditions of 50 ppm concentration, pH 11, and 240 minutes (47). In our study, the optimal conditions for naphthalene removal occur in a shorter time and at a pH closer to neutral, which is cost-effective in terms of time and energy. Moreover, the amount of sunlight has a positive effect on the removal of naphthalene.
According to the results in Table 3, at a constant initial naphthalene concentration of 5 ppm and radiation levels of 528.42, 722.95, and 772.51 W/m2, the naphthalene removal efficiencies were 50.59%, 60.43%, and 70.75%, respectively. This shows that increasing the amount of irradiation increases naphthalene removal, while at similar radiation levels and different naphthalene concentrations, the concentration factor plays the decisive role in removal efficiency (45): the naphthalene removal efficiency at a concentration of 20 ppm and radiation of 522.17 W/m2 was 77.92%, and at a concentration of 25 ppm and radiation of 527.22 W/m2 it was 78.36%, although the radiation levels were not substantially different. As a result, the naphthalene removal efficiency increased at higher concentrations, which is related to the number and occupancy of surface sites at higher concentrations of the naphthalene solution (21).
Xiaolong et al studied the highly efficient destruction of PAHs by Ag3PO4-doped graphene oxide under sunlight. Seven minutes after exposure of the solution to the photocatalyst, the intermediate compounds 1-naphthol, 1,4-dinaphthol, 1,4-naphthoquinone, and 1,2-benzenedicarboxylic acid dialkyl ester were identified, and after 20 minutes of contact the two compounds 1,4-dinaphthol and 1,4-naphthoquinone were identified by GC-MS, which were eventually converted to the dialkyl ester and 1,2-benzenedicarboxylic acid (48). In the current study, by contrast, the intermediate compounds obtained from naphthalene oxidation under sunlight were converted to water and carbon dioxide using the TiO2-N-S photocatalyst.
Conclusion
In this study, the synthesis and immobilization of a thin layer of TiO2-N-S by the sol-gel method on a fixed bed (glass microbullets) was successfully performed. The composition of the thin layer, along with its morphological and structural characteristics, showed that the optical activity of the photocatalyst was shifted toward visible light, so it can be used under the pure and inexhaustible energy of the sun to remove polyaromatic and recalcitrant compounds and decompose them into harmless products such as water and carbon dioxide.
"year": 2021,
"sha1": "c119c1e44cf19d996125c78799f9f0030be9e294",
"oa_license": "CCBY",
"oa_url": "http://j.skums.ac.ir/PDF/jskums-23-34.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c119c1e44cf19d996125c78799f9f0030be9e294",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
We present LQR-CBF-RRT*, an incremental sampling-based algorithm for offline motion planning. Our framework leverages the strength of Control Barrier Functions (CBFs) and Linear Quadratic Regulators (LQR) to generate safety-critical and optimal trajectories for a robot with dynamics described by an affine control system. CBFs are used for safety guarantees, while LQRs are employed for optimal control synthesis during edge extensions. Popular CBF-based formulations for safety critical control require solving Quadratic Programs (QPs), which can be computationally expensive. Moreover, LQR-based controllers require repetitive applications of first-order Taylor approximations for nonlinear systems, which can also create an additional computational burden. To improve the motion planning efficiency, we verify the satisfaction of the CBF constraints directly in edge extension to avoid the burden of solving the QPs. We store computed optimal LQR gain matrices in a hash table to avoid re-computation during the local linearization of the rewiring procedure. Lastly, we utilize the Cross-Entropy Method for importance sampling to improve sampling efficiency. Our results show that the proposed planner surpasses its counterparts in computational efficiency and performs well in an experimental setup.
I. INTRODUCTION
Robot motion planning involves computing an optimal plan that guides a robot safely and efficiently towards a goal.Sampling-based planners, such as Probabilistic Road Map (PRM) [1] and Rapidly-exploring Random Trees (RRT) [2], have been widely used to solve this problem.With particular relevance to this work, there is also a an asymptotically optimal version of RRT, called RRT*, and its variants [3]- [5].Given the popularity of RRT*-based planning algorithms, there has been an extensive effort to reduce the sampling computational complexity by exploiting the problem structure.An example is Informed RRT*, [6], which outperforms RRT* in terms of convergence rate.
The above approaches assume a collision checking method exists during trajectory extension and sampling, such that unsafe states can be rejected during sampling to ensure safety.In addition, classic RRT or RRT* variants do not consider dynamics during planning, and therefore do not address the feasibility and control constraints.Several studies, such as those in [7] and [8], have enhanced the collision checking procedure in sampling-based motion planners.However, these improvements can still be computationally expensive and require a more generalized solution for various nonlinear systems.Recent work [9] employs geometric RRT* as guidance of deep reinforcement learning to improve the exploration efficiency.Similarly, the works in [10], [11] employ an importance sampling (IS) algorithm, namely the cross-entropy method (CEM), for efficient exploration and sampling for RRT*.Our work aims to create a complete sampling-based motion planning framework that can efficiently generate samples from an optimal distribution, while ensuring optimality and safety.Control Barrier Function (CBF) There has been extensive research on safety-critical control and motion planning using CBFs.The significant advantage of CBFs comes from their guarantees on forward invariance [12], i.e., if the system trajectory is initialized in a safe set, it will never leave the set if there exists a corresponding CBF.The standard formulation of a CBF-based controller involves Quadratic Programs (QPs) and applies the generated control inputs using zero-order hold (ZOH) controllers [13], whereby each QP is constrained by a Control Lyapunov Function (CLF) for stability and CBFs for safety (see [14]- [16]).For motion planning, the first CBF-based sampling-based motion planning algorithm was introduced in [17].In contrast to [18], [19], the forward invariance property from CBFs removes the requirement of explicit collision checking.The approach can generalize to any control affine nonlinear system, and the safety sets require no assumption on linearity or convexity.Later works, such as [11], [20], were built on top of it to improve its computational and sampling efficiency.Inspired by RRT* [21] and [17], the work in [11], Adaptive CBF-RRT*, implemented a rewiring procedure to improve the cost of the trajectory.However, the computation is costly due to the iterative calculation of the QP.Linear Quadratic Regulator(LQR) An LQR [22], [23] is a widely used optimal control strategy that minimizes a quadratic cost function over a system's states and control inputs.The algorithm computes an optimal state-feedback gain based on the system dynamics, which can be solved by an algebraic Riccati equation.It has proven effective for robotic systems with linear dynamics [24]- [27].Other variations, such as iterative LQR (iLQR) [22], [28] can handle nonlinear system dynamics [29]- [31].In these methods, the cost function is approximated using a second-order Taylor expansion.While effective for nonlinear systems, this method can be costly.Moreover, collision checking is required during edge extension and rewiring.
In prior works [11], [17], CLFs and CBFs are incorporated into the steering function to ensure stability and safety. The QP is solved iteratively to generate controls that steer the system trajectory toward the sampled new state. The CBF constraints guarantee that the resulting trajectory avoids collisions with obstacles. However, this approach has a few downsides: (i) solving a sequence of QPs in the steering function can be costly, especially when the step size is small [17]; (ii) the QPs can easily become infeasible [32]; and (iii) the QP controllers only ensure sub-optimal controls, as the formulation of the objective function only guarantees optimality point-wise in time [11].

Contribution. We propose an efficient, optimal, and safe sampling-based offline motion planner that accounts for system dynamics. The generated state trajectory can then be tracked by feedback controllers online. This work specifically focuses on offline optimal planning using RRT*-like approaches [3], [5], [29]. Therefore, dynamic obstacles and online planning are not within the scope of this paper. Our proposed approach is shown to significantly improve efficiency in several respects. In summary,

• Our method generates optimal controls during offline planning that minimize the LQR cost (2). We show superior efficiency via baseline comparisons. For nonlinear systems, we reduce the frequency of computing LQR gains for locally linearized models by storing the previously calculated feedback gains in a hash table, avoiding repetitive LQR calculations during the rewiring procedure.
• To reduce the computational cost and mitigate the infeasibility of the traditional CBF- and CLF-based QP formulation in [11], [17], our framework does not require formulating and solving a CBF-based QP. Moreover, our approach guarantees optimality thanks to the LQR control formulation.
• We used a customized omnidirectional robot to track the generated optimal trajectory in order to validate our method, and we showed that the robot successfully completed its navigation task in a cluttered environment.
II. PRELIMINARIES
For a continuously differentiable function h : R^n → R, we use ḣ to denote its derivative with respect to time t, and L_f^r h(x) to denote its r-th-order Lie derivative along f [33]. We call a continuous function α : (−b, a) → (−∞, ∞), for some a > 0, b > 0, that is strictly increasing with α(0) = 0, an (extended) class K function, denoted α ∈ K. Lastly, we denote by B_r(x) the ball of radius r centered at x ∈ R^n.
A. System Dynamics
Consider a continuous-time control-affine dynamical system

ẋ = f(x) + g(x)u,   (1)

with state space X and control space U, where x ∈ X ⊂ R^n, u ∈ U ⊂ R^m, and f(x) : R^n → R^n and g(x) : R^n → R^{n×m} are locally Lipschitz continuous. We define the obstacles as the union of the regions of X in which the robot's state coincides with an obstacle, X_{i,obs} ⊂ X, i = 1, ..., N_obs, and the obstacle-free set X_safe := X \ ∪_{i=1}^{N_obs} X_{i,obs}.
B. Linear Quadratic Regulator
We use LQR to compute optimal control policies. The cost function over the infinite time horizon is defined as

J = ∫_0^∞ ( x(t)^T Q x(t) + u(t)^T R u(t) ) dt,   (2)

where Q = Q^T ⪰ 0 and R = R^T ≻ 0 are weight matrices for the state x and the control u, respectively. For linear system dynamics, we can compute the closed-form solution for the optimal control. Given a linear time-invariant system

ẋ = Ax + Bu   (3)

and the cost function (2), we can compute the LQR gain matrix K_LQR by solving the algebraic Riccati equation with cost matrix P,

A^T P + P A − P B R^{−1} B^T P + Q = 0,   (4)

such that the optimal control policy is

u*(x) = −K_LQR x,   (5)

where K_LQR = R^{−1} B^T P [34] and P is the stabilizing solution of (4). For nonlinear systems ẋ = F(x, u), a linearization can be made w.r.t. an equilibrium point (x_eq, u_eq) using a first-order Taylor expansion,

ẋ ≈ F(x_eq, u_eq) + (∂F(x_eq, u_eq)/∂x)(x − x_eq) + (∂F(x_eq, u_eq)/∂u)(u − u_eq).

Based on the equilibrium point, the linearized system can be rewritten as ẋ̂ = Â x̂ + B̂ û, with Â = ∂F(x_eq, u_eq)/∂x, B̂ = ∂F(x_eq, u_eq)/∂u, x̂ = x − x_eq, and û = u − u_eq. Finally, we can compute the optimal gain K_LQR from the Riccati equation (4).
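As a minimal sketch of this computation (not the authors' code), the gain can be obtained with SciPy's continuous-time ARE solver; the double-integrator matrices below are an illustrative choice:

```python
# Minimal sketch: LQR gain from the continuous-time algebraic Riccati equation.
# solve_continuous_are solves A^T P + P A - P B R^{-1} B^T P + Q = 0.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Return K such that u* = -K x minimizes the infinite-horizon cost (2)."""
    P = solve_continuous_are(A, B, Q, R)   # stabilizing ARE solution
    return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B^T P

# Example: one axis of a double integrator, x = [position, velocity], u = acceleration.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))
```

Because K depends only on (A, B, Q, R), this solve happens once per (linearized) model, which is the property the efficiency improvements of Sec. V build on.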
C. Higher Order Control Barrier Functions
Given a continuously differentiable function h(x) : R^n → R, we denote the i-th derivative of h(x) with respect to time t along (1) as h^(i)(x). The relative degree r_b ≥ 1 is defined as the smallest natural number such that the control input u appears explicitly in h^(r_b)(x).

We now formally introduce the definition of a CBF. Given the time-varying function ψ_0(x, t) := h(x), we define the sequence of functions

ψ_i(x, t) := ψ̇_{i−1}(x, t) + α_i(ψ_{i−1}(x, t)), i = 1, ..., r_b,   (7)

where α_1, ..., α_{r_b} are class K functions. From this, we denote a series of safety sets based on the ψ_i as

C_i := {x ∈ R^n : ψ_{i−1}(x, t) ≥ 0}, i = 1, ..., r_b.   (8)

Definition 1. [35] Given the functions defined in (7) and the safety sets (8), the r_b-th order function h : R^n × [t_0, ∞) → R is a Higher Order Control Barrier Function (HOCBF) for system (1) if there exist class K functions α_1, ..., α_{r_b} such that

ψ_{r_b}(x, t) ≥ 0, ∀x ∈ C_1 ∩ ... ∩ C_{r_b}.   (9)

Then the state trajectory is always safe [12], i.e., the system trajectory remains in C := C_1 ∩ ... ∩ C_{r_b}.
Definition 2. Given an initial state x 0 = x(0), the set C is called forward invariant if for every x 0 ∈ C, x(t) ∈ C, ∀t.
Theorem 1. [12] Given system (1) and a continuous function h, if there exists an HOCBF as in (7)-(9), then the set C is forward invariant (i.e., the system is safe).
Definition 3 (Safe Trajectory). We define a safe trajectory for system (1) in C as σ := (u, x), where, given control inputs u : [0, T] → U, the produced system trajectory x(t) satisfies (1) and x(t) ∈ C for all t ∈ [0, T].

Definition 4 (Trajectory Cost). Given a produced control and state trajectory σ = (u, x), with u : [0, T] → U, we define the state and control trajectory cost as

c(σ) := ∫_0^T ( x(t)^T Q x(t) + u(t)^T R u(t) ) dt,

where Q and R are the same weight matrices defined in (2).
III. PROBLEM FORMULATION
Problem 1. Consider system (1) with an initial state x(0) = x_init ∈ X_init, where X_init is an initial obstacle-free set, and a bounded goal region X_goal = B_{r_goal}(x_goal) for some pre-defined radius r_goal, such that X_goal ⊂ C. Find control inputs u : [0, T] → U, where T ∈ R_+ is the time horizon, that produce a path σ such that x(T) ∈ X_goal and x(t) ∈ C, ∀t ∈ [0, T], with the cost of the path being optimal, i.e., c(σ) = c*(σ), the minimum of c over all safe trajectories satisfying the above constraints.

IV. LQR-CBF-RRT*

In this section, we introduce the LQR-CBF-RRT* algorithm (Algorithm 1) as an extension of CBF-RRT [17] that incorporates checking of the HOCBF constraints (9) for collision avoidance, and the LQR policy (5) for optimal control synthesis. We aim to solve Problem 1 using sampling-based motion planning. Our framework encodes the goal-reaching and safety requirements during the control synthesis procedure in the steering function (Lines 5 and 11 in Algorithm 1). To ensure asymptotic optimality [36], a rewiring procedure (Line 16 in Algorithm 1) is included. At each iteration, a state, denoted x_samp, is sampled from a uniform distribution on the configuration space. The sample x_samp is then used as input to the function NearbyNode to find a set X_near of its nearby nodes. Nearest is then used to select its closest neighbor node x_nearest in the tree. Next, the function LQR-CBF-Steer (Algorithm 2) is used to synthesize controls based on the computed optimal gain and steer the state from x_nearest towards x_samp. The resulting trajectory is denoted σ, and its end node is x_new. In ChooseParent, the function takes the computed x_new and the set X_near and computes state trajectories, and the corresponding costs, from x_new to all the nearby nodes in X_near. The trajectory with minimum cost, denoted σ_min, is used, and its end node is defined as x_min. After this step, x_new and its edge are added to the tree. To asymptotically optimize the existing tree, the Rewire function is used. During this procedure, LQR-CBF-Steer is used to check whether the cost of each nearby node x_nn ∈ X_near can be improved. The x_nn that returns the minimum cost is rewired to x_new, and the corresponding cost is updated.
A. LQR-CBF-RRT* Algorithm
The detailed algorithm is introduced as follows. We define a tree T = (V, E), where V is a set of nodes and E is a set of edges.
• NearbyNode utilizes a pre-defined Euclidean distance d to find a set X_near of nearby nodes in T that are closest to x_samp.

• Nearest returns the nearest node from X_near w.r.t. x_samp.

• LQRSolver (Algorithm 1, Line 4) computes the optimal gain matrix K_LQR from the Riccati equation (4). For a nonlinear system, the system is linearized locally [29] and K_LQR is computed from the linearized system dynamics.

• ChooseParent (Algorithm 1, Lines 10-14) tries to find collision-free paths between x_new and all of its neighboring nodes. If there exists a collision-free path between x_new and x_nn ∈ X_near, the corresponding cost is calculated. The procedure then selects the x_nn with the lowest cost (2) as the parent of x_new.

• Rewire (Algorithm 1, Lines 16-20) evaluates and optimizes the LQR cost (2). This function checks a selected node's neighborhood and calculates the costs w.r.t. all the neighboring nodes. A new state trajectory from the current node to the nearby node is added to T.

• LQR-CBF-Steer The steering function (Algorithm 2) contains two components: (i) the LQR controller and (ii) the CBF safety constraints. Given two states (x_current, x_next), the LQR controller generates a sequence of optimal controls u*_i that steer the state trajectory based on (3). At each time step, the CBF constraints (9) are checked to ensure that the generated path is collision-free. If none of the CBF constraints are violated, LQR-CBF-Steer steers x_current to x_next, and node x_next is added to the tree. Otherwise, the extension stops at the first violation of (9). Then, the end state x_new and its trajectory σ are added to the tree.

Remark 1. Thanks to the forward invariance property of CBFs [17], no explicit collision checking is required. The state trajectory and controls generated by LQR-CBF-Steer are guaranteed to be safe.
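To make the steering logic concrete, the following is a minimal sketch of the idea behind LQR-CBF-Steer for a linear system; it is not the authors' implementation, and the helper cbf_ok (a user-supplied check of the HOCBF conditions (9)), the step size dt, and max_steps are illustrative assumptions.

```python
# Hedged sketch of LQR-CBF-Steer: roll the closed-loop dynamics forward and
# keep only the prefix of the trajectory on which every CBF check passes.
import numpy as np

def lqr_cbf_steer(x_current, x_next, A, B, K, cbf_ok, dt=0.05, max_steps=100):
    """Steer x_current toward x_next; stop at the first CBF violation.

    K is the (m, n) LQR gain; cbf_ok(x) returns True iff the HOCBF
    conditions (9) hold at x. Returns the safe trajectory prefix and
    the controls that produced it.
    """
    xs, us = [np.asarray(x_current, dtype=float)], []
    for _ in range(max_steps):
        x = xs[-1]
        u = -K @ (x - x_next)                # LQR tracking of the local goal
        x_new = x + dt * (A @ x + B @ u)     # forward-Euler rollout
        if not cbf_ok(x_new):                # terminate on the first violation,
            break                            # keeping the safe prefix
        xs.append(x_new)
        us.append(u)
        if np.linalg.norm(x_new - x_next) < 1e-2:
            break
    return np.array(xs), np.array(us)
```

Note that, as in the text, a violation does not discard the whole extension: the safe prefix up to the violating step is kept.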
V. EFFICIENCY IMPROVEMENTS
In this section, we develop three strategies to improve the efficiency of the sampling-based process. Notably, when integrating RRT* with both linear and nonlinear systems, Section V-A shows how the designed LQR steering function performs better than offline MPC and iLQR. Moreover, since repetitively solving the CBF-QP is computationally expensive, Section V-B designs a QP-free mechanism for checking the CBF constraints. Lastly, in Section V-C we propose an adaptive sampling method to further enhance efficiency and optimality.
A. Efficient LQR computation
For each steering process, applying MPC [37] or iLQR [38] requires iteratively solving optimization problems multiple times to obtain an optimal control. These approaches are not efficient, as sampling-based methods need to invoke the steering function repeatedly to explore the state space.
This leads to the fundamental motivation for applying LQR in our framework, i.e., providing an efficient steering process. For linear dynamical systems ẋ = Ax + Bu, the LQR feedback gain K_LQR can be computed through the Riccati equation (4), i.e., K_LQR = LQR(A, B, Q, R), which depends only on the pre-modeled matrices A, B, Q, R. We only need to compute it once and can reuse it in every steering process.
For nonlinear systems, we can linearize around a local goal, taken as the equilibrium point. To reduce the computational burden, we solve for the gain only once per steering process. While this already outperforms offline MPC and iLQR in efficiency, the computation time can be further improved. More specifically, the linearization and the computation of the optimal control gain for the approximated dynamical system during the Rewire procedure are costly. To mitigate this issue, we store the LQR feedback gain for each local goal of the steering process in a hash table, reducing the computation during the rewiring procedure (Lines 16-18 in Algorithm 1).
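As a sketch of the hash-table idea (not the paper's actual data structure), one can key cached gains on a discretized linearization point, so that rewiring queries near a previously processed local goal reuse the stored gain; linearize and lqr_gain below are assumed helpers, and the rounding grid is an illustrative choice:

```python
# Illustrative sketch of caching LQR gains for locally linearized models.
import numpy as np

class GainCache:
    def __init__(self, linearize, lqr_gain, Q, R, grid=0.1):
        self.linearize, self.lqr_gain = linearize, lqr_gain
        self.Q, self.R, self.grid = Q, R, grid
        self.table = {}                        # the hash table of gains

    def gain(self, x_eq, u_eq):
        # Round the linearization point so nearby local goals share a key.
        key = tuple(np.round(np.concatenate([x_eq, u_eq]) / self.grid).astype(int))
        if key not in self.table:              # linearize + solve the ARE only once
            A, B = self.linearize(x_eq, u_eq)
            self.table[key] = self.lqr_gain(A, B, self.Q, self.R)
        return self.table[key]
```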
B. QP-free checking of CBF constraints

The existing works [11], [17] formulate QP controllers and iteratively solve for the controls point-wise in time during the ChooseParent and Rewire procedures, which leads to the following problems: first, it requires extensively solving QPs over time; second, hyper-parameters have to be tuned to ensure the feasibility of the QP [32].

In this work, instead of solving the CBF-CLF-QP, or a CBF-QP with minimum perturbation of a given reference controller, we only check whether the CBF constraints (9) are satisfied and use them as the termination condition of the steering process. For example, given a reference trajectory x_0 u_0 x_1 ... u_{n−1} x_n from the LQR optimal control, meant to reach the sampled goal from the current state x_current = x_0, we check whether moving from x_i to x_{i+1} via control u_i satisfies the CBF condition for all i ∈ {0, 1, ..., n−1}. We terminate the extension process if the CBF condition is violated at any step i. Differently from [20], we do not discard the whole reference path if it violates the CBF condition; instead, we keep the prefix of the trajectory on which the CBF constraint is always satisfied.
C. Adaptive Sampling
We combine LQR-CBF-RRT* with an adaptive sampling procedure [11] that focuses sampling on promising regions of X_safe, in order to approximate the solution of Problem 1 with fewer samples. We define G as the set of state-control trajectory pairs, i.e., G := {(x(t), u(t)) | x(t) ∈ X_safe, ∀t ∈ [0, T]}. The CEM [39] is used for IS; it is a multi-stage stochastic optimization algorithm that iterates the following steps: first, it generates samples from the current Sampling Density Function (SDF) and computes the cost of each sample; second, it chooses an elite subset E of the generated samples whose cost is below some threshold; finally, the elite subset is used to estimate a probability density function (PDF) as if the elites were drawn as i.i.d. samples. We define m to be the number of elite trajectories. The CEM was first used in [40] for RRT* importance sampling with Gaussian mixture models (GMMs). In this paper we use the CEM IS procedure implemented in [11] (Algorithm 3), since it utilizes a weighted Gaussian kernel density estimate (WGKDE) for estimating the PDF of the elite samples. The number of mixtures in a GMM has to be picked based on the workspace and is difficult to tune; using a WGKDE mitigates this challenge.
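The following is a minimal sketch of one CEM iteration with a weighted Gaussian KDE, in the spirit of Algorithm 3; sample_sdf and trajectory_cost are assumed helpers, and the exponential cost-to-weight mapping is an illustrative choice, not necessarily the one used in [11]:

```python
# Hedged sketch of one CEM importance-sampling iteration with a WGKDE.
import numpy as np
from scipy.stats import gaussian_kde

def cem_update(sdf, sample_sdf, trajectory_cost, n_samples=200, m=20):
    """Draw samples from the current SDF and refit it on the elite subset."""
    samples = sample_sdf(sdf, n_samples)          # shape (dim, n_samples)
    costs = np.array([trajectory_cost(s) for s in samples.T])
    elite_idx = np.argsort(costs)[:m]             # lowest-cost elite set E
    elites = samples[:, elite_idx]
    weights = np.exp(-costs[elite_idx])           # weight cheaper samples more
    weights /= weights.sum()
    return gaussian_kde(elites, weights=weights)  # new SDF as a weighted KDE
```

Unlike a GMM, the KDE bandwidth is selected automatically from the data, which is the tuning advantage mentioned above.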
VI. PROBABILISTIC COMPLETENESS AND OPTIMALITY
We provide the probabilistic completeness proof for our algorithm. Following [21], we assume that Problem 1 is robustly feasible with minimum clearance ε > 0. Hence, there exists u ∈ U that produces a trajectory ϕ of system (1) with ϕ(0) = x_init and ϕ(T) ∈ X_goal. The assumptions on the sets of initial and goal states are required to ensure the existence of a solution. Without such assumptions, there is a nonzero probability that certain states x_init, x_goal will be rejected, and the state trajectory will never reach the goal. In practice, we can ensure the assumption holds by choosing appropriate HOCBF hyper-parameters.

Lemma 1. [41] Given two trajectories ϕ and ϕ′, as well as a period T ≥ 0, such that ϕ(0) = ϕ′(0) = x_init, the distance between the trajectories can be bounded in terms of the difference of their controls (the bound (12)).

Lemma 1 ensures that, for any two trajectories with the same initial state, the distance between their end states at time T is bounded by the largest difference in their controls.

Remark 3. Consider steering from any x_s ∈ X_safe towards any reachable x_f ∈ X_safe using Algorithm 2. By carefully selecting appropriate constants for the class K functions α_1, ..., α_ρ of the HOCBF (see (9)), we can produce controls u(t_0), u(t_1), ..., u(T_OL) such that the bound (12) gives ∥x_f − x′_f∥ ≤ µ, where x_f and x′_f are the states at time T_OL of the trajectories produced under the open-loop control inputs u_OL and the CBF-QP control inputs, respectively, and µ ∈ R_{>0}.
Theorem 2. LQR-CBF-RRT* (Algorithm 1) is probabilistically complete.

Proof. The completeness of RRT* is implied by the completeness of RRT [21]. Therefore, we need to prove the completeness of LQR-CBF-RRT, which can be obtained from LQR-CBF-RRT* by omitting Lines 9-14 and Lines 16-21 in Algorithm 1. Given the local motion planner (Algorithm 2), and by leveraging Theorem 2 in [42], we need to prove that the incremental state trajectory propagates through a sequence of intermediate states until reaching X_goal. Assume the trajectory ϕ of the solution of Problem 1 has clearance ε and length L. Considering m + 1 equidistant states x_i ∈ ϕ, i = 1, ..., m+1, where m = 4L/ε, we define a sequence of balls of radius ε/4 centered at these states; for state x_i, such a ball is B_{ε/4}(x_i). For consecutive states x_i, x_{i+1} ∈ ϕ, we want to prove that, starting from x_s ∈ B_{ε/2}(x_i), the steering function LQR-CBF-Steer is able to generate control and state trajectories whose end state x′_f falls in B_{ε/4}(x_{i+1}). Given Remark 3, we assign η = ε/4 + µ + 2ι, with 0 < ι < ε/4 − µ. Next, we place the balls B_η(x_s) and B_{ε/4−µ−ι}(x_{i+1}) at x_s and x_{i+1}, respectively. Let S := B_η(x_s) ∩ B_{ε/4−µ−ι}(x_{i+1}) denote the set of successful potential end-states. For any x_f ∈ S, LQR-CBF-Steer generates trajectories that fall in B_µ(x_f) ⊂ B_{ε/4}(x_{i+1}). Denoting by |·| the Lebesgue measure, the probability of generating states in S from x_s is p = |S|/|X|, which is strictly positive. The probability p can be interpreted as the success probability of the k-Bernoulli-trials process [42] that models generating m successful sampling outcomes that incrementally reach X_goal. The rest of the proof follows the proof of Theorem 1 in [42].
Next, we address the asymptotic optimality with respect to a safety region, which is conservative and which we are able to compute.
Assumption 1. We assume that we have access to a (conservative) safety set C ⊆ X_safe in which the procedure LQR-CBF-Steer(x_current, x_next) is able to produce a trajectory that converges to x_next.
Assumption 1 implies that the optimal path in the original state space may not be realizable by the proposed algorithm due to the CBF constraints. Rather, the algorithm will produce an asymptotically optimal path in a conservative subset of the environment. Provided that our algorithm is an RRT* variant with an LQR-based local motion planner and a CBF-based sample rejection method, the following theorem provides an asymptotic optimality result. At the i-th iteration of LQR-CBF-RRT*, we consider the produced path σ_i, which connects x_init to an x(T) ∈ X_goal, as a concatenation of paths produced by LQR-CBF-Steer.

Theorem 3. Let c*(σ) be the cost of the optimal solution of Problem 1; then LQR-CBF-RRT* (Algorithm 1) produces an asymptotically optimal solution in C to Problem 1, i.e., P(lim_{i→∞} c(σ_i) = c*(σ)) = 1.

Proof. (Sketch) Given Assumption 1, the asymptotic optimality of the solution follows directly from Theorem 5 in [43].

VII. EXPERIMENTAL RESULTS

In this section, we illustrate our framework on two different systems. All experiments have the same workspace configuration, with initial state x_init = [2, 2] and goal state x_goal = [30, 24]. We consider circular obstacles, and the simulations are performed on a MacBook Pro with an M1 Pro CPU. The code is available online (see footnote 1).
A. Baseline Summary
We compared our framework along several dimensions, including avoiding the construction of the QP, storing previous LQR feedback gains, and adaptive sampling. For the nonlinear system, we have: (1) our method; (2) LQR-CBF-RRT* without storing feedback gains for nonlinear systems, i.e., naive adaptive LQR-CBF-RRT*; (3) LQR-CBF-RRT* without either adaptive sampling or stored feedback gains; (4) QP-based LQR-CBF-RRT*. For the linear system, we have: (1) our method; (5) LQR-CBF-RRT* with a QP solver during the steering processes, i.e., LQR-CBF-RRT*-QP. The reason we use a separate system for this comparison is that the QP controller with the nonlinear system is too conservative, i.e., many of the QPs can become infeasible.

Nonlinear System. We conduct a performance comparison on a unicycle model. We perform five experiments with 2000 iterations for each baseline, using different random seeds. In Table I, we list the time it takes to complete the motion planning for the different baselines. Based on the results, implementing a hash table for the optimal gain K_LQR significantly improves performance, as it avoids the repeated linearization and recalculation of the gain matrices.
The graphical result can be found in Figure 3.

Linear System. For the linear system comparison, our proposed method is about 86% faster than LQR-CBF-RRT*-QP (Baseline (5)). The result can be found in Table II. Our method is also compared with an offline MPC with a time horizon of 5 and a discrete time interval of 0.05. The MPC-based method went through 2000 iterations over 41 minutes during the experiment. Finally, we performed an optimal Euclidean distance comparison (Fig. 4). The result shows that LQR-CBF-RRT* does optimize the Euclidean distance w.r.t. CBF-RRT.
B. Numerical Example 1: Double Integrator Model
In this example, we apply our sampling-based motion planner to a double integrator with linear dynamics and state [x_1, x_2, x_3, x_4]: ẍ_1 = u_1 and ẍ_3 = u_2, where [x_1, x_3] is the position and [x_2, x_4] is the velocity. We control the system through the acceleration [u_1, u_2]. For the i-th obstacle, the corresponding safety set can be defined through the function h_i(x) = (x_1 − o_x^i)² + (x_3 − o_y^i)² − r_i², where [o_x^i, o_y^i] is the centroid of the i-th obstacle and r_i is its radius. We define a CBF constraint ζ_i for obstacle i and perform constraint checking with ζ_i in LQR-CBF-Steer during both the edge extension and rewiring procedures. The satisfaction of the CBF constraints ζ_i ≥ 0, ∀i, guarantees that the generated controls and state trajectories are safe. We iterate through a total of 2500 steps, and the final result is shown in Figure 1d.

Fig. 4: We compare our method w.r.t. other planning methods in terms of the optimal distance cost.

Fig. 5: We use a customized DJI Robomaster robot as our experimental platform [44]. The robot is equipped with a Raspberry Pi for onboard computation.
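Since the explicit form of ζ_i is not reproduced above, the following is a plausible reconstruction for the circular obstacle function h_i defined in this example, using linear class K functions α_j(s) = a_j s with hypothetical constants a_1, a_2 > 0 (h_i has relative degree 2 w.r.t. the acceleration inputs):

ḣ_i = 2(x_1 − o_x^i) x_2 + 2(x_3 − o_y^i) x_4 ,

ζ_i := ḧ_i + (a_1 + a_2) ḣ_i + a_1 a_2 h_i ≥ 0 ,

where ḧ_i = 2(x_2² + x_4²) + 2(x_1 − o_x^i) u_1 + 2(x_3 − o_y^i) u_2 contains the controls explicitly, so checking ζ_i ≥ 0 along the LQR rollout constrains the accelerations.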
C. Numerical Example 2: Unicycle Model
In this example, we validate our framework on a unicycle model with system dynamics ẋ_1 = v cos(θ), ẋ_2 = v sin(θ), θ̇ = ω, where [x_1, x_2] is the position and θ is the heading angle. The control input u = [v, ω] consists of the translational and angular velocity. Given these system dynamics, the control inputs v and ω have mixed relative degree (v has relative degree 1 and ω has relative degree 2), which does not allow us to construct a single CBF constraint directly. To bypass this issue, we fix the translational velocity v and only enforce CBF constraints through the angular velocity ω; the CBF constraint for the i-th obstacle is then constructed accordingly. The algorithm performs 3000 iterations before termination, and the result can be found in Figure 2.
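The explicit constraint is elided in this copy of the text; a plausible second-order construction with fixed speed v and hypothetical linear class K coefficients a_1, a_2 > 0 is:

h_i(x) = (x_1 − o_x^i)² + (x_2 − o_y^i)² − r_i² ,

ḣ_i = 2v [ (x_1 − o_x^i) cos θ + (x_2 − o_y^i) sin θ ] ,

ζ_i := ḧ_i + (a_1 + a_2) ḣ_i + a_1 a_2 h_i ≥ 0 ,

where ḧ_i = 2v² + 2vω [ −(x_1 − o_x^i) sin θ + (x_2 − o_y^i) cos θ ] depends on ω through θ̇ = ω, so that ζ_i constrains the angular velocity only, as stated above.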
D. Hardware Experiment
To evaluate the effectiveness of our planner in a real-world scenario, we performed an experiment with an omnidirectional robot (Figure 5). The robot was tasked with navigating through a cluttered environment featuring six obstacles. The obstacles are over-approximated by circular shapes with a radius of 0.25 meters for constructing the CBFs. To accurately track the robot's position, we employed an external telemetry system (OptiTrack), effectively simulating an outdoor GPS-enabled environment. The onboard Raspberry Pi computer runs ROS2 for communicating with the positioning system and processing control commands. A customized software stack, called Freyja [45], handles the robot's low-level control. In this scenario, we first used our planner to generate the optimal trajectory offline (shown in Figure 6). Then, the robot used its own online MPC controller to track the generated trajectory plan. The result (Figure 7) shows that the robot is able to efficiently navigate to the pre-defined goal without any collisions. The video is available online (see footnote 2).

Fig. 6: Before deploying the robot, we tested the planner in simulation. In the visual representation, each green edge denotes a sampled trajectory, while the red trajectory illustrates the final solution obtained after 3,000 iterations.

Fig. 7: An omnidirectional ground robot safely navigates around obstacles using our planner. The path followed by the robot is highlighted in blue.
VIII. CONCLUSION
In this paper, we formulated an offline sampling-based motion planning problem that optimizes the LQR cost function while ensuring safety. We employ the CEM to further boost the efficiency of our algorithm during the sampling stage. Notably, our technique outperforms benchmark counterparts in comparative tests and demonstrates robust results in real-world experiments.
Fig. 1: We performed simulations on a double integrator (1a to 1d) with adaptive sampling. Plots 1a (500 steps) and 1b (2500 steps) demonstrate how our algorithm explores the workspace; 1c and 1d illustrate how the SDF distribution is generated as the number of elite samples increases.
Fig. 2: We performed the simulation on a unicycle model (2a to 2d) with adaptive sampling. The same environment configuration applies to the nonlinear system; 2a (500 steps) and 2b (3000 steps) show the generated state trajectories, while 2c and 2d show the SDF level set and elite samples for the unicycle model.
Fig. 3: Our method outperforms the other three cases by a significant margin. The last column (with QP) corresponds to the method implemented in [17].
TABLE I: Efficiency Comparison (Unit: Seconds)
TABLE II: Efficiency Comparison (Unit: Seconds)
"year": 2023,
"sha1": "703d9c169405613e88b8c0722c3923f8b603c0d8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "703d9c169405613e88b8c0722c3923f8b603c0d8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
The theory of the preroughening transition of an unreconstructed surface, and the ensuing disordered flat (DOF) phase, is formulated in terms of interacting steps. Finite terraces play a crucial role in the formulation. We start by mapping the statistical mechanics of interacting (up and down) steps onto the quantum mechanics of two species of one-dimensional hard-core bosons. The effect of finite terraces translates into a number-non-conserving term in the boson Hamiltonian, which does not allow a description in terms of fermions, but leads to a two-chain spin problem. The Heisenberg spin-1 chain is recovered as a special limiting case. The global phase diagram is rich. We find the DOF phase is stabilized by short-range repulsions of like steps. On-site repulsion of up-down steps is essential in producing a DOF phase, whereas an off-site attraction between them is favorable but not required. Step-step correlation functions and terrace width distributions can be directly calculated with this method.
I. INTRODUCTION
The surface roughening transition and the nature of the rough phase are theoretically very well understood by a variety of approaches ranging from phenomenological descriptions, based on the sine-Gordon model, to microscopic solid-on-solid (SOS) models. [1] The preroughening transition (PR) and the ensuing disordered flat phase (DOF), both predicted several years ago by Rommelse and den Nijs, [2] have also been studied and characterized within certain restricted solid-on-solid (RSOS) models. [3][4][5][6][7][8][9] Although the physics behind these models, and the ingredients stabilizing the DOF phase, have been discussed in some detail, it is still useful to explore this subject from a different, and perhaps more physically appealing, perspective. The RSOS models, in particular, do not directly emphasize steps, terraces, and kinks, which on the other hand are very crucial actors in these transitions. This need is made more urgent by the recent experimental evidences for preroughening on rare gas solid (111) surfaces, [10,11] which calls for a detailed reinvestigation of the step-step interactions, [8] or the reconstructive tendencies, [5,6,9] crucial to obtain a DOF phase.
In the context of roughening, Villain and Vilfan, [12][13][14] den Nijs, [15] and Balents and Kardar, [16] for the case of reconstructed surfaces, and Villain, Grempel, and Lapujoulade, [17] for the case of vicinal surfaces, have shown that a more phenomenological approach based on working directly with steps yields a very direct picture of the physics involved.
Anisotropic surfaces, in particular, have a definite direction of stronger bonding along which a step tends to run; kinks on such steps, involving the breaking of strong bonds, are energetically expensive. The (110) face of fcc noble metals (Au, Pt, Ag, ...) is a physical realization of an anisotropic surface, with a stronger, compact, [110] direction, and a softer [001] direction. In the strongly anisotropic limit, the transfer matrix problem for the system of steps can be mapped onto the imaginary-time evolution of a system of quantum particles in one dimension (1D), the imaginary time being the preferred direction in which the steps run. [18][19][20] This is a well known mapping, heavily exploited, for instance, in the theory of uniaxial commensurate-to-incommensurate transitions of adsorbates. [20,19] Following a line of thought initiated by Ref. [13], Balents and Kardar applied the full machinery of the theory of interacting fermions in one dimension to explore the possible phase diagrams of generic (p × 1) reconstructed anisotropic surfaces. [16] In their approach, steps of double height are forbidden (as energetically too expensive), while up and down monoatomic steps are mapped onto spin-1/2 fermions in 1D, described (in the continuum limit) by the Hamiltonian of Eq. (1) of Ref. [16], where γ is the inverse line tension of a step, V_{σσ′}(x) is an interaction between steps, and the remaining notation is standard. This Hamiltonian describes infinite steps, traversing the entire length of the sample, as the number of particles is conserved by the (imaginary-time) dynamics. In reality, steps on surfaces can form finite defects by closing into loops (i.e., finite terraces) on the surface. [13,16] The order of the reconstruction p dictates, through the symmetry of the different ground states, the form of the "loop" terms which are allowed. [13,16] For a (p × 1) reconstructed surface, the Hamiltonian H has to be supplemented with a loop term H_LOOP (Eq. (2)), involving a cut-off distance a of the order of the lattice constant. Balents and Kardar argued, by power counting, that the effect of finite terraces, i.e., the introduction of H_LOOP, is irrelevant for p > 2, marginal for p = 2, and strongly relevant for p = 1. They went on to address in detail the p = 2 case, of relevance to the (110) missing-row reconstructed facet of Au. (See also Ref. [15] for closely related work on the p = 2 case.) However, the unreconstructed (p = 1) case was not pursued further. The approach we take in the present paper is similar in spirit. Our specific goal, however, is to address the question of the presence of a DOF phase, and the preroughening transition, for unreconstructed surfaces. Thus, in the classification introduced above, we are now interested in detail in the p = 1 case. Technically, this leads, as we shall see, to significant differences with respect, for instance, to Ref. [16], and to a new phase diagram quite different from the p > 1 cases.
The crucial point is that for p = 1 the loop terms (see Eq. (2)) are of the BCS-like form λ∫dx (ψ_↑(x)ψ_↓(x) + H.c.), i.e., a strongly relevant one-body piece. This might appear as just a minor complication at first glance, since quadratic terms can easily be diagonalized by a Bogoliubov transformation. Closer consideration, however, leads one to reconsider the whole mapping. Fermionic minus signs have no role whatsoever in the mapping of a classical statistical mechanics problem. The natural statistics to use is always the bosonic one. In the present case, a hard-core constraint is necessary in order to implement the appropriate configurational space (for instance, the non-crossing constraint for steps of the same type). In a one-dimensional quantum problem, the choice of statistics is quite often not a big problem, as we can transform, by a Wigner-Jordan transformation, hard-core bosons into fermions, with a transformed Hamiltonian of exactly the same form (only boundary conditions have to be considered carefully). In our case, however, pairing terms of the type a_{i,↑} a_{i+1,↓}, which (see below) are essential to describe finite terraces, do not transform into simple fermionic BCS-like terms, and become non-local after a Wigner-Jordan transformation. This forces us to work with hard-core bosons.
Our approach, in summary, is as follows. We assume, as in Ref. [16], that the only relevant extended defects are monoatomic steps, which can be either up or down. Steps of the same kind are forbidden to cross, while steps of different type can cross. Moreover, steps interact with each other, have kinks, and can form finite terraces on the surface. These steps are then mapped onto world-lines of hard-core bosons in one dimension. Kinks on the steps correspond to hopping terms in the quantum Hamiltonian. Pairs of up-down steps which are created and annihilated to form finite terraces on the surface, give number-non-conserving terms in the quantum Hamiltonian. [20] Pairwise interactions between the steps are taken into account by corresponding two-body terms in the quantum model.
The present work is concerned with the case of a low-index unreconstructed surface. Extensions to the case of vicinals, for which the long-ranged nature of the step-step interactions is an essential ingredient, are left to a future study.
Our main goals in working out this type of approach to PR are the following: (a) to build a formulation providing a more direct access to the physics of PR, which is somewhat hidden in the RSOS formulations; (b) to explore more directly the role of step-step interactions; (c) to study step-step correlation functions and terrace width distributions, not available so far. As it turns out, we have found that this approach is quite successful on all three accounts.
The paper is organized as follows. In Sec. II we present in detail the classical statistical mechanics model of interacting steps, which is then mapped onto the corresponding 1D quantum model of hard-core bosons in Sec. III. In Sec. IV we consider in detail the spin-1 limit, obtained by setting the on-site step-step repulsion to infinity, and then map the general case to a problem of two spin-1/2 Heisenberg chains. Sec. V contains a summary of bosonization plus finite-size scaling calculations done in order to extract the phase diagram of the model. Sec. VI summarizes the relevant order parameters and correlation functions investigated. In Sec. VII we present our results for the overall phase diagram of the model. Sec. VIII illustrates our results for the step-step correlations and the terrace width distributions. Finally, Sec. IX contains a discussion of the results and some conclusive remarks.
II. THE MODEL
We assume the only relevant extended defects involved in the surface PR transition to be steps, which can be either up or down. These steps interact with each other, they can have kinks, and they can form finite terraces on the surface. Fig. 1 shows a schematic picture of a surface with steps.
Our model will be defined on a square lattice, and we will assume the steps to run preferentially in one direction (the vertical direction in Fig. 1). The steps are only allowed to make simple nearest-neighbor kinks. Steps running in the horizontal direction are assumed to be energetically expensive and are neglected. Hence, our surface is, by construction, very highly anisotropic. We define the model by assigning its transfer matrix along y. Denoting by |S(j)⟩ and |S(j + 1)⟩ the configurations of the j-th and (j + 1)-th horizontal strips, we have

⟨S(j + 1)| T |S(j)⟩ = exp{ −β [ δ_S N_S^(j) + δ_K N_K^(j,j+1) + Σ_s δ_{T_s} N_{T_s}^(j,j+1) + δ_ex N_ex^(j,j+1) + V_step−step ] } ,   (3)

where β is the inverse temperature, and (see also Fig. 2):
• δ ex N (j,j+1) ex is the energy associated to the crossing of N (j,j+1) ex pairs of opposite steps between strip j and j + 1; • V step−step = V + V ⊥ , with V and V ⊥ describing respectively the interaction between steps of the same kind and of the opposite kind. For V , we assume a generic repulsive interaction with V k−i possibly possessing an elastic long-range tail of the form ≈ |k − i| −2 . Here n i,↑(↓) is 1 if there is a step up (down) at site i. Similarly, we assume V ⊥ to be given by The sign of the terms in V ⊥ , particularly at short-range, depends on microscopic details and need not be specified at this stage.
If we assume periodic boundary conditions in the y direction, i.e., |S(N_y + 1)⟩ = |S(1)⟩, the partition function of this system is Z = Tr( T^{N_y} ).
III. THE QUANTUM MODEL
It is well known that, in the strong anisotropy (or time-continuum) limit, many D-dimensional classical problems can be mapped onto (D − 1)-dimensional quantum problems. [18] This relationship is established by means of the path-integral formalism. In particular, the up and down steps are mathematically equivalent to the world lines of spin-up and spin-down hard-core bosons, and the preferential direction in which the steps run plays the role of time in the quantum problem. The hard-core condition is imposed in order to implement the non-crossing condition for steps of the same type, a physically justified restriction in view of the large energetic cost of double-step regions. The non-crossing constraint for steps of the same type would be automatically satisfied by the Pauli principle if we were to deal with spin-1/2 fermions. (See below for more comments on the problem of quantum statistics.) We consider the quantum Hamiltonian of Eq. (7), with a_{i,σ} representing the destruction operator for a spin-σ hard-core boson and with N̂ = N̂_↑ + N̂_↓ the total number of particles. We will work in the subspace N̂_↑ = N̂_↓, as appropriate for a low-index surface. (For a vicinal surface of angle φ, we would have ⟨N̂_↓ − N̂_↑⟩ = L tan φ.) Within a path-integral approach, it can be shown [18] that the ground state properties of this quantum Hamiltonian correspond to the finite-temperature properties of the classical step model, whose transfer matrix is given by Eq. (3), in the large anisotropy limit. Specifically, the classical parameters turn out to be given by Eq. (8), where ǫ is the Trotter discretization time for the quantum path integral. The mapping is asymptotically correct only in the limit ǫ → 0. This (a) implies, clearly, a strong anisotropy limit for the classical problem, and (b) does not allow a straightforward identification of a classical low- or high-temperature limit. Indeed, if all the parameters of the quantum problem are of order one, taking β → 0 or β → ∞ makes Eq. (8) incompatible with the requirement that the left-hand side be a small quantity (of order ǫ). In other words, the mapping is justified so long as

kinetic couplings = (δ_K, δ_{T_0}, ···) ≫ T ≫ (δ_S, Ṽ, ···) = potential couplings ,

and nothing can be said, in principle, about the infinite-temperature limit. This should always be kept in mind when considering the infinite-temperature limit from the quantum-model point of view (see, e.g., Sec. VII B).
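Eq. (7) itself is not reproduced in this copy of the text. Assembling the ingredients listed above (kink hopping t∥, terrace creation/annihilation amplitudes t⊥_0 and t⊥_1, step exchange t_ex, chemical potential µ, and the interactions V), a plausible form, whose sign conventions may differ from the original, is:

H = −t∥ Σ_{i,σ} ( a†_{i+1,σ} a_{i,σ} + H.c. )
  − t⊥_0 Σ_i ( a†_{i,↑} a†_{i,↓} + H.c. )
  − t⊥_1 Σ_i ( a†_{i,↑} a†_{i+1,↓} + a†_{i,↓} a†_{i+1,↑} + H.c. )
  − t_ex Σ_i ( a†_{i+1,↑} a_{i,↑} a†_{i,↓} a_{i+1,↓} + H.c. )
  − µ N̂ + V⊥_0 Σ_i n_{i,↑} n_{i,↓} + Σ_{i<j} Σ_{σσ′} V^{σσ′}_{j−i} n_{i,σ} n_{j,σ′} ,

where the t⊥ terms create and annihilate up-down pairs of steps (finite terraces) and the quartic t_ex term exchanges an up and a down step on neighboring sites (step crossing).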
It is also worth stressing that the (hard-core) boson statistics of the a operators in the Hamiltonian is crucial to the nature of the phases and transition lines in the phase diagram. Indeed, unlike the other terms in the Hamiltonian, the terrace creation terms cannot be translated into simple (i.e., local) fermionic BCS-like terms by a Jordan-Wigner transformation. In such instances, the correct statistics to use is undoubtedly the bosonic one, as fermionic minus signs do not appear in a classical statistical mechanics problem. This point seems not to be always appreciated in the literature. [28]

We now make contact with previous work in the context of surface physics. The model in Eq. (7), with t_ex = 0 and t⊥_{0,1} = 0, has been considered, in its continuum version, by Balents and Kardar. [16] (See also Refs. [12] and [15] for related work.) In the absence of t⊥ terms, particles can be taken to be fermions. The emphasis of Ref. [16] was on (p × 1) reconstructed surfaces, particularly with p = 2. The effect of finite terraces, i.e., closed loops of steps, was argued to be irrelevant for p > 2, marginal for p = 2, and strongly relevant for p = 1. [16] The unreconstructed (p = 1) case, however, was not pursued at all. As just argued, the p = 1 case cannot be tackled in terms of fermions. The effect of finite terraces on an unreconstructed surface (the t⊥ terms in Eq. (7)) is one of the points addressed in detail in the present work. Moreover, we show that restricting the analysis to a simple Hubbard-type on-site interaction does not lead to the full richness of the phase diagram; nearest-neighbor interactions are essential in order to stabilize, for instance, a DOF phase.
IV. MAPPING TO SPIN CHAINS

A. The spin-1 limit

For a special choice of the parameters, the model in Eq. (7) reduces to a well studied problem. Consider the case in which both V∥ and V⊥ are truncated to nearest neighbors, V^{∥,⊥}_{j−i} = V^{∥,⊥}_1 δ_{j,i+1}, in the limit of infinite on-site repulsion of opposite steps, V⊥_0 → ∞. The limit V⊥_0 → ∞ enforces, in the absence of t_ex, a non-crossing condition for opposite steps as well, and allows only three states per site, which we can map onto a spin-1 variable as follows: an empty site corresponds to S^z = 0, a step up to S^z = +1, and a step down to S^z = −1. It is then straightforward to show that all possible matrix elements of H in Eq. (7) coincide exactly with those of the spin-1 Heisenberg chain of Eq. (10) if the parameter choice of Eq. (11) is made. A nonzero t_ex term, when present, translates into a quartic spin term and will be shown to be relevant in stabilizing the rough phase for finite repulsive V∥_1. The phase diagram for this special case, in the surface-physics-relevant region −µ = D > 0, can be directly borrowed from the literature. [3,21] The Heisenberg spin-1 chain was also obtained by another route, namely by the quantum mapping of RSOS models, by den Nijs and Rommelse, who gave a very detailed discussion of the surface-physics interpretation of the different phases. [3] The region in which a DOF phase is stabilized is found for J_z > 0, i.e., it corresponds to repulsive V∥_1 and attractive V⊥_1 (see Eq. (11)). The latter condition is, however, not crucial, as we shall see later on. The flatness of the DOF phase is directly related to the Haldane gap in the spin-1 chain. [3] If the interactions are much bigger than the cost of a unit of step (J_z ≫ D, see line (a) in Fig. 3), at low temperatures (i.e., for large J_z/J_xy) the system reconstructs into an ordered sequence of up-down steps (the Néel phase of the spin-1 chain). By increasing temperature (i.e., lowering J_z/J_xy), the system undergoes an Ising transition to a DOF phase, in which the positional order of the up-down sequence of steps is lost. [3] If the cost of the unit step is larger than the interactions (J_z ≪ D, see line (b) in Fig. 3), the low-temperature phase is flat, and the (preroughening) transition to the DOF phase has non-universal exponents. [3] If we impose the condition V⊥_0 → ∞ without assuming Eq. (11), what we are considering is still a three-state-per-site problem, but the resulting Hamiltonian does not have the simple bilinear form (10) in terms of the spin-1 operators. We will refer to this general case as a spin-1 chain; the specific case of Eq. (10) will be referred to as the Heisenberg spin-1 chain.
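Eqs. (10) and (11) are likewise not reproduced here. For orientation, the anisotropic spin-1 chain usually written in this context, with the identification D = −µ stated in the text, has the form

H_{S=1} = Σ_i [ J_xy ( S^x_i S^x_{i+1} + S^y_i S^y_{i+1} ) + J_z S^z_i S^z_{i+1} + D (S^z_i)² ] ,   D = −µ ;

the precise expressions of J_xy and J_z in terms of the step parameters (Eq. (11)) are not recoverable from this copy, beyond the facts implied by the discussion: the Heisenberg limit requires V⊥_1 = −V∥_1 (cf. Sec. VII C), with J_z ∝ V∥_1, while J_xy is set by the kink and terrace amplitudes.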
B. The general case: Mapping to two spin-1/2 chains

In order to study the model in Eq. (7) for more general parameter values, in particular for V⊥_0 ≠ ∞, it is convenient to abandon the spin-1 representation and map onto a problem of two coupled spin-1/2 chains, where the total number of states per rung is four, instead of three. Introducing the usual spin-1/2 representation for each of the two species of hard-core bosons, and performing a π-rotation of the S_{i,2} spins around the x-axis, which amounts to a particle-hole transformation for the down-bosons, S^±_{i,2} → S^∓_{i,2} and S^z_{i,2} → −S^z_{i,2}, one can rewrite the Hamiltonian (7) as the model of two spin-1/2 chains (α = 1, 2 denoting the chain) with opposite magnetic fields given in Eq. (13), where, for simplicity, we have considered only interaction terms up to nearest neighbors and we have omitted constant terms; by Ṽ(q) we denote the Fourier transforms of the potentials. The magnetic field h is linearly related to the chemical potential µ. Notice that after the spin rotation, performed to get a standard S^+_1 S^−_2 coupling between the chains starting from the a†_↑ a†_↓ boson term, the signs of the V⊥ terms are all changed, since S^z_{i,2} → −S^z_{i,2}. For the same reason, the chemical potential terms transform into opposite magnetic fields for the two chains. The condition N̂_↑ = N̂_↓ reads, after the canonical transformation, as zero total magnetization for the spins, Σ_i ⟨S^z_{i,1} + S^z_{i,2}⟩ = 0. [22]

V. LOW-ENERGY HAMILTONIAN FROM FINITE-SIZE DIAGONALIZATION

With a well defined quantum spin chain problem at hand, it is standard practice to study its phase diagram by a combination of field-theoretical arguments and finite-size exact-diagonalization data.
At weak coupling, a standard field-theory approach to one-dimensional quantum systems consists in applying bosonization techniques. This was done by Strong and Millis for the case of two coupled spin-1/2 Heisenberg chains, i.e., a special case of Eq. (13). The procedure can be easily extended to our case. Introducing symmetric (S) and antisymmetric (A) combinations of the bosonic phase fields, [23,24] which represent the bosonic sound-like excitations of the system in the gapless phase, the low-energy Hamiltonian contains free (Luttinger) parts for the two sectors plus cosine terms, with a short-distance cut-off a and coupling-dependent (but cut-off-independent) constants A, B, C, and D. Let us consider the A-sector first. The A-sector can be gapless only if 1/√K_A > 2 and, simultaneously, 2√K_A > 2. This is, of course, impossible. Thus, the A-sector flows to strong coupling and develops a gap. [23] The Hamiltonian for the S-sector, renormalized by the A-sector, is then of sine-Gordon form: a free Luttinger part plus a cosine term. The cosine term is relevant and opens up a gap when K_S < 1. Thus the system undergoes a KT transition when K_S → 1. As discussed in the following sections, this is associated with a roughening transition.
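The explicit low-energy Hamiltonian is elided in this copy. A generic form consistent with the discussion (the field normalizations and the arguments of the cosines are assumptions, not recovered from the original) is

H_eff = Σ_{ν=S,A} (v_ν/2) ∫ dx [ K_ν (∂_x Θ_ν)² + K_ν^{−1} (∂_x Φ_ν)² ] + (cosine terms in Φ_A, Θ_A, and Φ_S, with amplitudes A, B, C, D) ,

where v_ν and K_ν are the sound velocity and Luttinger exponent of sector ν. The two incompatible conditions quoted above are the requirements that the competing Φ_A and Θ_A cosines both be irrelevant, which cannot hold simultaneously, while the surviving Φ_S cosine is the one whose relevance for K_S < 1 drives the KT (roughening) transition.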
If the symmetric sector flows to the free Klein-Gordon Hamiltonian (i.e., the Luttinger model) in some range of parameters, the low-energy spectrum of the two chains, for L → ∞, will have the form of a spinless Luttinger model describing the symmetric excitations, Eq. (14), where N_S and J_S are the symmetric-sector total number (N) and current (J) operators, v_S is the renormalized sound velocity, v_N = v_S/K_S, and v_J = K_S v_S. [24] In order to compute v_N, we note that the simplest charge excitation not involving the current part consists in adding two particles to the system; v_N can thus be extracted from the curvature of the ground-state energy as a function of particle number. To compute v_J, notice that if a magnetic flux Φ is threaded through the ring, the current part of the energy spectrum is shifted accordingly. The sound velocity v_S, finally, is obtained from E(k = 2π/L) − E(0), where E(0) is the ground-state energy and E(k = 2π/L) is the energy of the lowest excited state of momentum k = 2π/L. As a consequence, K_S can be equivalently computed from the finite-size extrapolation of v_J(L)/v_S(L), of v_S(L)/v_N(L), or of v_J(L)/v_N(L) (the last ratio giving K_S²). If the finite-size data are compatible with a Luttinger-liquid picture, i.e., with a spectrum of the form (14), then these three extrapolations should converge, as L → ∞, to a single value.
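The elided finite-size expressions have the standard Luttinger-liquid form; up to normalization conventions that may differ from the original, they read

v_N(L) = (L/4π) [ E_0(N+2) + E_0(N−2) − 2 E_0(N) ] ,

v_S(L) = (L/2π) [ E(k = 2π/L) − E(0) ] ,

E(Φ) − E(0) ≈ (π v_J / L) (Φ/Φ_0)² , which defines v_J(L) from the flux dependence,

so that K_S = v_S/v_N = v_J/v_S = √(v_J/v_N).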
VI. ORDER PARAMETERS
We now define the order parameters and correlation functions we have to consider in order to study the phase diagram of our model. We will basically deal with four correlation functions, whose behavior in the different phases is summarized in Tab. 1.
• The height-height correlation, defined by G_h(r) = ⟨(h_r − h_0)²⟩, diverges logarithmically in the rough phase, G_h(r) ≈ (K/π²) ln r, with K ≥ 1, while it remains bounded in the flat, Néel, and DOF phases. [25] At the roughening transition, K takes the universal value of 1. [1,15] This gives a simple criterion for determining whether a phase is rough or not. In fact, the coefficient K coincides with the Luttinger exponent K_S of the symmetric sector, [16] which can be extracted by finite-size scaling of exact-diagonalization data (cfr. Sec. V).
• The string correlation function, defined by [3] G_s(r) = ⟨ S^z_0 exp( iπ Σ_{0<i<r} S^z_i ) S^z_r ⟩. (We introduced the notation S^z_i = n_{i,↑} − n_{i,↓}.) The phase factor contributes a plus (minus) sign if there is an even (odd) number of steps between site 0 and site r. In the DOF and in the Néel phases, a step up (down) is preferentially followed by a step down (up). In these configurations, G_s(r) gets a contribution equal to 1 every time sites 0 and r are both occupied by a step. Thus, in the DOF phase and in the Néel phase, G_s(r) decays exponentially to the square of the mean density of steps. [15]

• The staggered magnetization, defined by N² = lim_{r→∞} (−1)^r ⟨S^z_0 S^z_r⟩. A Néel phase is signaled by a non-zero staggered magnetization N, while N is zero in the rough, DOF, and flat phases.
• The flatness order parameter, defined by [3] F = lim_{r→∞} ⟨ exp( iπ Σ_{i=0}^{r} S^z_i ) ⟩. F has a non-zero value only in the ordered flat phase (in the DOF phase the exponential fluctuates between 1 and −1 as r is increased).
VII. THE OVERALL PHASE DIAGRAM
Our model, even if the interactions are truncated to first neighbors, contains many parameters. Rather than trying to describe the phase diagram exhaustively, we will focus our discussion on a few questions that we consider quite relevant for the surface-physics interpretation of our model. This will lead us to consider in detail some special planes cut through the phase diagram, and will also give us an idea of its global structure.
Let us consider, once again, the Heisenberg spin-1 phase diagram in Fig. 3. From the surface-physics point of view, it presents some unpleasant features: since the increasing-temperature curve for a given surface is a line through the origin (the origin corresponds to the infinite-T point; cfr. Eq. (8)), every "surface" with repulsive interaction between steps of the same kind (V∥_1 > 0) has a preroughening transition at finite temperature, and no rough phase at finite T. On the other hand, if V∥_1 is attractive, there is only roughening.
In relation to these problems, we will discuss the following main questions:

A. Is the attractive V⊥_1 term between opposite-sign steps essential in order to stabilize a DOF phase?

B. Is there a choice of the parameters for which our model can describe a surface with a finite roughening temperature?

C. How does the presence or absence of a preroughening transition depend on the relative strength of the step-step interactions compared to the cost per unit length of a step?

D. What is the role of the on-site repulsion between opposite steps, V⊥_0?
In the following we address these points directly.
A. The role of attraction between opposite steps: Spin-1 chain with V ⊥ 1 = 0.
In order to explore the different roles of the two interactions V⊥_1 and V∥_1, we have first studied the effect of a repulsive V∥_1, keeping V⊥_1 = 0. For the time being, we still work with the spin-1 condition, i.e., we impose an infinite on-site repulsion of opposite steps (V⊥_0 = ∞). For this choice of parameters, we do not find any point with V∥_1 > 0 at which the finite-size data might indicate a vanishing gap. This is compatible with the results of den Nijs and Rommelse about the location of the KT transition in the Heisenberg spin-1 phase diagram. [3] The system undergoes a roughening transition only at infinite temperature.
In Fig. 4 we draw a qualitative phase diagram for values of the parameters µ and V∥ which are relevant for surface physics (i.e., positive energy cost for a step, and repulsive interaction between steps of the same kind). It is quite remarkable how the DOF phase survives the switching off of the attraction between steps of opposite kind. As a matter of fact, taking V⊥_1 = 0 leads to the disappearance of the reconstructed (Néel) phase from the physically interesting region of the phase diagram and, therefore, to an even larger DOF phase. (We will further discuss the roles of V⊥_1 and V∥_1 in stabilizing the DOF phase later in this Section.)
B. Infinite or finite roughening temperature? The role of t_ex.
In order to discuss the point concerning the roughening temperature, we observe that, as T → ∞, the kinetic terms tend to be the only relevant pieces of the Hamiltonian; see Eq. (8). Thus, we now consider the model in the absence of potential terms (V) and for zero chemical potential µ. For the time being, we also take t_ex = 0; the crucial role of the t_ex term will be discussed afterwards. In this case, the Hamiltonian reduces to that of two coupled XY chains. Notice that exchanging t∥ with t⊥_1 is simply equivalent to a relabeling of the sites, swapping the two chains on every other rung. This is illustrated pictorially in Fig. 5. Thus, a model with t⊥_1/t∥ = τ is completely equivalent to one with t⊥_1/t∥ = 1/τ. In Fig. 6(a) we plot the finite-size gaps as a function of the system size L for t⊥_0 = 0 and different values of t⊥_1/t∥ between 0 and 1. Given the negligible curvature of the straight-line fits, the data suggest that the gap extrapolates to 0 in all cases. In Fig. 7 we plot the finite-size values of the Luttinger exponent K_S, determined as explained in Sec. V. The data for K_S confirm the scenario of a gapless (i.e., rough) system. Notice that K_S seems to extrapolate to values larger than 1, indicating a rough phase that should survive the switching on of a suitably small repulsive V∥_1.
We now address this point in more detail. Consider the case t⊥_1 = t∥ (with t⊥_0 = 0), for which K_S seems to extrapolate to the largest value. Denoting by |s s′⟩, with s, s′ = ±1, the four possible configurations at each site, we can define the following four states: |↑⟩ = |+ +⟩, |↓⟩ = |− −⟩, and |0_±⟩ = (|+ −⟩ ± |− +⟩)/√2. It is now straightforward to verify that, at each site, the state |0_−⟩ is decoupled from the three remaining ones. The Hamiltonian H can then be considered a spin-1 Hamiltonian acting on the subspace spanned by the three states |↑⟩, |↓⟩, and |0_+⟩. As can be checked by explicitly calculating all matrix elements, H, when restricted to this subspace, coincides with the Heisenberg spin-1 Hamiltonian at J_z = 0 and J_xy = 2t∥. As argued by den Nijs and Rommelse, [3] the location of the KT transition in the Heisenberg spin-1 phase diagram is, very likely, exactly at J_z = 0. Thus, two coupled XY chains with t⊥_1 = t∥ and t⊥_0 = 0 have K_S actually equal to 1, and what we see in Fig. 7 is, very likely, only a finite-size effect.
The effect of turning on t⊥_0, while keeping t⊥_1 = 0, leads to a completely different picture. In this case, for t⊥_0 = t∥ the system is gapped, as suggested by the finite-size data of Fig. 6(b). The physical reason for the different behavior of the t⊥_1 and t⊥_0 terms can be understood by considering the limits of large values of these parameters. For t⊥_0 → ∞, the ground state tends to have |0_+⟩ at each site, with a large gap (of order t⊥_0) to the other excited states. For t⊥_1 → ∞ (at t⊥_0 = 0), on the other hand, the system reduces, by the previously described duality property, to two uncoupled XY chains, and must therefore be gapless.
The previous considerations lead us to conjecture that, for any choice of t⊥_1 (as long as t⊥_0 = 0), two XY chains are gapless and have K_S = 1. On the contrary, turning on t⊥_0, at t⊥_1 = 0, immediately opens up a gap. These conclusions have important consequences for the stability of the rough phase. Since K_S attains, at best, the marginal value of 1, turning on any positive V∥_1 immediately opens up a gap, and the rough phase is confined to infinite temperature.
We will now demonstrate that, if we allow for the possibility of step-crossing events, t_ex > 0, the gapless phase survives the turning on of a positive V∥_1, and every "surface" has a rough phase at high enough T.
In order to show this, we add to the Heisenberg spin-1 Hamiltonian a t_ex term, see Eqs. (7) and (12), with t_ex = t∥ = J_xy. Fig. 8 shows the phase diagram for this case. Qualitatively, it is very similar to the Heisenberg spin-1 case (see Fig. 3), except for small values of the potentials, where the t_ex term changes the structure of the phase diagram. In fact, for µ = 0 we observe a gapless phase extending to positive values of V∥_1, up to V∥_1 ≈ 0.4: this is demonstrated in Fig. 9, where we plot the Luttinger exponent K_S along the line µ = 0. This finding is in accord with bosonization: the t_ex term, unlike the t⊥ terms, increases K_S and leads to a stabilization of the rough phase. Indeed, for two XY chains it is easy to show that, to lowest order in t_ex, K_S increases linearly with t_ex. Another remarkable feature of the phase diagram in Fig. 8 is that, at variance with the ordinary Heisenberg case (t_ex = 0), the temperature line for a given "surface" crosses the DOF region only if the cost of a step, δ_S, is sufficiently small compared to the interaction between steps, V∥_1. We have illustrated this by sketching in Fig. 8 temperature lines for three different situations. In the case labeled A, the energy cost of a step is high with respect to the interaction energy between steps, and there is no preroughening. In the case labeled B, δ_S/V∥_1 is smaller, and a DOF phase is present at intermediate temperatures. Finally, in case C, the interaction between steps is the dominant energy, and the low-temperature phase is 2 × 1 reconstructed.
C. Presence or absence of preroughening: Role of interactions versus step line tension.
We now want to discuss in some detail what happens if we relax the condition V⊥1 = −V_1, without going to the extreme case V⊥1 = 0 discussed in Sec. VII A. We illustrate this by choosing V⊥1 = −V_1/10, while keeping t_ex = t = 1 and V⊥0 = ∞ (infinite on-site repulsion of opposite steps). This choice of parameters describes a class of surfaces in which the attraction between steps of opposite kind is much smaller than the repulsion between steps of the same kind.
In Fig. 10 we plot the phase diagram for this choice of parameters. The system is Néel ordered for very large values of V_1; the value of the ratio V_1/|V⊥1| determines the location of the DOF-Néel phase boundary (see Fig. 8 and Fig. 4; recall that, for V⊥1 = 0, the Néel phase is absent for physical values of µ). The most relevant comment on the phase diagram in Fig. 10 regards the conditions under which the temperature trajectories of an actual surface model cross the preroughening line. It is clear, in fact, that depending on the ratio between the cost of a step (per unit length) δ_S and, say, the interaction energy between steps of the same kind V_1, a surface can have: i) only roughening (case A), or ii) first preroughening and then roughening (case B). Now, with V⊥1 = −V_1/10, the "critical ratio" (δ_S/V_1)_crit, below which preroughening is possible, is of the order of 1/10, much smaller than in the V⊥1 = −V_1 case (where (δ_S/V_1)_crit ≈ 1). Given the fact that δ_S is typically the largest "diagonal" energy, this implies that a physical temperature trajectory will most likely lie in the region where only roughening occurs. If, and how, long-range interactions might change this picture is an interesting and open problem. Finally, we want to discuss briefly what happens to the DOF phase if we allow double occupation of a site, i.e., if we do not take the limit V⊥0 → ∞. To demonstrate that the restriction to V⊥0 = ∞ is not crucial, we start from the case in which, at V⊥0 = ∞, the system is a Heisenberg spin-1 chain at the isotropic point, corresponding to a DOF phase. [3] In Fig. 11 we plot the finite-size values of the flatness order parameter F (open symbols) and of the DOF correlation function G_s(L/2) (full symbols) for decreasing values of V⊥0. The data suggest that the system remains DOF all the way down to V⊥0 ≈ 0. We have verified that a similar scenario is found if we turn on the t_ex term or a small |µ|. Thus, our finite-size data suggest that the spin-1 condition (V⊥0 = ∞) is not essential in order to stabilize a DOF phase.
VIII. STEP-STEP CORRELATIONS
Correlation functions involving steps can be calculated numerically, for a given finite size, at any point in the phase diagram of our model. We will discuss here two correlation functions, i.e., step-step correlations and terrace width distributions. Let n S be the average density of steps of a single species (up or down). In general n S is always different from zero, even in the flat phase, since we do not discriminate between steps that traverse the entire sample and steps that form loops (i.e., finite terraces).
Step-step correlations are defined as follows:
$$N_{\uparrow\sigma}(r) \,=\, \frac{1}{n_S^2}\,\big\langle\, n_{0,\uparrow}\; n_{r,\sigma} \,\big\rangle\,, \qquad \sigma=\uparrow,\downarrow\,.$$
If translational symmetry is not broken, we must have, at large distances, N_{↑σ}(r → ∞) → 1. The distribution of terrace sizes (along the x-direction only!) is the probability of having two steps a distance r apart without any other step in between. There are two different kinds of terraces we can look at: those delimited by two steps of the same type, and those delimited by two steps of different type. Thus, we define
$$P_{\uparrow\sigma}(r) \,\propto\, \Big\langle\, n_{0,\uparrow}\,\Big[\prod_{0<j<r}\big(1-n_{j}\big)\Big]\, n_{r,\sigma} \,\Big\rangle\,,$$
where, again, σ = ↑, ↓. The string operator in square brackets enforces the absence of additional steps between 0 and r. Fig. 12 illustrates the behavior of N_{↑↑} and N_{↑↓} at three different points in the phase diagram of the Heisenberg spin-1 chain: a rough case (J_z = −0.5, µ = 0, triangles), a DOF case (J_z = 1, µ = 0, squares), and a flat one (J_z = 1, µ = −2, pentagons). The flat-case results are very simple: both N_{↑↑} and N_{↑↓} converge exponentially fast (with a very short correlation length) to the large-distance limit of 1. In the rough phase, instead, we have verified that the approach to 1 shows a power-law tail. This is easy to prove. Rewrite first N_{↑σ} in terms of density and "spin" correlations:
$$N_{\uparrow\sigma}(r) \,=\, \frac{1}{4 n_S^2}\,\big\langle\, (n_0 + S^z_0)\,(n_r \pm S^z_r) \,\big\rangle\,,$$
where the + and − signs apply, respectively, to σ = ↑ and σ = ↓, n_i = n_{i,↑} + n_{i,↓}, and S^z_i = n_{i,↑} − n_{i,↓}. Within a bosonization approach, [23] the operators n_i and S^z_i involve (after a particle-hole transformation for the ↓-bosons) only the antisymmetric and symmetric sectors, respectively. The antisymmetric sector is always gapped (see discussion in Sec. V and Ref. [23]), so that density-density correlations are exponential. In the rough phase, however, the symmetric sector is gapless, and the S^z-S^z correlations have a uniform power-law tail of the form −K_S/(4π²r²), which is precisely the term responsible for the logarithm in the height-height correlation function G(r) = ⟨(h_r − h_0)²⟩. [25]
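Numerically, averages of this kind are straightforward to estimate once step occupations are available. The sketch below is our own illustration (the helper step_step_corr is hypothetical, and we feed it toy uncorrelated snapshots, for which N_{↑σ}(r) ≈ 1; in the paper the averages are evaluated on exact-diagonalization ground states):

```python
# Our own sketch: estimating N_{up,sigma}(r) from 0/1 step-occupation snapshots.
import numpy as np

def step_step_corr(n_up, n_dn, sigma="up"):
    """N(r) = <n_{0,up} n_{r,sigma}> / n_S^2, averaged over snapshots and over
    the reference site (translational invariance and periodicity assumed)."""
    n_sig = n_up if sigma == "up" else n_dn
    n_S = n_up.mean()                       # density of a single step species
    L = n_up.shape[1]
    return np.array([np.mean(n_up * np.roll(n_sig, -r, axis=1))
                     for r in range(L)]) / n_S**2

rng = np.random.default_rng(0)
snaps_up = (rng.random((2000, 64)) < 0.15).astype(float)   # toy snapshots
snaps_dn = (rng.random((2000, 64)) < 0.15).astype(float)
print(step_step_corr(snaps_up, snaps_dn, "dn")[1:6])  # ~1 for uncorrelated data
```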
The DOF case results, finally, show a different behavior, with a sizeable oscillating component of the correlations. This behavior, however, reflects only a short-range effect, caused by the neighboring reconstructed (Néel) phase: the oscillating part has to decrease to zero at large r, since no breaking of translational symmetry occurs in the DOF phase. [2] We finally discuss briefly the behavior of the distributions of terrace sizes (for simplicity, once again, in the Heisenberg spin-1 case). While it is in principle important to know the probability for the surface to be flat over a distance r, this quantity has never been calculated so far.
Let us consider, first, the behavior of P_{↑↓} in the rough phase. Fig. 13(a) is a plot of the logarithm of P_{↑↓} versus the scaled distance 2n_S r, for several points taken inside the rough phase of the Heisenberg spin-1 phase diagram. We observe that the general behavior of P(r) is exponential in the size of the terrace r, P(r) ≈ e^{−r/λ}, and that a good collapse is obtained for all data if the distance r is scaled to the average separation between two steps, 1/(2n_S), i.e., λ ∝ 1/(2n_S). The scattering of the data for the largest r's is due to finite-size effects. The behavior of P_{↑↑}(r) is found to be qualitatively similar.
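The exponential fit and the data collapse are easy to reproduce on synthetic data; the following sketch (our own, with an assumed form P(r) ∝ e^{−2 n_S r} plus small noise) extracts λ from log P(r) and checks that 2 n_S λ stays of order one:

```python
# Our own sketch: decay length from P(r) ~ exp(-r/lambda), collapse check.
import numpy as np

def fit_decay_length(r, P):
    """Least-squares fit of log P(r) = a - r/lambda; returns lambda."""
    mask = P > 0
    slope, _ = np.polyfit(r[mask], np.log(P[mask]), 1)
    return -1.0 / slope

rng = np.random.default_rng(1)
for n_S in (0.05, 0.10, 0.20):
    r = np.arange(1, 40)
    P = np.exp(-2 * n_S * r) * (1 + 0.02 * rng.standard_normal(r.size))  # assumed form
    lam = fit_decay_length(r, P)
    print(f"n_S = {n_S:.2f}   2 n_S lambda = {2 * n_S * lam:.3f}")  # ~1 if collapse holds
```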
In the DOF phase we find that the terrace-size distribution is again exponential in the size, but now λ does not scale with the density of steps, as it did in the rough phase. Fig. 13(b) illustrates the behavior of P(r) at a DOF point, corresponding to the isotropic Heisenberg point of the spin-1 chain. P(r) at a rough point is also reported for comparison. We observe that, as anticipated, the behavior of P_{↑σ} is, once again, exponential in r (i.e., the "DOF checkerboard" has no typical length!). Superimposed on the leading exponential, the DOF results show a strong oscillating short-range component, which is again due to the neighboring reconstructed (Néel) phase. Two more features are worth noticing. First, compared to the rough case, P_{↑↓}(r) is larger in the DOF case for r = 1, and then substantially smaller for larger values of r (decreasing with a larger exponent). Second, in the DOF case P_{↑↑}(r) is one order of magnitude smaller than P_{↑↓}(r), while the difference is much smaller in the rough case. These features are reasonable in view of the diluted antiferromagnetic ordering of steps typical of the DOF phase.
Experimentally, terrace size distributions could in the future be extracted, e.g., from STM data. [26]
IX. SUMMARY AND CONCLUSIONS
In this paper we have presented and discussed a statistical mechanics model for studying the possible phase transitions of an ideal, unreconstructed surface. The elementary objects upon which the model is based are the natural extended defects of an unreconstructed surface, i.e., steps and terraces. This starting point is, in our opinion, physically more transparent than the usual microscopic RSOS-model description. Our model allows, in principle, the description of a real surface and, in perspective, one could test it with realistic step-step interactions.
We have tackled our problem of interacting steps by mapping it, in a well known way, onto a one dimensional quantum problem of interacting hard-core bosons. Although this mapping is exact only in the strong anisotropy limit, it can provide very useful information about the phases and the nature of the transitions also in more general instances. Moreover, some realistic cases, like (110) surfaces of fcc metals, are actually quite anisotropic.
The quantum Hamiltonian, see Eq. (7), contains standard terms, like nearest-neighbor hopping (describing kinks on the steps), potential terms (describing interactions between steps), and a chemical potential (cost per unit length of a step), as well as terms describing i) terrace creation/annihilation (through BCS-like number-non-conserving terms), and ii) opposite-step crossing events. The latter two terms are crucial, in many ways.
Terrace terms are important to describe correctly the universality classes of the relevant transitions. This is known in the literature, [27,16,15] but had never been explored in detail in the present context. Moreover, in our case, the terrace terms also force us to work with hard-core bosons, as the standard Jordan-Wigner transformation to fermions does not lead to a simple local fermionic Hamiltonian. This point is sometimes overlooked in the literature. [28] The term describing the crossing of opposite steps is important in order to stabilize a gapless (i.e., rough) phase for finite repulsive interactions between steps of the same kind. This, in turn, leads to a finite roughening temperature for the classical model.
Finite-size exact diagonalizations and bosonization techniques have been used to unveil the richness of the phase diagram. In the limit of V ⊥ 0 → ∞ and for a particular choice of parameters (the potentials, for instance, are truncated to first neighbors and set to V 1 = −V ⊥ 1 ), the model maps exactly onto the Heisenberg spin-1 chain Hamiltonian. The latter was also obtained, by den Nijs and Rommelse as the quantum mapping of RSOS models; [3] it presents a DOF phase for V 1 > 0, but does not describe, in that case, a surface with a finite temperature roughening. (On the other hand, if V 1 is attractive there is only roughening).
Taking the Heisenberg chain as a starting point, we have then explored the phase diagram for other choices of parameters, obtaining results that we believe to be relevant for the surface physics interpretation of our model. Summarizing, we have seen that: 1. The Heisenberg spin-1 restriction V_1 = −V⊥1 is not crucial in stabilizing the DOF phase. In particular, we observe a DOF phase even for V⊥1 = 0 (see Sec. VII A). Moreover, a DOF phase is present not only for V⊥0 = ∞ (spin-1 case) but also when V⊥0 is finite, as long as it is positive. 2. If we add to the Heisenberg spin-1 Hamiltonian a t_ex term, we observe a gapless phase extending to positive values of V_1. Every surface has a rough phase for high enough T. This is true also for other choices of the potentials (see Sec. VII C). Moreover, if we do not include the t_ex term in the Hamiltonian, the rough phase does not survive when one turns on a V_1 > 0 (see Sec. VII B). Thus, the opposite-step crossing term is crucial in order to obtain a model describing, at least at a coarse-grained level, a physical surface.
3. The relative values of the interactions and of the cost per unit length of a step decide whether a surface has a stable DOF phase for a certain range of T . The temperature trajectory crosses the DOF region only if the cost of a step, δ S , is sufficiently small as compared to V 1 (see Fig. 8). Given the fact that δ S is typically the largest "diagonal" energy, this implies that a physical temperature trajectory will often be in a region where only roughening occurs.
In conclusion, we have found that: (i) a model based on steps can describe preroughening (PR), as well as roughening; (ii) the steps must be treated as hard-core bosons rather than fermions; (iii) the qualitative role of step-step interactions in driving PR, known already from RSOS models, is recovered in this picture; (iv) correlation functions involving steps can be calculated in a quite straightforward way. The main one, never studied so far, which we have considered, is the terrace size distribution. Here we find simply an exponentially decreasing probability for increasing size. This result should be amenable to experimental testing, for example by STM; (v) In view of the additional simplicity of step models, it should be feasible, in the future, to study the role of long-ranged interactions, a problem without hope of solution within RSOS models.
ACKNOWLEDGMENTS -We thank M. Fabrizio, A. Parola, S. Sorella, and F.D.M. Haldane for many useful discussions. We acknowledge financial support from INFM, through Projects LOTUS and HTSC, from EU, through ERBCHRXCT940438, and from MURST, through COFIN97.
[25] If the uniform part of the spin-spin correlation function ⟨S^z_0 S^z_j⟩ behaves, for large j, like −K/(4π²j²), then G_h(r) diverges logarithmically as (2K/π²) ln(r).
[28] M. Kardar and R. Shankar, Phys. Rev. B 31, 1525 (1985), for instance, consider a problem of domain walls in the context of commensurate-to-incommensurate transitions of adsorbate layers, and set up a model including dislocation terms (exactly analogous to our t⊥1 term), in the absence of interactions; they solve the model through a Bogoliubov transformation, treating the particles as fermions rather than as hard-core bosons (as we believe one should), although the anticommutation of steps seems devoid of physical significance.
FIGURE CAPTIONS
FIG. 1. Scheme of a surface with up (↑) and down (↓) steps. The heights of the terraces are explicitly indicated. The other symbols refer to the Boltzmann weights considered: δK (cost of a kink), δT (cost of a terrace creation), δex (cost of a step crossing), V and V ⊥ (interactions between parallel and opposite steps); the black dots indicate where terraces are created or destroyed.
FIG. 2. Schematic representation of a kink, the beginning of a size-1 terrace, a size-0 terrace, and a step crossing between strip j and strip j + 1 and the relative energetic costs.
FIG. 3. Phase diagram for the spin-1 Heisenberg chain. [3] Here and in the following we consider only negative values of the chemical potential µ, which are relevant for the surface physics problem. Lines (a) and (b) are discussed in the text.
FIG. 7. Extrapolation of K_S for the case of Fig. 6(a). As argued in the text (see Sec. VII B), K_S should converge to 1 for L → ∞; the apparent extrapolation to values larger than 1 is very likely due to finite-size effects. | 2019-04-14T02:18:23.327Z | 1998-09-04T00:00:00.000 | {
"year": 1998,
"sha1": "d9ea2ac05c42cf531f0fb5d7cddf39b0c22cbf86",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9809091",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dabae35bcb0a06aa02eb9efa591fba5fa81bc273",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260463190 | pes2o/s2orc | v3-fos-license | Human Placenta and Markers of Heavy Metals Exposure: Esteban-Vasallo et al. Respond
We appreciate the interest of Pigatto et al. in our review (Esteban-Vasallo et al. 2012). We understand their concern regarding mercury amalgams; however, the purpose of our review was to summarize the available information on total mercury, cadmium, and lead levels in human placental tissue, obtained from studies that reported original quantitative data. Published evidence suggests a possible association between mercury released from mercury-containing dental amalgam fillings and levels of this metal in diverse fetal tissues (kidney, brain, and cord blood) (Drasch et al. 1994). In contrast, studies focusing on human placenta and amalgams are scarce and their results inconsistent. The only two studies included in our review that assessed a possible relationship between dental fillings and total mercury—a small study in Taiwan (46 women) (Hsu et al. 2007) and another in Jamaica (52 women) (Grant et al. 2010)—found no association. Only Ask et al. (2002) reported higher mercury levels in mothers with a higher number of fillings, but they studied inorganic mercury and not total mercury.
None of the studies mentioned by Pigatto et al. in their letter (Clarkson and Magos 2006; Gundacker and Hengstschlager 2012; Richardson et al. 2011) includes original data, although we did identify an additional reference from those articles that might provide more data on this issue, a symposium abstract by Ursinyova et al. (2006). In this abstract, the authors described a significant correlation between the number of amalgams and placental mercury levels in 409 women; however, these findings have not yet been published in a full report that would allow us to better evaluate the results. In addition, Wannag and Skjaerasen (1975) seemed to provide original information, but we were unable to find this paper for our review. In this context, we have to disagree with Pigatto et al.; in our opinion, the association between mercury exposure from dental amalgam fillings and levels of this metal in human placenta cannot yet be considered as well-established.
Human Placenta and Markers of Heavy Metals Exposure
http://dx.doi.org/10.1289/ehp.1206061 In their review, Esteban-Vasallo et al. (2012) discussed the use of human placenta to evaluate biomarkers of exposure to heavy metals. They correctly concluded that the use of placental tissue specimens to assess heavy metal exposure is actually underused. Surprisingly, they did not mention the well-documented relationship between mercury released from mercury-containing dental amalgam fillings and mercury disposition in placental tissues (Clarkson and Magos 2006; Gundacker and Hengstschläger 2012; Richardson et al. 2011).
Studies have suggested an association between mercury levels in placental tissues and the number of mercury dental amalgam fillings observed in women (Ask et al. 2002; Palkovicova et al. 2008; Richardson et al. 2011). Elevated placental mercury levels have been reported in dental workers who, throughout pregnancy, were exposed to mercury vapor (Hg^0) released during the preparation of mercury amalgam in dental offices (Guzzi and Pigatto 2007; Wannag and Skjaeråsen 1975). As noted by Drasch et al. (1994), the mother-to-fetus transfer of mercury (Hg^0) from amalgams has been reported in human autopsy samples, and elevated levels of total mercury have been observed in the brain, liver, and kidney of human fetuses; these levels have been linked to the number of maternal amalgam-restored surfaces.
Transplacental exposure to heavy metals may affect child growth and cause neurodevelopmental delays. Thus, further efforts should be made to measure and quantify maternal exposure to heavy metals in placenta to estimate environmental prenatal exposure.
In their article, Strak et al. (2012) connected real-world exposure to markers of acute lung function and inflammation. However, some points in the paper require further explanation. Strak et al. used fractional exhaled nitric oxide (FE_NO) as a marker of lung inflammation. Exhaled NO is produced throughout the respiratory tract and shows significant variability in source strength across the respiratory tract (Barnes et al. 2010; Kharitonov and Barnes 2001). Factors such as particle size, hygroscopicity, composition, and concentration; lung function parameters; and environmental temperature and humidity (Gangamma 2006, 2009), which vary across experimental locations and between participants, modify particle deposition sites in the lung. These changes in the deposition site may influence the amount of NO exhaled. In their paper, Strak et al. (2012) did not discuss how these parameters influenced their conclusions. Thus, how the linear regression model they used accounts for these influences needs to be explained. Inflammation in the lung resulting from air pollution exposure involves various cell types, such as epithelial cells in the upper airways and macrophages and recruited neutrophils in the lower respiratory tract. A significant source of exhaled NO is epithelial cells in the upper airways, which are associated with eosinophilic inflammation (Barnes et al. 2010; Kharitonov and Barnes 2001). Many components of particulate matter (PM), such as endotoxin or bacteria, induce neutrophilic inflammation in the lung, but the effects of these components may not be reflected in the concentration of exhaled NO. Thus, FE_NO measurements as a marker of inflammation could easily be misinterpreted by attributing a particular part of the total inflammatory response within the lung to air pollution. Strak et al. (2012) did not discuss such possibilities.
In their article, Strak et al. (2012) did not provide sufficient details about the NIOX MINO monitor (Aerocrine 2010) they used to measure exhaled NO concentration. I assume that NO measurement involves flow measurement and diffusion of NO to a sensor. Temperature and humidity of exhaled air, or the body temperature of the subjects, likely interfere with these operations. Strak et al. did not describe any of these parameters or how they may interfere with NO measurement. Moreover, the absolute values of FE_NO observed during the experiments are not readily available. However, in the "Discussion," Strak et al. indicated that the observed variations between FE_NO measurements associated with particle number concentration (PNC) were most likely within the range of 5-15%. The technical specification of the instrument used for NO measurement has precision values of 5 ppb, or 10% for concentrations > 30 ppb (Aerocrine 2010). Strak et al. used the difference between two sets of readings (preexposure and postexposure) as the input data for regression calculations. Thus, the measurement error associated with the calculations could be much higher than that for a single set of measurements. Therefore, many of the observed differences in NO values were likely to fall within the error range of the instrument. Strak et al. should have discussed the propagation of error in the measurements or provided sufficient experimental data on the precision of the measurements. They should also have explained how the regression analysis is not biased by such instrument errors.
Strak et al. (2012) reported measurement of PNC with a condensation particle counter (CPC model 3007; TSI 2007), but their Table S2 did not report the accuracy or limit of detection of this instrument. CPC measurement depends on parameters such as ion concentration and particle composition, and because the measurements in the paper were from different environments, it is likely that these parameters varied significantly across the sites. Moreover, the CPC has a low sampling flow rate, and it is not clear whether this sampling rate is suitable for ambient measurement (aspiration efficiency in case of fluctuations in ambient wind velocity).
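The error-propagation point can be made concrete: for the difference of two independent readings, uncertainties add in quadrature, so even the best-case 5 ppb precision implies roughly a 7 ppb uncertainty on a pre/post difference. A minimal sketch (our own; feno_sigma is a hypothetical helper, and we read the quoted spec as "5 ppb or 10% of the reading, whichever is larger"):

```python
# Our own sketch of error propagation for a pre/post FE_NO difference.
import math

def feno_sigma(reading_ppb):
    # quoted NIOX MINO precision, read as "5 ppb or 10%, whichever is larger"
    return max(5.0, 0.10 * reading_ppb)

def diff_sigma(pre_ppb, post_ppb):
    # independent errors add in quadrature for a pre/post difference
    return math.hypot(feno_sigma(pre_ppb), feno_sigma(post_ppb))

# A 20 -> 22 ppb "effect" of 2 ppb against an ~7 ppb propagated uncertainty:
print(round(diff_sigma(20.0, 22.0), 2))   # 7.07
```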
Overall, the article is an excellent attempt by Strak et al. (2012) to use noninvasive methods to understand the acute response of the respiratory system to air pollution exposure. However, a careful explanation of the theory behind the experiments, the experimental design, and the limitations of the measurement methods (if any) should have been discussed in the article. | 2018-04-03T03:30:43.229Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "d0debeaea97d13972ce60a659411751e3c1a6366",
"oa_license": "public-domain",
"oa_url": "https://doi.org/10.1289/ehp.1206061r",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d0debeaea97d13972ce60a659411751e3c1a6366",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
263636945 | pes2o/s2orc | v3-fos-license | E-Recruitment and merit employee selection: An HR paradox
Technological advancement makes it imperative for organisations to run human resources processes online. This paper examines the effectiveness of e-recruitment, especially its appropriateness in selecting the best candidates, since the method does not provide room for personal touch. The study employed a cross-sectional research design with a target population of 420 HR Officers from the Tanzanian Roads Authority (TANROAD), Tanzania mainland. The researcher used a systematic sampling technique to obtain a pre-determined sample size of 40. The findings indicate that many organisations in Tanzania do not use e-recruitment due to poor ICT infrastructure, unreliable electric power, recruiters' misperceptions of e-recruitment, and some weaknesses associated with the e-recruitment method. The findings further indicated that e-recruitment compromises objectivity, since some attributes, like emotional intelligence, confidence and body language, cannot be captured when recruiting online. The study recommends the use of e-recruitment by employing organisations, since its advantages outweigh the disadvantages. This can be achieved by changing the mindsets of recruiters and investing more in ICT facilities.
INTRODUCTION
With work organisations becoming digital, the practices of recruitment have changed drastically. With the rising use of digital tools, HR professionals have transitioned from the traditional method of printing job vacancies in publications to using e-recruitment methods for tracking and hiring applicants. E-Recruitment, also known as "online recruitment", is a method used by HR professionals to assist the recruitment process by using technology or web-based tools. It is an automated process of tracking, attracting, interviewing, and hiring candidates by utilising online platforms and HR software. [1]
Opportunities of E-Recruitment
E-Recruitment is a new approach to recruitment in the Tanzanian context and very few organisations practice this technique. E-recruitment could offer an organisation the following benefits:
• It gives 24-hour access to an online collection of resumes. [2]
Online recruitment has great potential for any organisation: as an up-to-date recruiting method it provides current information, opens up geographical borders in the search for talent, and saves time and cost (Pin et al. 2001). e-HRM practices such as e-recruiting, e-learning and e-performance appraisal can be seen as activities that help the firm meet its objectives through the relational aspect of e-HRM. e-HRM can also be described as the utilisation of IT for supporting and networking at least two (individual and/or collective) actors in their shared performance of HR tasks. e-HRM, therefore, is viewed as a way of implementing HR strategies, policies and practices in organisations through a conscious and directed support of, and/or the full use of, web technology-based channels. e-Human resource management forms an integral part of workforce management, which has a bearing on organisational outcomes and the behaviour of employees in an organisation. [3]
Challenges of E-Recruitment
E-Recruitment has turned out to be successful since its inception, but it has faced quite a number of hurdles on the path to success. Some applicants fail to provide correct information online as they are not computer savvy; they tend to commit mistakes like filling in their name, their native place or their qualifications wrongly. Online resumes are easily duplicated, and hence the chances of neglecting the real candidate in favour of a duplicate increase. As resumes are simply uploaded, there is no surety of the authenticity and correctness of the information provided by applicants. Other challenges concern the quality and quantity of candidates obtained through web tools: many organisations have reported receiving large numbers of applications from unqualified people. In the absence of an internet connection, candidates cannot check any portal or site. Further challenges associated with e-recruitment are fake profiles, high access fees, the casual attitude of job seekers, lack of personal touch, and privacy issues. [4] Some companies make their websites quite complex through over-engineering, which makes it difficult for job seekers to find relevant opportunities and apply for them, as not everyone is computer savvy. Also, employers cannot judge the personality of candidates online, as there is a lack of face-to-face interaction. If a candidate turns out to be totally different from what was expected at the time of the interview, it leads to a complete waste of time for employers, as they have to restart the process. Sometimes it is difficult to find a candidate within budget and within the stipulated time frame; in that case it is quite challenging for employers to find talent matching their requirements. The job portals face the challenge of filtering the information they showcase and removing fake job offers as well as fake job seekers. [5]
[6]he challenges identified by other scholars include: • Screening and checking the skill mapping and authenticity of million of resumes is a problem and time-consuming exercise for organizations.• There is low internet penetration and no access and lack of awareness of internet in many locations across the world.• Organizations cannot be dependent solely and totally on the online recruitment methods.In countries like India and Nigeria, the employers and the employees still prefer a face-to-face interaction rather than sending emails.Other major challenges with e-Recruitment centre on the quantity and quality of candidates using web-based tools, the lack of knowledge of e-Recruitment within the HR community, and limited commitment to e recruitment by senior managers.For example, many applications from unqualified candidates have been received by organizations using e-Recruitment systems, at the same time, the lack of knowledge of e-recruitment among HR professionals and the limited commitment of senior managers have hindered the effective implementation of e-Recruitment in some organizations.Furthermore, recruiting through the internet has raised concerns among potential applicants about keeping their personal information secure and confidential, many organizations' recruitment sites display privacy statements that detail how the information applicants provide will be stored and used.However, data security remains a major concern, particularly when it comes to online testing and making hiring decisions.Shrivastava and Shaw noted that the accuracy, verifiability, and accountability of applicants' data are also major issues for managers whose organizations use e-Recruitment system. [7]n addition, Robertson also noted that the lack of personal interactions during the process of applying for employment online limits the flow of communication between potential employees and the employer, leading to frustration on the part of the job candidates and missed opportunities to share or gather additional information by employers.Storey (2007) also noted that online testing raises issues related to applicants' reactions to the testing, the equivalence of online and pencil-and-paper tests, adverse impact, and protecting candidates identities.Therefore, before adopting any kind of online selection methods, organizations should carefully study the impact of these methods and the strengths and weakness of the methods. [8]ccording to Lepak and Snell (1998) the HR Function must confront four seemingly contradictory pressures.HR departments are required to be simultaneously strategic, flexible, efficient, and customer-oriented.Certain authors have suggested that the use of technology may enable them to achieve these goals.Recruitment plays a critical role in enhancing organizational survival and success.The recruitment process has been profoundly affected by major changes: the retirement of the aged group, an increasing need for flexibility and responsiveness, and complex modes of communication.The development of new "social and sociable" media technology called "Web 2.0" offers companies and their recruiters new perspectives.Despite the growing importance of e-recruitment, research in this area remains very limited and applicantoriented. 
[9]ccording to African Development Indicators 2006, as cited by the International Records Management Trust (2007) the key component of Tanzania Public Service Reform Program was to promote and improve the e-governance and service delivery through aggrandizing the underlying framework of ICT so as to deliver the required services in new technology.The national ICT policy was approved by the cabinet in 2003 and developed by the ministry of communications and transport.The general mission of ICT policy is to facilitate the economic growth, encouraging investments, social development and knowledge partake within and outside the country through ICT.
With the ease of applying for a job online, underqualified and fraudulent candidates may also apply for the job role. With hundreds of applicants, many of whom will not be suitable for or serious about the role, the quality of the talent pool is diluted. There is also the problem of lack of personal touch, which limits the recruiter's ability to scrutinise the applicant's emotions, confidence and ability to keep eye contact. The absence of all these variables creates a recruitment and selection paradox. [10] During online interviews on free video platforms like Skype or Zoom, it is possible to encounter technical faults. It can be quite embarrassing for a recruiter to be suddenly dropped from a conversation or call due to an electrical outage, while an unstable internet connection can be awkward. This also means that if the company is not good with technology, it may encounter such glitches more often, which keeps e-recruitment a little-adopted approach in the Tanzanian context.
The study intended to examine the extent of usage of e-recruitment in Tanzanian work organisations and the cost effectiveness of using e-recruitment. It further ascertained the objectivity of e-recruitment and established the remedial actions that could be taken by recruiting organisations in the event of disadvantageous e-recruitment. It finally assessed the technological infrastructure possessed by Tanzanian work organisations that is necessary for handling e-recruitment.
Methodology
The study employed a cross-sectional research design. The researcher chose a cross-sectional survey design to help measure the prevailing attitudes and practices of HR managers in relation to e-recruitment. A large number of respondents were interviewed using a pre-designed questionnaire, thus allowing the collection of a significant amount of data in an economical and efficient manner. The target population for this study was 420 HR Officers from the Tanzanian Roads Authority (TANROAD), Tanzania mainland.
The researcher used a systematic sampling technique to obtain a pre-determined sample size of 40.
To obtain this sample, the researcher got the list of names of these HR Officers via telephone communication, and the names were arranged in alphabetical order. The list was then divided by the pre-determined sample size to obtain the kth interval. Therefore, the sample was obtained by picking every 7th individual from the list, thus making a sample size of 60 respondents. Respondents were coded (HRO1 ... HROn) to aid in the analysis of the collected data.
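A minimal sketch of the kth-interval procedure just described (our own illustration: the coded names and the random starting offset are hypothetical, and we use the 420-name frame and the k = 7 interval reported in this section, which yield 60 respondents):

```python
# Our own sketch of systematic (k-th interval) sampling.
import random

def systematic_sample(frame, n, seed=None):
    """Pick every k-th element from an ordered frame, k = len(frame) // n,
    starting from a random offset inside the first interval."""
    k = len(frame) // n
    start = random.Random(seed).randrange(k)
    return frame[start::k][:n]

frame = sorted(f"HRO{i}" for i in range(1, 421))   # alphabetised frame of 420
sample = systematic_sample(frame, n=60, seed=42)   # k = 420 // 60 = 7
print(len(sample), sample[:3])
```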
Data were collected through a telephone survey in which every respondent was asked to propose a convenient time for the interview with the researcher; this was made possible with the assistance of the chairperson of the Tanzania Human Resources Association (unregistered). The response rate was impressive (98%), since the researcher is a senior person in the field of human resources management and contributes substantially to solutions for HR management at TANROAD.
Findings
The findings indicate that TANROAD does not employ the e-recruitment method to recruit employees. "We don't use e-recruitment at TANROAD because we have no infrastructure to support e-recruitment. Infrastructural aspects like reliable internet, computers, teleconferencing rooms and the Internet of Things (IoT) are some of the barriers to the usage of e-recruitment, plus, of course, lack of readiness by the management. There is also the problem of electricity power cut-offs, which does not encourage an online recruitment approach since it could negatively affect the smooth running of the recruitment process" (HRO8). These views were also strongly held by HRO9-18.
Although e-recruitment is not in use at TANROAD, respondents held the opinion that the objectivity of e-recruitment is highly questionable. They revealed that using e-recruitment does not provide room for personal touch. In their opinion, HRO15 and HRO28 said, "e-recruitment is not effective since the panelists can't assess a candidate's body language, level of confidence and emotional intelligence". They also reported that "e-recruitment attracts a lot of applicants, some of whom are underqualified, thus making it difficult to scrutinise all the applicants at the initial stage". This indicates that the e-recruitment system is not capable of filtering out candidates who do not meet the selection criteria. This is important information for system developers, because including such a component in the system would make it easier for the HR department to handle online recruitment.
As the table below shows, all the respondents agreed that the e-recruitment approach is not capable of assessing important attributes of a candidate. Thus, this could be regarded as the major weakness of the e-recruitment system that, if the method is employed, human resource officers should be conversant with.
ICT infrastructure was another concern in relation to the application of e-recruitment. Respondents (HRO24-34) revealed that most of the connections are of small capacity. These findings corroborate those of Sife (2013), who asserted that Tanzania suffers from poor ICT infrastructure, illiteracy in computer usage and an unreliable electric power supply. Computers and their hardware are imported from abroad. This situation has caused high prices for both computers and internet services and has reduced the number of internet users.
Cost effectiveness in the application of e-recruitment was also a concern of this paper, and the researcher attempted to find out whether or not e-recruitment has a cost advantage over the physical/traditional recruitment approach. Though TANROAD does not use e-recruitment to recruit prospective employees, the participants (HRO22-31) revealed that physical/traditional recruitment is not expensive when compared with e-recruitment. They submitted that the costs of recruiting online are the same as for physical recruitment, since recruiting companies do not pay for transport, meals and accommodation; these costs are incurred by the applicants. These findings are important, since they may encourage firms to maintain flexibility whenever necessary. Using e-recruitment is very advantageous to applicants, as it reduces travel and accommodation expenses.
Conclusion and Recommendations
E-recruitment is advantageous to applicants and recruiting organisations. The study findings indicate that e-recruitment saves recruitment costs, reaches many applicants and is a convenient recruiting method when compared to physical recruitment. Notwithstanding the benefits associated with e-recruitment, many organisations in Tanzania do not use this method. Limited access to ICT services, unreliable electricity and the high cost of ICT facilities are some of the barriers reported as hindering the application of e-recruitment. On the other hand, e-recruitment is claimed to lack personal touch, making it difficult to assess a candidate's level of confidence, emotional intelligence and body language during the interview session.
The study recommends the use of e-recruitment by employing organisations, since its advantages outweigh the disadvantages. This can be achieved by changing the mindsets of recruiters and investing more in ICT facilities. It is also recommended that organisations install powerful automatic generators that could curb the problem of power cut-offs during interviews.
• Benefits to employers: wider geographic search; quick responses; time saving; cost saving; advertising benefits; posting jobs online is cheaper than advertising in the newspapers; it does not involve intermediaries; there is a reduction in the time for recruitment (over 65% of the hiring time); it facilitates the recruitment of the right type of people with the required skills; it enhances the efficiency of the recruitment process.
• Benefits to job seekers: easy to apply; large number of opportunities; wider geographic reach. | 2023-10-05T15:06:49.785Z | 2023-06-30T00:00:00.000 | {
"year": 2023,
"sha1": "fd421754c6d27511802b924fc599d125da00e834",
"oa_license": "CCBY",
"oa_url": "https://jmseleyon.com/index.php/jms/article/download/667/642",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7e76c269fe162fa46e5d4422cc91ebc3cc1631d7",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
236155120 | pes2o/s2orc | v3-fos-license | An Efficient Multi-objective Evolutionary Approach for Solving the Operation of Multi-Reservoir System Scheduling in Hydro-Power Plants
This paper tackles the short-term hydro-power unit commitment problem in a multi-reservoir system, a cascade-based operation scenario. For this, we propose a new mathematical modelling in which the goal is to maximize the total energy production of the hydro-power plant in a sub-daily operation and, simultaneously, to maximize the total water content (volume) of the reservoirs. For solving the problem, we discuss the Multi-objective Evolutionary Swarm Hybridization (MESH) algorithm, a recently proposed multi-objective swarm intelligence-based optimization method which has obtained very competitive results when compared to existing evolutionary algorithms in specific applications. The MESH approach has been applied to find the optimal water discharge and the power produced at the maximum reservoir volume for all possible combinations of turbines in a hydro-power plant. The performance of MESH has been compared with that of well-known evolutionary approaches such as NSGA-II, NSGA-III, SPEA2, and MOEA/D in a realistic problem considering data from a hydro-power energy system with two cascaded hydro-power plants in Brazil. Results indicate that MESH showed superior performance to the alternative multi-objective approaches in terms of efficiency and accuracy, providing a profit of $412,500 per month in a projection analysis carried out.
Introduction
Hydro-power is one of the most important sustainable energy sources in countries with huge fluvial resources, such as Brazil. Water resources management, combined with the growth in demand for electricity and climate change, are impacting factors in the flow regime of rivers, directly interfering with the development of economic activities for the production of hydro-electric energy. Compared to other renewable resources, hydro-power has exceptional advantages, such as the ability to generate electricity without producing any pollution and to provide water flow control in the rivers (Sharma et al., 2004). A big challenge in hydro-power is the modelling and operation of systems that generate energy using two or more hydro-power plants (HPPs) in a cascade process. This method of conducting the electric dispatch production is known as the Operation of a Multi-Reservoir System (OMRS) (Roefs & Bodin, 1970). As in the case of single hydro-power plants, in OMRS one needs to define an optimal schedule for the production units, usually on an hourly basis, to maximize the electrical power obtained from a given water volume. In the case of multi-reservoirs, the optimization problem is usually very hard, with non-linear objective functions, an extremely large search-space dimension and, on many occasions, several objectives with different constraints to be fulfilled (Barros et al., 2003).
There are different methods to solve OMRS problems described in the literature, that can be roughly divided into two classes: conventional methods and bio-inspired meta-heuristics. In general, conventional methods are to some extent deterministic algorithms. Cai et al. (2001) and Yoo (2009) used Linear Programming (LP) methods to maximize hydro-power generation. The works by Zheng et al. (2001) and Catalao et al. (2010) addressed the maximization of reservoir volume with use of Nonlinear Programming. Mixed-integer Linear Programming (MILP) is another classic tool used to minimize the maintenance costs and usage of water in hydro-power plants in Canto (2006), Ge et al. (2012) and Chen et al. (2016). Methods based on Lagrangian Relaxation to minimize the total costs of production were discussed by Guan & Zhang (1995) and Scuzziato et al. (2020). Dynamic Programming techniques were also adopted to obtain the optimal management operation as proposed in Marano et al. (2012). Fuzzy models have also been applied to conduct the dispatch operation in HPPs, as in Moeini et al. (2011) and Zhang et al. (2017).
Bio-inspired algorithms have also been successfully applied to different problems in hydro-power. For example, Naresh & Sharma (2002) and Xie et al. (2018) used different types of neural networks to solve hydro-scheduling, a subproblem of OMRS. Genetic Algorithms (GAs) have been used to solve several variants of the electric dispatch problem in OMRS: to provide optimal operation of these types of facilities (Leite et al., 2002), to maximize power production for a case scenario in Turkey (Cinar et al., 2010), and to maximize the power production of small communities in Honduras (Tapia et al., 2020). Solutions inspired by swarm intelligence, adopting Particle Swarm Optimization (PSO), have been applied to minimize the use of water in power generation, as described in Wang et al. (2012). PSO has also been used to minimize the environmental impacts of power generation (Xin-gang et al., 2020) and to minimize production costs (Mandal & Chakraborty, 2012). Considering the ecological environment problem described in Zhang et al. (2013a), the authors applied a Differential Evolution (DE) algorithm to solve the electrical dispatching problem. DE algorithm versions were also applied to maximize the volume of water in reservoirs (Guedes et al., 2015).
Related works have presented high degrees of success in these practical engineering problems. However, there are still certain weaknesses when conventional or bio-inspired techniques are used to solve OMRS-related problems. In many cases, LP methods failed to address the widespread nonlinearity in the basic feature information of HPP reservoirs (Cai et al., 2001). Nonlinear programming often showed inaccuracies due to the linearization of nonlinear constraints when addressing the non-convex objective functions of HPP systems (Zheng et al., 2001). The application of dynamic programming approaches can be severely limited by the dimensionality of the OMRS problem (Marano et al., 2012). The great challenge for neural network methods is the selection of computational parameters (Naresh & Sharma, 2002), a time-consuming task mostly incompatible with the real-time nature of the dispatch problem. Population-based and bio-inspired methods based on evolutionary algorithms can easily become trapped in local optima due to the premature convergence problem (Zhang et al., 2013b; Guedes et al., 2015).
In addition to these issues related to the algorithms focused on the OMRS problem, note that the short-term HPP scheduling problem in OMRS can have more than one objective: some plant operators might need not only to maximize the efficiency of the energy production process, but also to keep the turbine flow close to a target value or to optimize the water balance between the reservoirs. Despite the multi-objective nature of the problem, the majority of existing methods perform a scalarization to transform the problem into a single-objective one. Nevertheless, solving a multi-objective problem via a scalarized mono-objective approach can lead to a crucial information loss (Marcelino et al., 2020).
A wide variety of multi-objective Evolutionary Algorithms (MOEAs) have been proposed and successfully applied to many real-world optimization problems (Zhou et al., 2011). Some MOEAs employ the concept of Pareto Dominance to find a set of non-dominated solutions, which represent a set of efficient solutions considering the objective functions of the problem at hand. As an example, we can cite the Non-dominated Sorting Genetic Algorithm (NSGA-II) (Deb et al., 2002a), the Strength Pareto Evolutionary Algorithm (SPEA2) (Zitzler et al., 2001) and the Multi-objective Particle Swarm Optimization (MOPSO) (Padhye et al., 2009), to name a few. Despite the popularity in academia, the use of MOEAs in industry is not so common. Specifically for the OMRS scenario, very few works tackle the problem in its multi-objective formulation. The NSGA-II has been applied to maximize the river habitat quality and hydro-power generation (Cioffi & Gallerano, 2012).
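As a minimal illustration of the Pareto-dominance idea mentioned above (our own sketch; the toy objective vectors stand for the two maximization objectives considered later, energy production and reservoir volume):

```python
# Our own sketch of Pareto dominance and a non-dominated filter.
import numpy as np

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return bool(np.all(a >= b) and np.any(a > b))

def non_dominated(F):
    """Indices of the non-dominated rows of objective matrix F (O(n^2) scan)."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

F = np.array([[10.0, 3.0], [8.0, 5.0], [9.0, 4.0], [7.0, 2.0]])
print(non_dominated(F))   # [0, 1, 2]; the point (7, 2) is dominated
```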
The SPEA2 has been used in Hidalgo et al. (2015) to minimize the daily release from the plant and the number of times that the status of the unit generator is changed. An approach using the Improved Partheno Genetic Algorithm (IPGA) has been applied to optimize a system with two HPPs in China (Wang et al., 2015). The use of PSO to solve complex multidimensional problems has grown significantly due to its simplicity and easy applicability. Currently, new algorithms inspired by swarm intelligence have been widely adopted for solving highly nonlinear, multi-modal, NP-hard and multi-objective problems, and have proven successful in those cases (Almufti, 2017). Some works that follow these approaches are: the Improved Multi-Objective Particle Swarm Optimization (IMOPSO) algorithm proposed by Zhang et al. (2018), a Multi-objective Particle Swarm Optimization (MOPSO) version (Feng et al., 2017) and a Parallel Multi-PSO (PMPSO) described in Niu et al. (2018). Specifically, swarm-based algorithms have proved to be very efficient and fast for solving problems in the energy field (Baumann et al., 2017; Marcelino et al., 2018b, 2019). Thus, in this work we propose a hybrid swarm algorithm, aiming to use the best mechanisms coming from evolutionary computation within the well-founded framework inherent to swarm intelligence. In recent years, other Pareto dominance-based MOEAs have been proposed to deal with problems having three or more objectives. Recently, a large number of specialized algorithms have been proposed and applied to different topics such as big data optimization (Yi et al., 2018), cyber-physical social systems, interval multi-objective optimization problems (Sun et al., 2020), distributed manufacturing problems (Jiang et al., 2020), vehicle routing, signal processing (Li et al., 2021), and correlated subjects (Li et al., 2018). In this paper we will further discuss the performance of two MOEAs that have proven to be powerful in dealing with problems with any number of objectives: the MOEA based on decomposition (MOEA/D, by Zhang & Li (2007)) and the reference-point based non-dominated sorting algorithm (NSGA-III (Deb & Jain, 2014a,b)). These are standard, baseline algorithms, on top of which further approaches have been proposed.
MOEA/D (Zhang & Li, 2007) is a decomposition-based MOEA that emphasizes convergence and diversity of the population. The problem is decomposed into a set of subproblems which are then optimized simultaneously. A uniformly generated set of weight vectors associated with a fitness assignment method is usually used to decompose the original problem. Improved and blended versions of MOEA/D have been proposed in the literature. An improvement proposed in Zhang et al. (2020) using Information Feedback Models (IFM) demonstrated competitive results when compared to the standard version in a set of large-scale benchmark functions. A modified multi-objective evolutionary algorithm with decomposition plus random local search (MMOEA/D-RL) was proposed in Jiang et al. (2020) to solve a distributed manufacturing problem. The central idea of MMOEA/D-RL is that the weight vectors are initialized randomly, and then the neighbors of each solution are determined accordingly. Sophisticated procedures are used to improve the algorithm's performance, and the results showed a competitive performance when applied to a real-world problem.
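A minimal sketch of the decomposition idea (our own illustration using the weighted Tchebycheff scalarization commonly paired with MOEA/D; the ideal point z* is assumed known here, whereas in practice it is estimated during the run):

```python
# Our own sketch of the weighted Tchebycheff subproblem used in MOEA/D.
import numpy as np

def tchebycheff(f, w, z_star):
    """g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|, minimized per weight vector."""
    return float(np.max(w * np.abs(f - z_star)))

W = np.array([[k / 10, 1 - k / 10] for k in range(11)])  # uniform weight vectors
z_star = np.array([0.0, 0.0])
f = np.array([0.4, 0.7])            # objective vector of one candidate solution
print([round(tchebycheff(f, w, z_star), 2) for w in W[:3]])  # [0.7, 0.63, 0.56]
```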
Another decomposition-based algorithm using a localized control variable analysis approach (called LSMOEA/D) was proposed by Ma et al. (2021). In the LSMOEA/D method, the guidance of reference vectors is incorporated into the control decision variable analysis, leading to a competitive performance when solving benchmark problems. Qin et al. (2021) proposed an algorithm to improve the speed of convergence in large-scale multi-objective problems. On benchmark functions with 5000 decision variables, the large-scale evolutionary multi-objective algorithm assisted by directed sampling (LMOEA-DS) showed competitive results when solving problems with three conflicting objectives. However, the authors concluded that LMOEA-DS suffers from a common weakness of decomposition-based algorithms: their performance heavily depends on the degree of match between the distribution of the reference solutions and the offspring.
NSGA-III is a domination-based MOEA in which the domination principle plays a key role. In its famous counterpart, NSGA-II, the crowding distances of all individuals are calculated at each generation and used to maintain the population diversity. Inheriting the non-dominated sorting from NSGA-II, NSGA-III instead employs reference points to keep the diversity. NSGA-III has been used to solve various types of problems, such as those involving information feedback models (Gu & Wang, 2020) and large-scale optimization problems (Yi et al., 2020). Improved and blended approaches have also been proposed in the literature to solve different problems. An improvement using the Information Feedback Models (IFM) scheme obtained competitive results in solving large-scale many-objective problems (Gu & Wang, 2020). In the same way, refinements to NSGA-III can be seen in the use of simulated binary, uniform, and single-point crossover (Yi et al., 2020).
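Since the crowding-distance operator of NSGA-II is reused later by MESH, the following sketch (our illustration under the standard NSGA-II definition) shows how it is computed for one non-dominated front.

```python
import numpy as np

def crowding_distance(front):
    """NSGA-II crowding distance for a single non-dominated front.
    front: (N, M) array of objective values; returns N distances."""
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf  # keep boundary points
        span = front[order[-1], j] - front[order[0], j]
        if span == 0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1], j] -
                               front[order[k - 1], j]) / span
    return dist
```

Individuals with larger crowding distance lie in less populated regions of the front and are preferred when the population must be truncated.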
The development of algorithms deriving from MOEA/D and NSGA-III has gained attention in recent years. To the best of our knowledge, those algorithms have not yet been applied to solve the OMRS problem. Motivated by this fact and by their successful real-world applications, standard versions of MOEA/D and NSGA-III have been used to assess the performance of our proposed approach. Moreover, since the OMRS problem studied in this paper has a bi-objective nature, the well-known and successful MOEAs NSGA-II and SPEA2 have also been included in the performance assessment carried out.
Although the maximization of energy production in OMRS can be modelled by a standard objective function, the simultaneous maximization of the reservoir volumes is as yet unexplored. In this work, we deal with these two conflicting objectives, aiming to guarantee the maximum efficiency of each turbine-generator set while taking into account the hydraulic losses of the system. With this in mind, this work proposes a new MOEA applied to the short-term dispatch of an HPP in OMRS. Our analysis considers a cascading system composed of two hydro-power reservoirs serving multiple interconnected power plants in Brazil.
To solve the electric dispatch in the OMRS operation process, the novel Multi-objective Evolutionary Swarm Hybridization (MESH) is proposed and discussed. MESH is based on C-DEEPSO (Marcelino et al., 2018a, 2020), a mono-objective evolutionary algorithm with recombination rules borrowed from PSO or, alternatively, a mono-objective swarm optimization method with selection and self-adaptive properties. The rationale is the superior performance of C-DEEPSO when applied to mono-objective versions of diverse power systems problems. Taking advantage of swarm intelligence methods coupled with operators from evolutionary computation techniques, the proposed approach is compared with four algorithms: NSGA-II, SPEA2, NSGA-III, and MOEA/D. The experimental results show that MESH is extremely competitive in solving the short-term electric dispatch of HPPs in the multi-reservoir operation system. Therefore, MESH acts as an electrical dispatch controller system capable of offering optimized solutions for the daily planning horizon. Furthermore, MESH guarantees maximal production with good use of water resources, since the obtained solutions are able to maximize the water volume of the reservoirs. This characteristic differentiates MESH from the other techniques previously discussed. More specifically, this paper presents the following contributions:
• a novel mathematical modelling for the hydro-power unit commitment in a multi-reservoir system, finding optimal water discharges and power;
• a Multi-Objective Evolutionary Swarm Hybridization (MESH) algorithm to solve the proposed unit commitment problem;
• a comparison of the proposed approach, MESH, on a set of benchmark problems, with results indicating a competitive performance;
• usage of realistic data from a Brazilian hydro-power energy system with two HPPs in a cascade scenario;
• an in-depth performance assessment of MESH against four different and well-known algorithms, NSGA-II, SPEA2, MOEA/D, and NSGA-III, on the hydro-power unit commitment problem;
• obtained results indicating a competitive performance favoring MESH in terms of efficiency and accuracy when applied to the hydro-power unit commitment problem;
• a projection analysis indicating a profit of $412,500 per month when solving the problem using the proposed approach.
The rest of the paper is organized as follows: Section 2 describes the mechanisms of MESH as a hybrid method able to solve continuous problems such as the electric dispatch in the OMRS operation process. Section 3 details the mathematical modelling of the short-term electrical dispatch of hydro-power plants in cascade operation. Section 4.1 comprises the experiments with MESH on continuous benchmark functions and the comparative analysis of MESH with other methods. Section 4.2 shows the experimental results for the short-term multi-objective electric dispatch in cascade operation. Finally, Section 5 presents the final remarks regarding the overall MESH performance.
Multi-objective Evolutionary Swarm Hybridization algorithm
Evolutionary algorithms (EAs), a popular class of meta-heuristics in optimization research, are techniques inspired by the processes of biological evolution. From a multi-objective viewpoint, EAs are able to provide feasible solutions for two or more objectives at the same time. Currently, new approaches are being developed in a merged way, which can be regarded as hybridization methodologies. These hybrid methods mix the better operators of different algorithms to obtain a more efficient optimization tool. In this context, the combination of Differential Evolution (DE), Particle Swarm Optimization (PSO), and the sorting operator from the Non-dominated Sorting Genetic Algorithm (NSGA-II) represents a promising way to create superior optimizers for multi-objective optimization problems.
Motivated by the competitive performance of the previously proposed C-DEEPSO algorithm (Marcelino et al., 2018a, 2020) on different problems related to power systems, in this work we propose a novel hybrid algorithm for multi-objective problems, the Multi-objective Evolutionary Swarm Hybridization (MESH). In swarm optimization, the exploration of the search space by a particle follows the best solutions already found, both by the particle itself and by its neighborhood, allowing it to scan the search space and find new, better-evaluated solutions. The exploration is carried out by updating the positions and velocities of the particles at each iteration (see Figure 1). The process is repeated for a pre-defined number of iterations or until a pre-defined convergence criterion is reached. The success of the search for an optimal position of a particle depends not only on the performance of the particle individually, but also on the information shared within the swarm. This joint skill of the swarm has been attributed to the concept of Swarm Intelligence. The use of swarm optimization to solve complex multidimensional problems has grown significantly due to its simplicity and easy applicability.
In this context, the MESH method proposed here has initially been developed for problems in a continuous search space. In MESH, the recombination is governed by the so-called Movement Rule, in the same way as in the C-DEEPSO algorithm. This rule is given by Equations (1) and (2):

X_{n+1} = X_n + V_{n+1},   (1)
V_{n+1} = w*_I V_n + w*_A (X_{sn} − X_n) + w*_C C (X*_{gb} − X_n),   (2)

in which X_{sn} is a position obtained by using the recombination mechanisms of Differential Evolution (DE). The subscript n denotes the current generation. X_n is the current particle (solution) and V_n is its velocity. The term X_{gb} denotes the best solution ever found by the population. The term C represents an N × N diagonal matrix of binary random variables sampled in every iteration according to a Bernoulli distribution with success probability P, as described in Figure 2, which exemplifies the proposed "star topology". MESH has a memory archive (the MB) in which a subset of the best solutions from the last population is stored. The superscript * indicates that the corresponding parameter undergoes evolution under a mutation process. Typically, the mutation of a generic weight w of an individual follows a simple additive rule, as described by Equation (3):

w* = w + τ N(0, 1),   (3)

in which τ is the mutation rate that must be set by the user and N(0, 1) is a number sampled from the standard Gaussian distribution. The mutation of X_{gb}, which is carried out for every particle, is performed analogously, by perturbing the global best with a Gaussian term controlled by the mutation rate (Equation (4)). The binary matrix C is obtained using the following rule: randomly generate N values within the [0, 1] interval, one for each dimension of each solution. Each randomly generated value is compared to the communication rate P: if the value is greater than P, the element C_{ij} of the C matrix receives 0, otherwise 1.
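To make the reconstructed rule concrete, the sketch below (our illustration, not the repository code) draws the diagonal Bernoulli matrix C and applies Equations (1)-(2); the function and parameter names are assumptions of the example, and the weights are assumed to have been mutated already via Equation (3).

```python
import numpy as np

def communication_matrix(n_dim, p):
    """Diagonal 0/1 matrix C: one Bernoulli draw per dimension; the
    entry is 1 when the draw does not exceed the communication rate p
    (communication happens), and 0 otherwise."""
    return np.diag((np.random.rand(n_dim) <= p).astype(float))

def movement_rule(x, v, x_sn, x_gb_mut, w, p):
    """One application of the movement rule, Equations (1)-(2).
    w = (w_I, w_A, w_C) are the already-mutated strategy weights;
    x_sn is the DE attractor and x_gb_mut the mutated global best."""
    c = communication_matrix(len(x), p)
    v_new = w[0] * v + w[1] * (x_sn - x) + w[2] * c @ (x_gb_mut - x)
    return x + v_new, v_new
```

The diagonal C switches the cooperation term on or off per dimension, which is what distinguishes this rule from the classic PSO velocity update.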
MESH uses the Movement Rule from C-DEEPSO with a multi-objective approach for handling two goals. In its memory (MB), the MESH algorithm employs the non-dominated sorting operator from NSGA-II to identify and update the Pareto frontier throughout the search process. The solutions in this memory are used in turn as the new attractors X_{sn} from Equation (2). The memory is updated on each iteration by combining the Pareto front of the population with the non-dominated solutions already stored. The sorting operator of NSGA-II is applied to this augmented set of solutions to identify the non-dominated ones. If the Pareto front is larger than the maximum memory size, the crowding-distance operator from NSGA-II is applied to keep the memory within its size limit. Inspired by the guide particle concepts from (Padhye et al., 2009), MESH has a process to obtain guides based on different solutions: the Individual Guide (G_i), which is the set of the best solutions found by the particle (the particle taken from the individual guide array is selected at random), and the Swarm Guide (G_s), which corresponds to a solution found by the swarm that is better than the current particle's solution. The swarm guide is taken from the memory archive or from the current swarm. G_s is calculated by using Equation (5), which refers to the Sigma method proposed in (Padhye et al., 2009):

σ = (f_1² − f_2², f_2² − f_3², f_3² − f_1²) / (f_1² + f_2² + f_3²),   (5)

which exemplifies how the sigma coordinates are calculated for a three-dimensional objective space. The σ method assigns a value to each particle in the swarm to estimate distances in the objective function space: all solutions that lie on the same line through the origin of the objective space receive the same sigma (σ) value. The idea behind the σ method is to use the particle's fitness values in each objective function as coordinates; thus, the global best for a particle is the particle with the nearest sigma coordinates. Specifically, in MESH, another alternative is to combine the sigma method with a best-overall-choice procedure. In this process, the particle's swarm guide is the one closest to the next upper frontier relative to the particle's current one. If the current frontier of the particle is the first one, the choice is made from the memory (MB). This is shown in Figure 3.
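The sketch below (our illustration) shows the Sigma method restricted to the bi-objective case, which is the one relevant to the dispatch problem in this paper; the three-objective vector form of Equation (5) is analogous. It assumes non-zero objective vectors.

```python
import numpy as np

def sigma_2d(f):
    """Sigma value of a point f = (f1, f2). Points lying on the same
    line through the origin of objective space share the same sigma."""
    f1, f2 = f
    return (f1**2 - f2**2) / (f1**2 + f2**2)

def swarm_guide(particle_f, archive_f):
    """Pick, as guide, the archive member whose sigma value is the
    closest to the particle's own sigma value."""
    s_p = sigma_2d(particle_f)
    sigmas = np.array([sigma_2d(a) for a in archive_f])
    return int(np.argmin(np.abs(sigmas - s_p)))
```

Choosing the guide by nearest sigma steers each particle toward the portion of the front that matches its own trade-off between the two objectives.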
The vectors used for differential mutation in MESH are sampled from three different groups: a group containing only particles from fronts equal to or better than that of the particle, a group with memory particles, or a combination of the previous two groups. In this work, the DE/Rand/1/Bin strategy is implemented (a sketch of this operator is given at the end of this subsection). The diagram of the differential mutation operator is shown in Figure 4. The general functioning of MESH is described in Algorithm 1. In this work, we adopted the non-dominated sorting procedure and the crowding-distance operator proposed in (Deb et al., 2002a). Algorithm 2 shows the pseudo-code to update the individual guide array.

Figure 3: Diagram for choosing a particle swarm guide in MESH. In the method shown on the left, all particles of the population choose from memory. In the method shown on the right, the choice is based on the next upper frontier relative to the one the particle belongs to; first-frontier particles in turn use the memory.

Algorithm 1: Pseudo-code of MESH (steps reconstructed from the recoverable fragments; missing steps are elided)
 5   Apply the dominance mechanism in the memory (MB) and apply the crowding-distance operator if the frontier is bigger than the memory size;
 9   while the stopping criterion is not satisfied do
10       Apply the mutation mechanism to the swarm;
11       if the mechanism needs a swarm guide then
12           Update the swarm guide using Equation (5);
             ...
19       Update the swarm guide using Equation (5);
20       Copy the current swarm;
21       Mutate the strategy parameters w_I, w_A, w_C in the swarm and its copy using Equation (3);
22       Mutate X*_gb using Equation (4) in the current swarm and its copy;
23       Apply the movement rule in the current swarm and its copy using Equation (1);
24       Update the individual guide array (Algorithm 2);
25       Apply the dominance mechanism in the swarm and its copy;
26       Select the best particles based on the frontiers and, if necessary, apply the crowding-distance operator;

Algorithm 2: Pseudo-code to update the individual guide array.

According to Krasnogor & Smith (2005), it is now well established that pure population-based algorithms are not well suited to the refinement of complex spaces, and that hybridization with other techniques can significantly improve search efficiency. In this way, the MESH algorithm joins different concepts from swarm intelligence and evolutionary optimization to become a viable approach for solving real-world problems. MESH allows an efficient combination of the PSO and DE algorithms, as it employs the typical inspiration/recombination of swarm intelligence inherited from PSO together with the mutation rules present in DE. In addition, it incorporates the dominance ordering process (dominance mechanism) and the crowding-distance operation observed in NSGA-II (see (Deb et al., 2002a)), as well as the swarm guide process, to escape from non-promising regions of the search space.
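As promised above, the following sketch (our illustration) implements the DE/rand/1/bin strategy used for the differential mutation; the scale factor F and crossover rate CR values are assumptions, and the population is assumed to have at least four members.

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    """DE/rand/1/bin: mutate and crossover particle i of pop (N, D).
    Three mutually distinct random donors r1, r2, r3 != i are used."""
    n, d = pop.shape
    r1, r2, r3 = np.random.choice([k for k in range(n) if k != i],
                                  size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    cross = np.random.rand(d) < CR
    cross[np.random.randint(d)] = True   # guarantee one mutant gene
    return np.where(cross, mutant, pop[i])
```

In MESH, `pop` would be drawn from one of the three sampling groups described above (current fronts, memory, or their combination).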
A computational complexity analysis of MESH (see Algorithm 1) has been carried out. Let the population size, number of objectives, number of decision variables (dimension), and memory size be NP, M, D and MB, respectively, and let T be the number of generations. The population is sorted by dominance (Steps (5), (14)-(18) and (25)) with time complexity O(M × NP × log(NP)). In Step (10), the mutation mechanism is executed with time complexity O(D × NP × MB). Each individual guide update (Step (24)) has time complexity O(NP × D). The mutations of the strategy parameters and of X*_gb (Steps (21) and (22)) are executed with time complexity O(D²). The movement rule (Step (23)) is applied to each particle in constant time, leading to a time complexity of O(NP). Finally, the next generation is selected with time complexity O(M × NP² × log(NP)) and the memory is updated with time complexity O(M × NP × log(NP)). From the above results, after omitting the low-order terms, the total time complexity of the MESH algorithm is O(T × M × NP² × log(NP)), which is polynomial in NP. A complete MESH code version is available at https://github.com/gabrielmatos26/MESH.

In summary, MESH is a hybrid algorithm that incorporates the movement rule from PSO, the mutation scheme from DE, and the non-dominated sorting mechanism from NSGA-II. The central idea is to use swarm intelligence coupled with operators from evolutionary computation. MESH includes a swarm guide mechanism with two options: (1) it uses the information from the best positions saved in a memory population (which keeps part of the best individuals at each generation), or (2) it uses the best solution found on the non-dominated Pareto front. Its mutation operator contemplates sampling from both the current swarm and the vectors saved in memory, with the option of selecting vectors from both populations in this process. To explore the space, MESH makes use of the evolutionary strategies inherent in DE. The combination of these mechanisms (swarm guide and mutation operation) makes it capable of carrying out a more specialized search in the attraction basin without keeping the population trapped. MESH is built to solve continuous problems. In this work we present preliminary results on benchmark functions and we adapt its operation to solve the electrical dispatch problem in the hard OMRS process in hydroelectric plants.
Hydro-power Dispatch Problem: an OMRS in cascade mode
The purpose of operation planning in an electric power system is to meet the requirements of cost, reliability, and optimal consumption of energy resources. In hydroelectric systems, such as the Brazilian system, the correct use of energy, available in limited quantities in the form of water in the reservoirs, is a highly complex problem. The compromise between immediate decisions and their future consequences makes the problem challenging and highlights the importance of proper planning. In this work, the adopted time horizon is the daily schedule, which is a local operation problem handled by the operators of the plants and is considered a short-term process.
The planning of the operation of cascade hydroelectric systems (using the OMRS approach) is a particularly challenging problem, due to the complexity of its modeling and its characteristic spatial and temporal coupling. Decisions to operate a reservoir directly affect the levels of the other reservoirs downstream, and decisions about the storage or use of water affect the future level of the reservoirs, which may lead to a risk of deficit or spillage. Therefore, the operation of a hydroelectric system must focus, in addition to the electrical operation, on the operation of the reservoirs, which leads to a problem with space and time coupling, i.e. a dynamic problem. The electrical dispatch of hydro-power plants is a typical problem in the OMRS field, and attaining optimal operation rules is crucial for making the most of its comprehensive benefits. Thus, this work proposes a new mathematical modelling for electric production in cascade mode, based on the mathematical model described in Marcelino et al. (2015, 2021). Here we have improved the previous modelling by taking the reservoir parameters into account (see Table 1 for the modelling notation). In the proposed electric dispatch model, the power production, in MW/h, is obtained by Equation (6):

ph_{uj,t} = g × k × [ρ0_{uj} + ρ1_{uj} hl_{uj,t} + ρ2_{uj} Qt_{uj,t} + ρ3_{uj} hl_{uj,t} Qt_{uj,t} + ρ4_{uj} hl²_{uj,t} + ρ5_{uj} Qt²_{uj,t}] × [Hb_{u,t} − ∆H_{uj,t}] × Qt_{uj,t},   (6)
hl_{uj,t} = Hb_{u,t} − ∆H_{uj,t},
Hb_{u,t} = fcm_{u,t} − fcj_{u,t},

in which g is the acceleration of gravity, 9.8 m·s⁻². To convert horsepower into megawatts we use the constant k = (10⁻³ × m⁻¹). The terms ρ0_{uj}, …, ρ5_{uj} are operative coefficients of turbine-generator (j) at HPP (u). hl_{uj,t} is the net water head of unit (j) at time (t) in HPP (u), ∆H_{uj,t} is the sum of the penstock losses, Hb_{u,t} is the hydraulic head of the reservoir, and Qt_{uj,t} is the water discharge of unit (j) at time (t). fcm_{u,t} is the upstream height of HPP (u) at time (t), and a_{0,u}, …, a_{4,u} are the coefficients of the fourth-order polynomial of HPP (u) that defines fcm_{u,t}. fcj_{u,t} is the downstream height of the HPP at time (t), and b_{0,u}, …, b_{4,u} are the coefficients of the fourth-order polynomial of HPP (u) that defines fcj_{u,t}.
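The sketch below (our illustration) evaluates the reconstructed Equation (6) for a single turbine-generator; the coefficient values and the exact unit handling are assumptions, since they are plant-specific inputs.

```python
def unit_power(rho, hb, dh, qt, g=9.8, k=1e-3):
    """Power of one turbine-generator unit from Equation (6).
    rho: coefficients (rho0..rho5); hb: hydraulic head Hb of the
    reservoir; dh: sum of penstock losses; qt: water discharge."""
    hl = hb - dh                      # net water head, hl = Hb - dH
    eff = (rho[0] + rho[1] * hl + rho[2] * qt + rho[3] * hl * qt
           + rho[4] * hl**2 + rho[5] * qt**2)
    return g * k * eff * hl * qt      # g*k*[efficiency]*[Hb - dH]*Qt
```

The bracketed polynomial plays the role of an efficiency curve in head and discharge, so the same discharge yields more power when the net head is higher.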
Optimization modelling
The economic dispatch of HPPs in cascade mode is a typical optimization problem in hydro-power energy systems. In this context, most mathematical dispatch models for hydroelectric plants are static models, since the water balance is disregarded and the hydraulic head of the reservoir is an input parameter; moreover, the volume of the reservoir is not considered in the modelling. When the water balance is incorporated into the model, the problem naturally starts to be considered as a dynamic system model, since the level of the reservoir changes over time. For a scenario of cascading HPPs, the water balance is essential, as the reservoir level of one power plant is affected by the flow rates of another HPP. In our modelling for cascade mode operation, the volume of the reservoir at time t is described by Equation (11):

ψ_{u,t} = ψ_{u,t−1} + c [Qa_{u,t−1} + Qt_{w,td} + Qv_{w,td} − Σ_{j=1}^{J_u} Qt_{uj,t−1} − Qv_{u,t−1}] − E_{u,t−1} A_{u,t−1},   (11)

in which ψ is the volume of a reservoir; (u) and (w) are HPP indexes, (u ≠ w); (td) is the time needed for the water displacement between (w) and (u); Qa is the affluent flow; Qt is the turbined flow; Qv is the spilled flow rate; E is the liquid evaporation; and A is the area occupied by water in the reservoir. Once the evolution of the reservoir level is considered in the model, the value of the gross drop is no longer an input parameter and becomes a variable depending on the downstream and upstream quotas. Thus, the cascade dispatch model proposed in this paper is formulated as follows.

Table 1: Modelling notation.
u : HPP index; U : number of HPPs in the system
j : turbine-generator index; J_u : total number of turbine-generators in HPP (u)
t : time
ph_{uj,t} : power (MW) generated by turbine-generator (j) of HPP (u) at time (t)
ψ_{u,t} : reservoir volume (hm³) of HPP (u) at time (t)
c : constant converting water discharge (m³ × s⁻¹) into water volume (hm³) in time (t)
Qa_{u,t−1} : affluent flow (m³ × s⁻¹) that reaches the reservoir of HPP (u) at time (t − 1)
w : index of the HPP whose defluent flow reaches the reservoir of HPP (u)
td : time the water needs to move from HPP (w) to (u)
Qt_{w,td} : turbined flow (m³ × s⁻¹) arriving at the reservoir of HPP (u) at time (td) from HPP (w)
Qv_{w,td} : flow rate (m³ × s⁻¹) drained through the spillway from HPP (w)
E_{u,t−1} : liquid evaporation (mm) over a day
A_{u,t−1} : water area (km²) occupied in the reservoir of HPP (u) at time (t − 1)
Dm_{u,t} : power demand (MW) required for HPP (u) at time (t)
ε : error variation (±0.5%) tolerated in the power produced by the HPPs
Z_{uj,t} : operating status of generating unit (j) at HPP (u); 0 for disabled, 1 for active
g : acceleration of gravity, 9.8 m × s⁻²
k : constant converting horsepower into megawatts, k = (10⁻³ × m⁻¹)
ρ0_{uj}, …, ρ5_{uj} : operative coefficients of turbine-generator (j) at HPP (u)
hl_{uj,t} : net water head of unit (j) at time (t) in HPP (u)
∆H_{uj,t} : sum of the penstock losses
Hb_{u,t} : hydraulic head of the reservoir of HPP (u) at time (t)
fcm_{u,t} : upstream height of HPP (u) at time (t)
a_{0,u}, …, a_{4,u} : coefficients of the fourth-order polynomial of HPP (u) defining fcm_{u,t}
fcj_{u,t} : downstream height of HPP (u) at time (t)
b_{0,u}, …, b_{4,u} : coefficients of the fourth-order polynomial of HPP (u) defining fcj_{u,t}
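The following sketch (our illustration) applies the reconstructed water balance of Equation (11) for one time step. The value of the conversion constant c (here set for one-hour steps, 1 m³/s × 3600 s = 3.6 × 10⁻³ hm³) and the unit handling of the evaporation term are assumptions of the example.

```python
def reservoir_volume(psi_prev, qa, qt_up, qv_up, qt_out, qv_out,
                     evap_volume, c=3.6e-3):
    """One-step reservoir volume update (hm3), Equation (11).
    Inflows: affluent flow qa plus turbined (qt_up) and spilled
    (qv_up) flow arriving from the upstream plant after the travel
    time td; outflows: own turbined flow qt_out (already summed over
    units) and spillage qv_out, all in m3/s. evap_volume is the
    evaporated volume E*A, assumed already expressed in hm3."""
    inflow = qa + qt_up + qv_up
    outflow = qt_out + qv_out
    return psi_prev + c * (inflow - outflow) - evap_volume
```

For the most upstream plant of the cascade, `qt_up` and `qv_up` are simply zero, which matches the simulation setup described later.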
Objective Functions
Maximize the power production (F1). The objective F1, given by Equation (12), is the ratio between the sum of ph_{uj,t} and the sum of Qt_{uj,t}:

F1 = [Σ_t Σ_u Σ_j ph_{uj,t}] / [Σ_t Σ_u Σ_j Qt_{uj,t}],   (12)

which determines the amount of energy that each plant is capable of producing given a volume of water. Maximizing this function implies generating energy with a lower water flow.
Maximize the water levels in the system's reservoirs (F2). Maintaining a high level in the reservoirs of the system increases its robustness against future drought periods. At the same time, the higher the reservoir level, the higher the upstream quota, leading to greater energy efficiency in power generation.
Constraints
The electric dispatch problem in the OMRS scenario of HPPs is subject to the following equality and inequality constraints. (1) The first constraint refers to the water balance of the reservoir of each HPP in the system. Equation (14) models the coupling of the operation of the HPP reservoirs in the system:

ψ_{u,t} = ψ_{u,t−1} + c [Qa_{u,t−1} + Qt_{w,td} + Qv_{w,td} − Σ_{j=1}^{J_u} Qt_{uj,t−1} − Qv_{u,t−1}] − E_{u,t−1} A_{u,t−1},   (14)

in which (u) is the identifier index of the HPP, U is the number of HPPs in the system, and (j) is the HPP turbine-generator index. J_u is the total number of turbine-generators in the HPP and (t) is the time interval; ph_{uj,t} is the power (MW) generated by turbine-generator (j) of HPP (u) at time (t). The term ψ_{u,t} is the reservoir volume (hm³) of HPP (u) at time (t). The constant c converts the water discharge (m³ × s⁻¹) into water volume (hm³) in time (t). The term Qa_{u,t−1} is the affluent flow (m³ × s⁻¹) that reaches the reservoir of HPP (u) at time (t − 1), and (w) is the index of the HPP whose defluent flow reaches the reservoir of HPP (u). The term td is the time the water needs to move from HPP (w) to (u). Qt_{w,td} is the turbined flow (m³ × s⁻¹) that reaches the reservoir of HPP (u) at time (td) from HPP (w). The term Qv_{w,td} is the flow rate (m³ × s⁻¹) drained through the spillway from HPP (w) to (u) at time (td). Qt_{uj,t−1} is the turbined flow (m³ × s⁻¹) used in turbine-generator (j) of HPP (u) at time (t − 1). The term Qv_{u,t−1} is the flow rate (m³ × s⁻¹) discharged by HPP (u) at time (t − 1). E_{u,t−1} is the liquid evaporation (mm) over a day, and A_{u,t−1} is the water area (km²) occupied in the reservoir of HPP (u) at time (t − 1).
(2) The second constraint, given by Equation (15), indicates that each plant in the system must deliver a power approximately equal to the requested demand:

(1 − ε) Dm_{u,t} ≤ Σ_{j=1}^{J_u} ph_{uj,t} ≤ (1 + ε) Dm_{u,t},   (15)

in which the required power demand Dm_{u,t} is measured in MW for HPP (u) at time (t). The term ε is the error variation (±0.5%) tolerated in the power produced by Brazilian HPPs.
(3) Equation (16) gives the third constraint, which limits the volume of the reservoir to the interval defined by the minimum and maximum operating quotas:

ψ^min_u ≤ ψ_{u,t} ≤ ψ^max_u,   (16)

in which ψ^min_u and ψ^max_u are the volume boundaries of the reservoir of HPP (u).
(4) The fourth constraint, given by Equation (17), indicates that a plant's outflow must respect a limited range. These limits work as controls to prevent floods in the regions downstream from the HPP, and also preserve the use of the water for navigation and for the ecosystem in and around the river. (5) The fifth constraint, Equation (18), states that the turbined flows must respect the capacity limits of their respective generating units:

Qt^min_uj ≤ Qt_{uj,t} ≤ Qt^max_uj,   (18)

in which Qt^min_uj and Qt^max_uj are the turbined flow boundaries of turbine-generator (j) of HPP (u).
(6) The sixth constraint imposes a maximum limit for the spilled flow, according to Equation (19):

Qv_{u,t} ≤ Qv^max_u,   (19)

in which Qv_{u,t} is the flow rate (m³ × s⁻¹) discharged by HPP (u) at time (t) and Qv^max_u is the maximum value of the water flow rate of HPP (u).
(7) The seventh constraint, given by Equation (20), states that if the volume of the reservoir exceeds its maximum operating limit, the excess water must be eliminated through the spillway, imposing a corresponding limit on the spilled flow rate. Here ψ_{u,t} is the reservoir volume (hm³) of HPP (u) at time (t), ψ^max_u is the maximum volume boundary of the reservoir of HPP (u), Qv_{u,t−1} is the flow rate (m³ × s⁻¹) discharged by HPP (u) at time (t − 1), and c is the constant that converts the water discharge (m³ × s⁻¹) into water volume (hm³) in time (t).
(8) The eighth constraint indicates that the power generated must also respect the capacity limits of its generating unit:

Z_{uj,t} ph^min_uj ≤ ph_{uj,t} ≤ Z_{uj,t} ph^max_uj,   (21)

in which ph_{uj,t} is the power (MW) generated by turbine-generator (j) of HPP (u) at time (t), and ph^min_uj and ph^max_uj are the boundaries of the power generation. Z_{uj,t} indicates the operating status of generating unit (j) at HPP (u): 0 for disabled and 1 for active.
(9) Finally, Equation (22) states that Z_{uj,t} ∈ {0, 1}, indicating the operating status of the generating unit (j) at HPP (u): 0 for disabled and 1 for active. To satisfy these nine constraints, we apply a penalty factor (p) to the objective functions F1 and F2: the fitness functions of the OMRS problem, defined by Equations (23) and (24), reduce each objective in proportion to the amount of constraint violation.
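The exact penalty form of Equations (23)-(24) is not recoverable from the text, so the linear form and the factor p in the sketch below are assumptions; it only illustrates the general penalty mechanism described above.

```python
def penalized_fitness(f1, f2, violations, p=1e3):
    """Penalty handling in the spirit of Equations (23)-(24): each
    objective (both maximized) is reduced in proportion to the total
    constraint violation of the candidate solution.
    violations: iterable of per-constraint violation magnitudes
    (zero or negative when the constraint is satisfied)."""
    total_violation = sum(max(0.0, v) for v in violations)
    return f1 - p * total_violation, f2 - p * total_violation
```

Infeasible dispatches are thus pushed behind feasible ones in the dominance comparisons without being discarded outright, which keeps the search able to cross infeasible regions.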
Experiments and results
This section presents the experimental results of the paper. We have structured the experiments in two parts: first, a set of experiments on continuous benchmark functions analyzes the performance of the proposed MESH algorithm on well-known problems, and we use these results to set the best configuration of the algorithm. Then, we test the MESH approach on a real electric dispatch problem in a cascade operation with multiple reservoirs, comparing the results obtained with other state-of-the-art MOEAs.
Evaluation of the MESH performance in continuous benchmark functions
In this section, the experimental performance of the MESH algorithm on some well-known continuous benchmark problems is analyzed. The goal is twofold: (i) to determine the best algorithm configuration considering the problem sets, and (ii) to compare the best version of MESH with four algorithms (NSGA-II, SPEA2, MOEA/D, and NSGA-III) for solving the problems. Thus, the experimental setup is divided into the following case studies:
1. to determine the best algorithm configuration for MESH, we use a well-known set of benchmark functions. Several algorithm configurations are employed to solve the problems and statistical inference is applied to determine the best configuration;
2. to verify the MESH performance, a preliminary experiment is conducted. We use the same set of benchmark functions to compare our algorithm with the standard algorithms (NSGA-II, SPEA2, MOEA/D, and NSGA-III). For that, statistical inference techniques have been adopted, namely analysis of variance (ANOVA) and the multiple comparison (Tukey) test, as described in (Montgomery, 2012). A sketch of this statistical protocol is given below.
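The following sketch (our illustration) shows the ANOVA-then-Tukey protocol using standard SciPy and statsmodels routines; the hypervolume values are random placeholders, since the real values come from the experiments.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hv_runs maps configuration name -> hypervolume of each of 30 runs
# (placeholder data here).
rng = np.random.default_rng(1)
hv_runs = {name: rng.random(30) for name in ("E1V1D1", "E2V2D1", "NSGA-II")}

_, p_value = f_oneway(*hv_runs.values())           # one-way ANOVA
if p_value < 0.05:                                 # some means differ
    values = np.concatenate(list(hv_runs.values()))
    labels = np.repeat(list(hv_runs.keys()),
                       [len(v) for v in hv_runs.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

ANOVA only tells us that at least one mean differs; the Tukey test then identifies which specific pairs of configurations differ significantly.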
In all experiments, the best non-dominated set of the last generation and the hypervolume are used as indicators for assessing the algorithms' performance. We performed the computational simulations on an AMD Ryzen 7 3700X with CPUs at 3.60 GHz and 32 GB RAM, running the Arch Linux operating system. The MESH code was implemented in the Python 3.9 programming language.
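Since the hypervolume is the performance indicator used throughout, the sketch below (our illustration) computes it for the two-objective case by a sweep over the front; it assumes a mutually non-dominated point set, a minimization convention, and a reference point dominated by all points.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. the
    reference point ref, by sweeping points in ascending f1 order."""
    pts = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # one rectangular slab
        prev_f2 = f2
    return hv

# Example: front {(0,1), (1,0)} with ref (2,2) covers an area of 3.
print(hypervolume_2d(np.array([[0., 1.], [1., 0.]]), (2., 2.)))
```

Larger hypervolume means the front both converges closer to the true Pareto front and covers it more widely, which is why a single scalar suffices for the statistical comparisons.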
Algorithm configuration
In this experiment, we aim to identify a good algorithm configuration for MESH, namely the choice of the particle guide, the sampling vector and the mutation strategy. The particle guide can be chosen in two ways: a particle from memory (E1), or a particle close to the upper frontier relative to the actual Pareto front (E2). The three sampling vector options are: swarm (V1); memory (V2); and a combination of V1 and V2, generating (V3). We have tested the following mutation strategy options (taken from the Differential Evolution algorithm): DE/Rand/1/bin (D1); DE/Rand/2/bin (D2); DE/Best/1/bin (D3); DE/Current-to-best/1/bin (D4); DE/Current-to-rand/1/bin (D5). In this way, 30 (2 × 3 × 5) different MESH configurations have been analysed.
As an example, one possible setting of MESH could be E2/V1/D1, meaning that the swarm guide is chosen from the Pareto front, the vectors used for differential mutation are sampled from the swarm population, and the mutation follows the DE/rand/1/bin strategy. Each algorithm configuration is run 30 times on the well-known Zitzler benchmark functions (ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6) (Zitzler et al., 2001), using the hypervolume as the performance indicator. The statistical protocol described in (Marcelino et al., 2018a) is applied: the ANOVA and the Tukey test (Montgomery, 2012) are performed. As an example, the boxplot for all algorithm runs on ZDT1 can be seen in Figure 5.
Visually, since there are boxes that do not overlap, statistical differences can be identified. An ANOVA test has been performed, in which the obtained p-value is lower than the adopted significance level (0.05), indicating that there is a difference among the hypervolume means. Tukey's test has then been conducted to identify the differences among the samples. Figure 6 shows the results obtained.
From the results we see that five versions of MESH stand out from the others. Since the higher the hypervolume value, the better the algorithm performance, the E1/V1/D1 and E2/V2/D1 configurations are the best ones. A similar behavior has been observed on the other ZDT functions. So, for the remaining tests, only the E1/V1/D1 and E2/V2/D1 versions of MESH have been applied.
Performance assessment
To validate the proposed MESH algorithm, we have performed a set of tests using the ZDT and DTLZ benchmark functions. The functions ZDT1, ZDT2, ZDT3, ZDT4, ZDT6, DTLZ1, DTLZ2, DTLZ4, and DTLZ7 (Zitzler et al., 2001; Deb et al., 2002b) are employed here. The two MESH configurations, E1/V1/D1 and E2/V2/D1, are compared to the standard NSGA-II, SPEA2, NSGA-III, and MOEA/D. It is worthwhile to notice that the parameters of all six algorithms, such as mutation and crossover rates, have not been fine-tuned. Since the main goal of this experiment is to validate the proposed approaches, no fine-tuning of the parameters has been done; in the absence of more informed choices, we have used the usual values found in the literature.
For all problems, the algorithm population is set to 50 solutions. This value also applies to the main and secondary populations, such as memory, copy, archive, or offspring. The parameter values used in the algorithms are indicated in Table 2. Each algorithm is run 30 times. Figures 7 and 8 show the combined Pareto fronts of both MESH versions (E1/V1/D1 and E2/V2/D1), NSGA-II, SPEA2, NSGA-III, and MOEA/D, for the ZDT and DTLZ functions, respectively. The analytical Pareto front of each problem is also shown.
Observing Figure 7, note that both MESH versions are competitive when compared with the other algorithms on all functions. MESH (in both versions) finds Pareto solutions very close to the true Pareto front in functions ZDT1 and ZDT2, with better visual results than the other three algorithms (SPEA2, NSGA-II and NSGA-III). Moreover, in ZDT3 and ZDT4, both versions of MESH visually obtain better results when compared to SPEA2 and NSGA-III (in ZDT3), and SPEA2 and NSGA-II (in ZDT4). In ZDT6, a visual analysis is not trivial to perform. On the other hand, in Figure 8, we can see that the tested algorithms are close to the analytical Pareto front.

Figure 7: Combined Pareto front for the MESH versions, SPEA2, NSGA-II, NSGA-III, and MOEA/D for the problem tests described in ETHZ (2020). In all graphs, the horizontal axis represents the objective function "F1" while the vertical axis is the objective function "F2". The analytical Pareto front is labelled as "Baseline". The number of decision variables has been set to 5 for all functions.

Figure 8: Combined Pareto front for the MESH versions, SPEA2, NSGA-II, NSGA-III, and MOEA/D for the problem tests described in Deb et al. (2002b). In all graphs, the horizontal axis represents the objective function "F1" while the vertical axis is the objective function "F2". The analytical Pareto front is labelled as "Baseline". The number of decision variables has been set to 10 for all functions.

In DTLZ1, only the E2V2D1 version finds a competitive set of solutions. In the DTLZ4 function, only E1V1D1 provides solutions and, visually, SPEA2 presents the set furthest from the analytical result. Graphical analysis can be a good indicator of the results obtained by the algorithms; however, a statistical test needs to be done to compare the algorithms' performance. Using the hypervolume as a performance index, an ANOVA is applied to compare the algorithms over their 30 runs. If the ANOVA states that there is a statistical difference between the hypervolume means of the algorithms, the Tukey test is applied to simultaneously assess all pairwise comparisons and to identify any difference between two means that is greater than the expected standard error (Montgomery, 2012). Table 3 shows the hypervolume results (mean and standard deviation). The algorithm with the superior performance is indicated in bold for each problem. The mean and standard deviation values are preliminary measures, but in many cases they are not sufficient for a more effective analysis of the results. Thus, an ANOVA test is once again performed, aiming to find possible differences between the means. Using a significance level of 5%, a p-value below 0.05 is found, indicating that there is a difference among the means. Thus, a Tukey test is carried out to identify where the differences between the samples are.
The ranking provided by the Tukey test is also shown in Table 4. The results indicate that both MESH versions obtain competitive results, since they are ranked in first place together with the MOEA/D algorithm in ZDT1, ZDT2, and ZDT3. MESH is the best algorithm in ZDT4, and ties with NSGA-II, NSGA-III, and SPEA2 in ZDT6.
Regarding the DTLZ functions, we can note that the E2V2D1 version of MESH is as efficient as NSGA-II, NSGA-III, and MOEA/D in DTLZ1. In DTLZ2, MOEA/D has the best result compared to the others. MESH, with E1V1D1, shows a significant difference in relation to the others in DTLZ4. Finally, in DTLZ7, SPEA2 is more efficient, covering a better set of solutions for this problem. Therefore, MESH is able to obtain significant results in six of the nine well-known benchmark functions tested in this work. In a general analysis, it is possible to say that MESH is a competitive algorithm when applied to continuous problems such as the ZDT and DTLZ benchmark functions.
Electric dispatch simulation in cascade HPPs - an OMRS scenario
In this section we analyze the performance of the proposed MESH algorithm on a real electric dispatch problem in a cascade operation with multiple reservoirs. The same experimental methodology described in Section 4.1 is employed in this case. The MESH configurations, E1/V1/D1 and E2/V2/D1, are compared to the standard NSGA-II, SPEA2, MOEA/D, and NSGA-III versions. The experimental setup is divided into the following parts:
1. to assess the MESH performance in solving the electric dispatch problem in a cascade operation with multiple reservoirs, we run the simulation model. The proposed meta-heuristic is compared with the other algorithms, all of which have been set up taking into account the structures and characteristics of the real application problem studied in this work. The obtained results are analyzed using the same statistical inference methodology proposed in the preliminary experiment; and
2. to analyze the results found by MESH when solving the electrical dispatch problem, highlighting the positive impact of using MESH as a power production control system.
Simulation modelling in OMRS scenario
To guide an optimal operation of cascade reservoirs and to take full advantage of the capacity benefits of HPP stations, the mathematical model is established based on the principles of (1) maximizing the power production and (2) maximizing the reservoir volume of the cascaded HPPs. In our approach, the spatial coupling of an HPP energy system with two cascade reservoirs is considered. The cascade system used for the simulation is composed of an HPP "U1" with a maximum capacity of 528 MW/h consisting of 8 turbine-generator units, and another HPP "U2", downstream from U1, with a maximum capacity of 396 MW/h and 6 installed turbine-generator units.
The reservoirs of the two HPPs are identical, with a maximum volume of 19528 hm³ and a minimum volume of 4250 hm³. The initial volume of both reservoirs is 80% of the maximum, which represents a robust scenario in which there is good availability of water in the reservoir and the height of the hydraulic head guarantees a good yield for the generating units. In this work, a restarting strategy is used to address the dynamic optimization inherent in the proposed model: whenever the model changes over time, a new optimization is performed. The experiments carried out to validate the model use a time interval of one hour, over 24 hours, as shown in Figure 9.
In the simulation, each iteration receives two types of input variables. The set of static variables is defined before the start of the simulation, and their values are independent between iterations. Dynamic variables are transmitted from one iteration to the next. From the combined Pareto front of 30 runs, the most central solution of the set is used in the next iteration. From this solution, the states of the reservoirs and the defluent flows are transmitted to the next iteration as dynamic input variables. The dynamic power generation system adopted is shown in Figure 10. As there is no HPP upstream of U1, the terms of defluent flow (Qt_{w,td} and Qv_{w,td}) are null in the U1 water balance. In the water balance of U2, on the other hand, the time taken to move water between U1 and U2 is td = 2 hours. In this simulation system, the coefficients adopted for the upstream, downstream and production efficiency polynomials are, respectively:
• a_0 = 5.30E+02, b_0 = 5.15E+02 and ρ_0 = 1.46E−01;
• a_1 = 6.30E−03, b_1 = 1.61E−03 and ρ_1 = 1.80E−02;
• a_2 = −4.84E−07, b_2 = −2.55E−07 and ρ_2 = 5.05E−03;
• a_3 = 2.20E−11, b_3 = 2.89E−11 and ρ_3 = −3.52E−05;
• a_4 = −3.84E−16, b_4 = −1.18E−15, ρ_4 = −1.12E−03 and ρ_5 = −1.45E−05.
The limits of the defluent flow rates are defined in the interval [400, 2500] m³ × s⁻¹. Table 5 shows the affluent flow rate Qa (m³ × s⁻¹) and the requested power demand Dm (MW) for both HPPs in cascade mode operation within 24 hours.
Result analysis and discussion
In our experimental design, the first iteration of hourly demand for each algorithm uses dynamic variables, as do the other iterations. Each algorithm is executed 30 times. The algorithm parameters have been set as in the preliminary experiment described in Section 4.1.2. The Pareto fronts are combined and the dominance operation is performed to generate a final Pareto front. The most central solution of the set is used as input for the next hour of energy generation. Figure 11 shows the combined Pareto front for some simulation hours, including the area that delimits the region dominated by the solution used in the usual control dispatch mode (UCDm, in which the demand is divided equally among the turbine-generators) of the HPPs. In the first hour of the simulation, h = 0, the MESH front with the E2V2D1 configuration is the furthest from the origin, suggesting that this configuration generates better solutions. In addition, E2V2D1 is the only algorithm that does not have any points dominated by UCDm. It is noted that NSGA-II shows a Pareto set containing a number of diverse solutions. MOEA/D presents a set of solutions that is not capable of efficiently contemplating the objective of maximizing the volume of the reservoirs (F2). E1V1D1 presents a diversified Pareto set which visually dominates the SPEA2 and NSGA-III solutions. Like MOEA/D, NSGA-III struggles to find a set of solutions that meets the two conflicting goals simultaneously: the maximization of productivity (F1) and of the volume of the reservoirs (F2).
From hour 0 to 17, the Pareto fronts of the algorithms follow a pattern: the solutions produced by the E2V2D1 configuration are the most distant from the origin; MOEA/D and NSGA-III maintain dispersed sets until the 5th hour (MOEA/D) and the 8th hour (NSGA-III); they are followed by NSGA-II, then SPEA2 and finally the E1V1D1 configuration, as exemplified in Figure 11. It is possible to notice that, after the 5th hour, MOEA/D is no longer able to provide solutions. After the 18th hour, due to the increase in the energy demand of the HPPs, the feasible search space is reduced, and the algorithms thus have greater difficulty in generating a complete Pareto set. This fact is explained by the increase in required production of approximately 100 MW/h between the 17th and 18th hours (see Table 5). However, we emphasize that only E2V2D1, SPEA2, NSGA-II and E1V1D1 find solutions that can be used by the system dispatch control.
Except for E2V2D1, all algorithms generate solutions dominated by UCDm in the daily control of the system operation. The points found by E2V2D1 are more advantageous than those of the other algorithms in terms of keeping a high reservoir level. As we are proposing a new cascade dispatch model, the optimal Pareto set of this real problem is unknown. Once again, we have used the hypervolume metric (Zitzler et al., 2001) to assess the algorithms' performance. Note that the first hour of the simulation is the only iteration in which all the algorithms have the same initial states and, therefore, optimize the model under identical conditions. Figure 12 shows the hypervolume boxplot of the first hour of generation for all algorithms.
Boxplots are not only useful to analyze the range and distribution of the data; they can sometimes also provide information about the true difference among the means. If the notches of the boxplots do not overlap, it can be concluded, with 95% confidence, that the true means differ. Keeping that in mind and observing Figure 12, it is possible to conclude that:
• there are differences among the true means of the algorithms;
• it is not possible to conclude whether there is a statistically significant difference between the true means of the E1V1D1, NSGA-II, and NSGA-III algorithms.
To statistically assess the difference in performance of the tested algorithms, an ANOVA with a 5% significance level is applied. With a p-value < 0.05, it is possible to state that there is a statistically significant difference between the algorithms' means. In sequence, the Tukey test is applied, indicating which specific group means (compared with each other) are different. Figure 13 shows the result of the Tukey test, confirming that MESH with the E2V2D1 configuration generates solutions with larger hypervolume values, indicating a superior performance.
The same assessment has been made for all hours of the daily schedule. The MESH version E2V2D1 achieves the highest hypervolume results, differing statistically from MOEA/D, SPEA2, E1V1D1, NSGA-II, and NSGA-III. The experimental results show that the proposed MESH is able to control the operation of a large multi-reservoir system, producing power successfully. MESH demonstrates effectiveness comparable to or better than that of standard algorithms from the literature. MESH has complied with all constraints imposed by the electric dispatch problem, and it constitutes a safe approach to the operation.
Next, we aim to verify the electrical significance of the MESH solutions. For that, the central Pareto front solution of each hour is selected, since it represents a compromise between the two objectives: (1) maximizing the power production and (2) maximizing the reservoir volume. Figure 14 shows the ability of MESH to produce power while respecting the constraints and saving water in the daily operation of the HPP with eight turbine-generators. It is important to note that this plant is operating under low demand. Even so, MESH is able to obtain optimized flows capable of saving water resources during power production. Figure 15 exemplifies the power generated by the eight power units (turbine-generators) and the efficiency obtained by each unit in the daily electric dispatch. We can see in Figure 15 that MESH, as an electric dispatch control, is able to produce power while respecting the boundaries, since the power generated stays between 35 and 60 MW/h for each turbine-generator unit.
It is also possible to note that the plant works at a good efficiency, with each unit reaching between a 91% and 93% yield. We see that the closer the plant operates to its nominal demand, the greater the efficiency, and thus the greater the water savings. The central solution from the Pareto front also represents the results of the HPP that is downstream of U1. Figure 16 shows the total power generated and the total water discharge used by MESH in HPP U2. Figure 17 exemplifies the power generated by the six turbine-generators and the efficiency obtained by each unit in the daily electric dispatch. As we can see in Figure 17, the production profile of the HPP downstream of U1 is slightly distorted. At HPP U2 the power demanded is even lower, so that for a long period the plant produced energy at 50% of its nominal capacity. However, MESH is able to find optimized dispatches that satisfy the important constraints of the problem. We can see that all six generating units operate between 91% and 93% of production capacity. These positive results show that MESH is capable of operating plants in a multi-reservoir system scenario.

Figures 14 and 16 (captions): The left side shows the total power generation; the right side shows the total water discharged. The line represents the small error in power production and the water savings, respectively. Legend: Usual WD (water discharge) means the operation used in the usual electric dispatch control; Optimized WD (water discharge) is the water discharge provided by the MESH control operator.
MESH respects all the constraints imposed on the problem, guarantees the optimal dispatch for the cascade system, carries out the water balance while maximizing the volume of water in the reservoirs, and operates the generating units at a high level of efficiency. In order to demonstrate the MESH efficiency as a control system for the electrical dispatch operation, the data report is available in Table 6. For the solution obtained using MESH, the water flow savings compared with the HPP usual control dispatch mode (UCDm) are around 73.57 m³/s for U1 and 19.24 m³/s for U2 in the daily dispatch. Expanding these results, this is equivalent to saving approximately 264.8 million liters of water in U1 and 69.3 million liters in U2 using the optimization obtained by the MESH approach. The energy production achieved by MESH, in which all the turbine-generator sets work at good capacity (between 91% and 93%) in the U1 and U2 power plants, means a percentage gain in electrical production of 0.15% due to the water savings. In practice, according to the plant's production manager, a percentage of 0.1% generates a monetary profit of $275,000 a month; since 0.15% is 1.5 times this reference, MESH can achieve a monetary profit of around 1.5 × $275,000 = $412,500 per month for the cascade system, providing 14.91 GW over the operation. The choice of the solution for this analysis is entirely empirical; however, it exemplifies that the set of Pareto-optimal solutions found by MESH is efficient in practical terms of electricity production in the Brazilian scenario.
Conclusions and final remarks
In this paper we have proposed a novel hybrid algorithm for multi-objective optimization, the Multi-objective Evolutionary Swarm Hybridization (MESH). This new optimizer can be used to address problems with conflicting or competing objectives. The guide, non-dominance and crowding-distance operators are the main features introduced in MESH to make it a multi-objective algorithm, together with some novel characteristics inherited from Differential Evolution, which improve the search capabilities of the algorithm. Several tests on different benchmark problems have been conducted to choose the best algorithm configuration for MESH. The MESH approach, in two different versions, has shown competitive results on the ZDT and DTLZ benchmark problems when compared to the state-of-the-art algorithms SPEA2, NSGA-II, MOEA/D and NSGA-III. Furthermore, the results obtained after applying MESH to OMRS, a real-world electrical dispatch problem, are statistically robust and indicate a superiority of MESH over the other well-established MOEAs.
Regarding the electrical dispatch in cascade mode operation, the results show that the proposed mathematical modelling is capable of making the generation system more efficient, with projected water savings of hundreds of millions of liters per day. The simulations have shown that the MESH configurations are sensitive to the problem being optimized. The best MESH version for solving the electric dispatch in cascade operation is E2V2D1. Thus, when the swarm guide is obtained from a particle on the upper frontier relative to the actual Pareto front and the sampling vector is extracted from the memory, MESH works effectively as an electric dispatch controller of cascading plants. The MESH solution is able to generate a profit of approximately $412,500 per month. As future work, a technique for choosing the solutions to be used in this dynamic model can be adopted. Such an approach would allow real-time decision making, so that, every hour, a Pareto solution is chosen as an input for the next generation in order to produce better solutions. In the end, the amount of water saved in the generation could be even larger.
"year": 2021,
"sha1": "6db23aacb8e036b2e8967b23be770611318f8220",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6db23aacb8e036b2e8967b23be770611318f8220",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
Singularities in Spherically Symmetric Solutions with Limited Curvature Invariants
We investigate static, spherically symmetric solutions in gravitational theories which have limited curvature invariants, aiming to remove the singularity of the Schwarzschild space-time. We find that if we only limit the Gauss-Bonnet term and the Ricci scalar, then the singularity at the origin persists. Moreover, we find that the event horizon can develop a curvature singularity. We also investigate a new class of theories in which all components of the Riemann tensor are bounded. We find that the divergence of the quadratic curvature invariants at the event horizon is avoidable in this theory. However, other kinds of singularities, due to the dynamics of additional degrees of freedom, cannot be removed, and the space-time remains singular.
I. INTRODUCTION
The space-time singularity is one of the most important signs that Einstein gravity has to be modified at high energies. The singularity theorems [1-3] state that space-time singularities are inevitable in Einstein gravity provided that gravity is coupled to matter which obeys energy conditions that are natural from the point of view of classical physics (there are some additional technical assumptions which are automatically satisfied in the symmetric space-times we are considering). There are many arguments supporting the view that the Einstein action can only be a low energy effective theory for gravity. First, it is not a renormalizable theory, and hence cannot yield a consistent quantum theory in the ultraviolet. Gravitational interactions will inevitably lead to higher curvature correction terms in the action. Similarly, gravitational interactions of matter fields will lead to correction terms in the effective action for gravity. It is a long-standing hope that curvature singularities will be removed in a consistent quantum theory of gravity. Specifically, one could hope that the two most famous gravitational singularities, the Big Bang singularity of homogeneous and isotropic cosmology and the Schwarzschild singularity at the center of a spherically symmetric black hole metric, will be removed in a complete theory of quantum gravity.
In this paper, we will explore the question of singularity removal at the level of modified effective gravitational actions. If we were able to construct a gravitational theory without singularities, it would provide a candidate for the effective description of a consistent theory of quantum gravity.
In the context of cosmology, various scenarios to obtain a nonsingular Universe have been investigated. Inflation was initially proposed as a candidate for a nonsingular cosmology [4]. The simplest way to obtain an inflationary cosmology is to maintain the Einstein gravitational action and to assume the presence of a scalar field whose potential energy can lead to almost exponential expansion [5]. However, it was shown that such a scalar field-driven inflationary universe has an initial time singularity [6,7] if the scalar field matter satisfies the null energy condition. It was also shown that an inflationary Universe which is described by the usual spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) coordinates is past incomplete [8], and hence singularity freeness cannot be discussed by restricting attention to this coordinate region.
Nonsingular cosmological background space-times (which might even lead to alternatives to inflation as a theory of cosmological structure formation, e.g. the "matter bounce" scenario [9]) have been constructed in the context of Einstein gravity by invoking matter which violates the null energy condition. There are models with a cosmological bounce (see e.g. [10][11][12][13][14][15] for reviews on bouncing cosmology), or "genesis" models such as Galileon Genesis [16][17][18][19][20][21][22][23]. However, a generic instability for non-singular bouncing solutions was proven in Refs. [24,25] in a class of scalar-tensor theories, the so-called Horndeski theories [26][27][28] and its multi-field extensions [29]. Stable non-singular solutions have then been investigated in the framework of scalar-tensor theory [30] which goes beyond the framework in which the assumptions of the no-go theorems have been derived, and it also goes beyond the usual effective field theory approach to gravity [31][32][33].
Another way to obtain a non-singular cosmology is to consider higher curvature corrections [34,35] as in the Starobinsky model [4] of inflation. An example of such a higher derivative gravity model aiming to remove the singularity is the infinite derivative gravity model of [36][37][38], where the theory includes all powers of derivatives of the Ricci scalar. In this paper we would like to focus on another possibility of obtaining a non-singular gravitational theory with higher curvature terms which was proposed in Refs. [39,40], and called the "limiting curvature construction". It is a gravitational theory in which extra terms are added to the Einstein action with the purpose of limiting certain curvature scalars. The idea of the construction is to limit one scalar curvature polynomial to finite values by introducing a Lagrange multiplier scalar field and adjusting its potential. In this way, we can limit any number of curvature scalars by introducing the corresponding number of Lagrange multiplier scalar fields. However, the difficulty comes from the fact that there are an infinite number of curvature polynomials. Thus even if we ensure that a finite number of curvature polynomials, e.g. R and R µν R µν , have finite values, other curvature polynomials, e.g. R µνρσ R µνρσ , could possibly diverge. Thus, the choice of which curvature polynomials to bound is very important.
In the case of homogeneous and isotropic space-times, since the Riemann tensor is given by the Hubble function H and its derivative Ḣ, the finiteness of the Riemann tensor is ensured if we control these two quantities. However, this is not sufficient to remove all singularities. It is possible to have geodesically incomplete space-times where no curvature invariant blows up. The idea in [39,40] was to adjust the Lagrange multiplier construction such that at high curvature the cosmological solutions approached a known non-singular solution, namely de Sitter. Non-singular cosmological solutions based on the limiting curvature construction have been investigated in Refs. [39][40][41]. The background dynamics of a contracting Universe was first studied in Refs. [39,40] and then that of an expanding Universe corresponding to inflationary and genesis scenarios was studied in Ref. [41]. It was also shown that cosmological solutions are stable in a wide region of cosmological history.
If the limiting curvature theories are to give a good guide to the ultimate quantum theory of gravity, they should not only work well for cosmological situations, but also be able to remove other kinds of singularities appearing in Einstein gravity such as the Schwarzschild singularity. The first example of a non-singular black hole space-time was given by Bardeen as a solution of the Einstein-Maxwell theory (see Ref. [42] for a review of Bardeen's model and other non-singular black hole solutions). Motivated by the recent developments in modified theories of gravity, non-singular spherically symmetric solutions have been investigated also in the context of modified gravity, for example in F (R) gravity with an anisotropic fluid [43] and in mimetic gravity [44,45]. Since the limiting curvature construction prevents the divergence of curvature invariants, it is natural to expect that spherically symmetric solutions of these theories might be non-singular. In fact, a non-singular black hole solution in the 1+1 dimensional space-time in the limiting curvature theory was obtained in Ref. [46]. However, it was never clarified whether in this construction the Schwarzschild singularity can be removed in 1+3 dimensional space-time. The purpose of this paper is to study whether the 1+3 dimensional Schwarzschild singularity can be removed in a theory with limiting curvature invariants. We will hence investigate static, spherically symmetric solutions with various choices of controlled curvatures and potentials of the scalar Lagrange multiplier fields.
Our paper is organized as follows. In the next section, we will review the limiting curvature construction of [39,40] and propose another class of theories where each component of the Riemann tensor is controlled. In Section III, we will investigate static, spherically symmetric solutions in a theory with bounded Gauss-Bonnet term, which is a ghost-free subclass of the limited curvature theories. We will find that two kinds of singularities remain: one is the Schwarzschild singularity and the other is dubbed a thunderbolt singularity. The appearance of the Schwarzschild singularity can be understood since the construction does not bound all curvature polynomials. Next, we investigate a theory in which both the Ricci scalar and the Gauss-Bonnet term are limited (Section IV). However, we will find that the Schwarzschild singularity still cannot be removed. In Section V, we then investigate a theory in which all Riemann curvature tensor elements are bounded. There we will succeed in removing the divergence of the quadratic curvature scalars. However, we will find other kinds of singularities due to the additional degrees of freedom generated by higher derivative interactions. The final section contains a summary of our results and discussions on the difficulty of obtaining non-singular spherically symmetric solutions using the limiting curvature construction.
II. GRAVITATIONAL THEORY WITH LIMITING CURVATURES
Let us review the gravitational theory with limiting curvature scalars proposed in Refs. [39,40]. The action of this theory is specified by the Lagrangian density (2.2), where M pl is the reduced Planck mass, g µν is the space-time metric, R is the Ricci scalar of the space-time and I i are dimensionless scalar curvature polynomials constructed from the Riemann tensor R µ νρσ and its covariant derivatives. Here, we introduced only a single dimensionful parameter M L just for simplicity. Note that it is natural to expect M L = O(M pl ) if we regard the origin of the modification terms in the action as a quantum effect of gravity. This theory includes n dimensionless Lagrange multiplier scalar fields χ i and their potential term V (χ i ), which play an important role in limiting the curvature scalars I i . From the variations with respect to χ i we obtain the equations (2.4). If we use a potential whose derivatives are finite for all field values of χ i , only solutions with finite curvature scalars I i are consistent with the equations of motion. Thus we can eliminate any curvature singularity where one of the curvature scalars I i diverges. However, since there are an infinite number of curvature scalars constructed from R µ νρσ and their derivatives, it is still nontrivial whether curvature scalars other than I i are finite or not. For example, if we consider a theory with n = 1 and I 1 = R, then the Schwarzschild singularity would remain because the Ricci scalar vanishes for Schwarzschild.
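The displayed equations were lost in extraction here; a schematic reconstruction consistent with the surrounding prose (the overall signs and the normalization of the M L factors are our assumptions) is

\[
S=\int d^4x\,\sqrt{-g}\,\mathcal{L},\qquad
\mathcal{L}=\frac{M_{\rm pl}^2}{2}\Big[R+M_L^2\Big(\sum_{i=1}^{n}\chi_i I_i-V(\chi_1,\dots,\chi_n)\Big)\Big],
\]

whose χ i variation yields I i = ∂V/∂χ i , i.e. the content of Eq. (2.4): bounded potential derivatives force bounded curvature invariants I i .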
A guideline for the choice of the bounded curvature scalars I i was proposed in [39,40] and called the limiting curvature hypothesis. The idea is to find some invariant I which has the property that I = 0 has only a definite class of non-singular space-times (e.g. de Sitter space-times) as solutions, and then to choose the potential for the Lagrange multiplier field associated with I such that at high curvatures I is driven to zero. More generally, the idea was to force the solution to approach a well-defined nonsingular space-time when all curvature invariants I i take their limiting values corresponding to χ i → ∞. For example, in the case of homogeneous and isotropic FLRW space-time, the Riemann tensor is given by the Hubble function H and its time derivative Ḣ. Thus the assumption of the limiting curvature hypothesis is satisfied if we control two curvature scalars I 1 | FLRW ∝ H, I 2 | FLRW ∝ Ḣ by a potential that satisfies V ,χ1 → const and V ,χ2 → 0. As investigated in Ref. [41], such curvature scalars are realized in terms of R µν and its covariant derivatives. However, this choice of curvature invariants does not work for vacuum solutions like Schwarzschild because R µν vanishes in the Schwarzschild space-time. Thus for our purpose, which is to remove a curvature singularity in a spherically symmetric space-time, we need to consider other curvature scalars that prevent the divergence of R µνρσ R µνρσ . We will investigate this kind of theory in Sections III and IV. Note that as soon as we abandon the assumption of homogeneity and isotropy, the dynamical system becomes much more complicated since the equations are now true partial differential equations. Hence, we should expect that it is more difficult to prevent singularities.
It should be noted that if the equations (2.4) can be solved for χ i , one can eliminate χ i from our action just by plugging in these solutions. Then we obtain a pure metric theory including higher derivatives, Eq. (2.6), in which F is given as the Legendre transformation of V. Thus the theory with limited curvature can be regarded as a higher curvature modification of Einstein gravity.
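Concretely, and with the same caveat that the elided equations are being reconstructed here, eliminating the multipliers gives the Legendre-transform structure

\[
\mathcal{L}=\frac{M_{\rm pl}^2}{2}\left[R+M_L^2\,F(I_1,\dots,I_n)\right],\qquad
F(I)=\sum_i\chi_i(I)\,I_i-V\big(\chi(I)\big),
\]

where χ i (I) solves I i = ∂V/∂χ i ; this is the sense in which (2.6) is a higher curvature metric theory with F the Legendre transform of V.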
For example, the limiting curvature theory with n = 1 and I 1 = R/M 2 L corresponds to F (R) gravity. Before closing this section, let us suggest a way to limit the curvature without assuming any particular symmetry of space-time. This would be complicated to achieve in the framework of the theory (2.2), but it is easily realized if we consider a slightly modified theory which can be called a gravitational theory with limited curvature tensor. Here a tensor field χ µνρσ is introduced instead of scalar fields χ i . V (g µν , χ µνρσ ) is a scalar function of χ µνρσ , which controls the Riemann tensor. Variation with respect to χ µνρσ gives the equations (2.9). Then we assume the limiting form (2.10), with a constant κ, at the limiting values χ µνρσ → ∞. Since the right hand side of (2.10) is nothing but the Riemann curvature of a constant curvature space, which is (anti) de Sitter space-time for positive (negative) κ or Minkowski space-time for κ = 0, we know that the solution will approach a non-singular space-time at limiting values of the Lagrange multiplier fields - a conclusion which holds without assuming any special symmetry. However, it is not clear that the asymptotic region can be reached without encountering singularities, singularities which would be different from curvature singularities. We will investigate the spherically symmetric solutions of this kind of theory in Section V. Let us introduce trace and traceless parts of χ µνρσ in analogy with the definitions of the Ricci tensor, the Ricci scalar and the Weyl tensor. Then, by introducing the traceless parts of R µν and χ µν through Eq. (2.12), our action can be written in the form (2.14). Thus, the variations with respect to the traceless parts of χ µνρσ and χ µν and the trace χ give the equations limiting the curvature tensors. Similar to theories with limited curvature scalars, we can write this theory in the form of a pure metric theory. In this case, we obtain so-called F (Riemann) gravity [47], Eq. (2.18), where F is a scalar constructed from g µν and the Riemann tensor R µνρσ , related to V via a Legendre transformation in which χ µνρσ (R) is defined as a solution of (2.9). Note that the equivalence between (2.8) and (2.18) holds only when the equation (2.9) can be solved for χ µνρσ .
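The limiting form (2.10) is described as the Riemann tensor of a constant curvature space; its standard expression, quoted for completeness, is

\[
R_{\mu\nu\rho\sigma}\;\longrightarrow\;\kappa\left(g_{\mu\rho}g_{\nu\sigma}-g_{\mu\sigma}g_{\nu\rho}\right),
\]

i.e. de Sitter space-time for κ > 0, anti-de Sitter for κ < 0 and Minkowski for κ = 0.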
III. SPHERICALLY SYMMETRIC SOLUTION WITH LIMITING GAUSS-BONNET TERM
A. Ghost Free Higher Derivative Gravity with Riemann Square Invariants

As we have seen in the previous section, a theory with limited curvature scalars can be written in the form of a higher derivative gravitational theory (2.6). In general, higher derivative gravity models have pathological ghost degrees of freedom [35]. The presence of ghosts in higher derivative theories can be shown exactly in the case of unconstrained systems. This is known as Ostrogradsky's theorem [48]. However, there is room to construct ghost-free higher derivative theories in constrained or gauge systems such as gravitation. The simplest example of a ghost-free theory is F (R) gravity [49,50]. It was then shown that ghost-free higher derivative theories can be constructed even if the covariant derivative of R is included [51]. However, these theories are not suitable for the purpose of eliminating the Schwarzschild singularity because they allow us to limit only the Ricci scalar R and its derivatives and cannot limit R µνρσ R µνρσ , which blows up near the Schwarzschild singularity. Thus we need to consider a higher curvature theory which includes at least the Riemann square invariant. Note that a non-singular spherically symmetric solution is obtained in the framework of F (R) gravity in the presence of an anisotropic fluid [43]. We will not focus on such a case simply because the mechanism to avoid the singularity has nothing to do with our limiting curvature mechanism, as discussed above.
An example of a ghost-free higher derivative gravity with a Riemann square term is proposed in the appendix of Ref. [28]. There, it was shown that an F (Gauss-Bonnet) term is equivalent to a subclass of ghost-free scalar-tensor theories called Horndeski theories [26]. Let us consider Einstein gravity with an F (Gauss-Bonnet) term (3.1), where the Gauss-Bonnet term is given by G = R 2 − 4R µν R µν + R µνρσ R µνρσ . By comparing the action (3.1) with (2.6), we conclude that this is a theory with limiting curvature scalar I 1 with n = 1 and I 1 = G/M 4 L . Therefore this theory can be written in the form (3.3) of the original limiting curvature theories. Now the Gauss-Bonnet term is controlled by the potential V through the variational equation (3.4) with respect to χ. Since the Gauss-Bonnet term includes the Riemann square term, which diverges at the Schwarzschild singularity, one may hope that the curvature singularity could be relaxed by forcing G to be finite.
B. Spherically symmetric, static, asymptotically flat solutions
Let us consider static spherically symmetric solutions of this theory (3.3). The dynamical variables are the metric tensor g µν and a single Lagrange multiplier field χ. Given the assumption of spherical symmetry, g µν and χ can be written in the form (3.5), where dΩ 2 is the metric on the sphere. Then the Ricci scalar and the Gauss-Bonnet term can be written as (3.8) and (3.9), where ′ represents the derivative with respect to r. Making use of these expressions, we can write down the action in terms of f, h and χ. Then, taking the variation with respect to f and h, we obtain the corresponding equations of motion. The final equation of motion results from varying with respect to χ and is given by (3.4), with the Gauss-Bonnet term given by Eq. (3.9). In order to limit the Gauss-Bonnet term, we need to use a potential V whose χ derivative V ,χ is finite. As an example of such a potential, here we shall focus on the potential (3.12), whose first derivative is finite for any χ. Hence, the Gauss-Bonnet term G is finite through Eq. (3.4).
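The explicit ansatz (3.5) and potential (3.12) are not reproduced above. A metric form consistent with how f and h are used throughout (in particular with the relation f h = 1 holding for Schwarzschild, where f vanishes and h diverges at the horizon) is

\[
ds^2=-f(r)\,dt^2+h(r)\,dr^2+r^2\,d\Omega^2,\qquad
d\Omega^2=d\theta^2+\sin^2\theta\,d\varphi^2,
\]

with Schwarzschild given by f = h^{-1} = 1 − 2GM/r. As an illustrative stand-in for (3.12) (not necessarily the authors' choice), one could take V(χ) = χ^2/(1 + χ^2), for which V ,χ = 2χ/(1 + χ^2)^2 is finite for all χ and V ,χ ≈ 2χ for χ ≪ 1, so Eq. (3.4) ties small χ to small G/M 4 L as described below.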
First, let us focus on the region χ ≪ 1. There our potential (3.12) can be expanded for small χ, and the equation of motion (3.4) then gives a relation between χ and G. Thus, the condition χ ≪ 1 corresponds to G ≪ M 4 L . In this region the correction terms compared to Einstein gravity can be omitted and then the Schwarzschild space-time is a solution. For the Schwarzschild space-time with mass M, the Gauss-Bonnet term can be evaluated as G = 48(GM ) 2 /r 6 , which defines the scale r L at which G reaches M 4 L ; here G (in GM) is the gravitational constant given by G −1 = 8πM 2 pl . Thus the condition χ ≪ 1 is equivalent to r L ≪ r. The ratio of r L to the Schwarzschild radius r g = 2GM is extremely small when M L = O(M pl ) and M is of the order of the solar mass. Thus r L ≪ r g for the realistic situation. In the region χ ≪ 1, the correction from the Schwarzschild solution can be calculated perturbatively by assuming a 1/r series expansion of f, h and χ; the next-to-leading order corrections are given by Eqs. (3.20) - (3.22). Since the perturbative approach is only valid for χ ≪ 1, it is difficult to solve the equations of motion beyond χ ∼ 1 analytically. We will solve them numerically by using (3.20) - (3.22) as the boundary conditions at some r ≫ r L . In order to see the effects of our modification, let us consider the case with r g ∼ r L , which corresponds to an asymptotically Schwarzschild solution with a very small mass. As we will see below, the behavior of the solution for r g < r L is different from that for r L < r g . Let us investigate each case separately.
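The elided expressions can be cross-checked against the quoted numbers if one assumes that r L is defined by G(r L ) = M L 4 . For Schwarzschild, R = R µν = 0, so the Gauss-Bonnet term equals the Kretschmann scalar:

\[
\mathcal{G}(r)=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=\frac{48\,(GM)^2}{r^6}
\quad\Longrightarrow\quad
r_L=48^{1/6}(GM)^{1/3}M_L^{-2/3}\simeq 1.91\,(GM)^{1/3}M_L^{-2/3}.
\]

This reproduces both parameter choices used below: M L = (GM)^{-1} gives r L ≃ 1.91 GM ≃ 0.95 r g (Model 1), while GM = (2M L )^{-1} gives r L ≃ 48^{1/6} 2^{2/3} GM ≃ 3.03 GM ≃ 1.51 r g (Model 2).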
Model 1: Numerical solution with r L < r g
First let us focus on the case r L < r g , where higher derivative corrections become significant inside the event horizon expected from the asymptotic Schwarzschild space-time. Concretely, we set the parameter to M L = (GM ) −1 . For this parameter, r L becomes r L ∼ 0.95r g < r g . The results of the numerical solution of the equations of motion with this parameter choice are shown in Fig. 1. In the numerical work, we have used the initial conditions (3.20) - (3.22) at r = 25r g .
From the plot, we find that the numerical calculation stops at r ∼ 0.76r g . At this point, f vanishes but h is finite. This point is the horizon. Its value has been shifted inwards by the addition of higher curvature terms. More importantly, it has become a singular surface in space-time. In order to clarify whether this point is a true singularity or an artificial singularity like a coordinate singularity, we plot the behavior of quadratic curvature scalars in Fig. 2. From Fig. 2, we find that the curvature scalars R, R µν R µν and R µνρσ R µνρσ all diverge at this point. Thus r ∼ 0.76r g is a true curvature singularity. Note that although each quadratic curvature scalar is infinite, the Gauss-Bonnet term, which is a particular combination of these curvature scalars, is finite as expected.
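To make the numerical strategy concrete, the following is a minimal Python sketch of inward integration with an automatic stop where f reaches zero. Since the modified equations of motion are not reproduced above, the sketch integrates the vacuum Einstein equations for the ansatz of the previous subsection as a stand-in (so it stops at the pure-GR coordinate horizon r = r g rather than at the r ≈ 0.76 r g of Model 1); all function and variable names here are ours.

from scipy.integrate import solve_ivp

r_g = 1.0  # work in units of the Schwarzschild radius r_g = 2GM

def rhs(r, y):
    # Vacuum Einstein equations for ds^2 = -f dt^2 + dr^2/u + r^2 dOmega^2,
    # written with u = 1/h:  u' = (1 - u)/r,  f'/f = (1 - u)/(r u).
    f, u = y
    return [f * (1.0 - u) / (r * u), (1.0 - u) / r]

def hits_horizon(r, y):
    # terminate once f has dropped essentially to zero
    return y[0] - 1.0e-6

hits_horizon.terminal = True
hits_horizon.direction = -1

r0 = 25.0 * r_g
f0 = 1.0 - r_g / r0  # Schwarzschild data at large r: f = u = 1 - r_g/r
sol = solve_ivp(rhs, (r0, 0.1 * r_g), [f0, f0], events=hits_horizon,
                rtol=1e-10, atol=1e-12)
print("integration stopped at r =", float(sol.t_events[0][0]))  # close to r_g here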
FIG. 2. Quadratic curvature scalars in Model 1
The reason for the appearance of a singularity can be understood as follows. In Einstein gravity, the Schwarzschild solution written in terms of Schwarzschild coordinates has a coordinate singularity at the event horizon r = r g , where f vanishes and h diverges while maintaining the constraint f h = 1. The relation f h = 1 ensures that f = 0 is not a physical singularity, as can be seen by using Eddington-Finkelstein coordinates. Once we include small correction terms in the gravitational action, the Schwarzschild solution is slightly modified. The important point is that, as one can see from Eqs. (3.20) and (3.21), the change in f is generally different from that of h −1 , which leads to the breakdown of the relation f h = 1 near the event horizon. Solutions with f h ≠ 1 lead to the event horizon of the original Schwarzschild space-time becoming a true curvature singularity as a consequence of the modification of the gravitational theory. This is the reason why our solution has a curvature singularity at a finite value of r. Since a similar singularity, called "thunderbolt singularity", was discussed in the context of the quantum effects in 1+1 dimensional space-time [52] and in Hořava-Lifshitz gravity [53], we also refer to the singularity we encounter here as a thunderbolt singularity.
Model 2: Numerical solution with r g < r L
The thunderbolt singularity might not appear when r g < r L because the effect of the correction terms becomes significant at radii larger than where the event horizon of the Einstein action solution would be. Hence, it is possible that the horizon f = 0 will not be reached (and hence the singularity associated with this point would not be present). To check our expectation, let us investigate the solution with the parameter choice GM = (2M L ) −1 , which corresponds to r L = 1.51r g > r g . The numerical solution is then given in Fig. 3. Now we can continue the solution to smaller radii without encountering a horizon, but the original Schwarzschild singularity at r = 0 still exists. In fact, it has become a naked singularity since it is no longer shielded by a horizon. The fact that the singularity at r = 0 is not removed should not be too surprising because the requirement that G is finite is not sufficient to remove the divergence of other curvature scalars like R, R µν R µν .
To summarize this section, we found that in a theory with bounded Gauss-Bonnet term there are two kinds of singularities which arise for spherically symmetric configurations, the thunderbolt singularity and the Schwarzschild singularity. The latter one could be removed by limiting other curvature scalars in addition to the Gauss-Bonnet term. However, we would have to go beyond the framework of known ghost-free higher derivative gravity models. Thus, to remove singularities with the limiting curvature mechanism would not be compatible with the ghost-free requirement. In the following sections, we will discuss singularity avoidance in a wider class of theories, setting aside the issue of ghosts.
IV. LIMITING BOTH RICCI SCALAR AND GAUSS-BONNET TERM
A. How to ensure the finiteness of quadratic curvature invariants

In the previous section, it was clarified that limiting only the Gauss-Bonnet term is not sufficient to remove the singularity at r = 0. Then, what is the condition to ensure finiteness of all quadratic curvature scalars at r = 0? Assuming the metric components are regular at r = 0, they can be expanded in a Taylor series with coefficients f m and h m . By plugging these expansions into Eq. (3.9), the Gauss-Bonnet term is given by an expansion whose two leading coefficients G 0 and G 1 are given by Eqs. (4.4) and (4.5). The requirement that G is finite at r = 0 gives only two conditions for f m and h m , namely G 0 = 0 and G 1 = 0, and these conditions are not sufficient to ensure that other curvature scalars are finite at r = 0. For example, the conditions G 0 = 0 and G 1 = 0 can be satisfied by appropriately choosing f 0 and f 1 . However, since the leading divergent term in the Ricci scalar, which is proportional to r −2 , comes from the first term in (3.8), it diverges unless h 0 = 1. This is the reason why the divergence at r = 0 appears in the framework of an F (G) theory. Let us now impose finiteness of R in addition to that of G. We can expand the Ricci scalar explicitly, and from the finiteness of R at r = 0 we obtain the conditions (4.7). Then, plugging these expressions into Eq. (4.4), we find f 1 = 0. Moreover, from the expression (4.5), we can confirm that G 1 also vanishes when the condition f 1 = 0, as well as (4.7), are satisfied. Without loss of generality, we can set f 0 = 1 by rescaling the time coordinate, so the metric components are regular expansions about r = 0 (here R̂ µν denotes the trace-free part of the Ricci tensor defined by Eq. (2.12), which appears among the quadratic invariants). To summarize, if we impose the finiteness of R and G, finiteness of all quadratic scalar curvatures at r = 0 is ensured as long as the metric is regular at this point.
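Assembling the quoted conditions (h 0 = 1, f 1 = 0, f 0 = 1), a plausible reconstruction of the regular small-r form of the metric is

\[
f(r)=1+f_2 r^2+\mathcal{O}(r^3),\qquad h(r)=1+h_2 r^2+\mathcal{O}(r^3),
\]

i.e. a locally flat origin; the vanishing of the odd coefficient h 1 is our assumption, by analogy with f 1 = 0. For such a metric, R, R µν R µν and R µνρσ R µνρσ indeed remain finite as r → 0.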
Model 3
In order to control both curvature scalars G and R, we have to include R as well as G in the argument of the arbitrary function F. This is called an F (R, G) theory, Eq. (4.14). In Ref. [41], it was shown that non-singular cosmological solutions can be obtained in this framework. However, F (R, G) theory generally includes ghost degrees of freedom, as can be explicitly seen by studying perturbations around Bianchi type I universes [54]. Here we pass over the ghost problem and focus only on the singularity problem. By comparing with (2.6), the theory (4.14) can be regarded as a limiting curvature theory with n = 2, and it can be written in the form (4.15). For simplicity we focus only on the case V = V 1 (χ 1 ) + V 2 (χ 2 ). Variation with respect to χ 1 and χ 2 gives the equations controlling R and G. Thus if we use potentials whose derivatives are finite for any value of χ 1 and χ 2 , the theory only has solutions with finite values of R and G.
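In the notation of Section II this corresponds to the identifications (with normalizations assumed, since the elided equations are not reproduced)

\[
I_1=\frac{R}{M_L^2},\qquad I_2=\frac{\mathcal{G}}{M_L^4},\qquad
R=M_L^2\,V_1'(\chi_1),\qquad \mathcal{G}=M_L^4\,V_2'(\chi_2),
\]

so bounded V 1 ′ and V 2 ′ directly bound R and G.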
Since we have not solved the problem which arises at a horizon if the condition f h = 1 is violated, the thunderbolt singularity could still exist. First, however, we shall focus only on the inside of the expected event horizon in order to see whether our mechanism to remove the Schwarzschild singularity works well or not. Thus we start the numerical calculations with the Schwarzschild boundary conditions at some value r < r g . We use potentials V 1 and V 2 whose derivatives are finite, so that V is finite for any values of the χ i fields. Then the analysis in the previous subsection implies that if χ i → ±∞ as r → 0 and if f and h −1 are regular there, the Schwarzschild singularity is removed. However, it is non-trivial to show that this limit will be reached. Other singularities could appear for finite values of the χ fields. In a similar way to what was done in subsection III B, we can derive the equations of motion by plugging the spherically symmetric ansatz for f, h, χ 1 and χ 2 into the action (4.15) and taking the variations with respect to each variable. The asymptotic Schwarzschild solution with mass M is then obtained as before, and we integrate the equations of motion numerically inwards (towards smaller values of r). Fig. 5 shows the numerical solutions for the parameter choice GM = 10M −1 L , starting with Schwarzschild boundary conditions at r = 0.95r g . For this parameter choice, r L is given as r L = 0.21r g . One can see that h −1 diverges in the limit r → 0. Thus, one of the assumptions made in Section IV A, which is that the metric components are regular at r = 0, is not satisfied. Therefore, the question of whether the quadratic curvature scalars remain finite is still nontrivial in this setting. However, from the numerical results we can compute these scalars. Fig. 6 presents the results for the quadratic curvature scalars: we found that R µν R µν diverges at r = 0. Therefore r = 0 is still a singularity and we conclude that the Schwarzschild singularity cannot be removed even if we bound R in addition to G.
V. GRAVITATIONAL THEORY WITH LIMITING RIEMANN TENSOR
A. How to obtain f h = 1

We have seen that there are two kinds of singularities which come, respectively, from the lack of limiting curvature on one hand, and the violation of the condition f h = 1 on the other (recall that the latter condition was crucial in showing that the horizon remains non-singular). In order to remove both singularities, we focus on theories that satisfy the following two conditions:

• The theory has a sufficient number of bounded curvature invariants to ensure the finiteness of all scalar curvatures up to quadratic order, namely R, R µν R µν and R µνρσ R µνρσ , at r = 0, in order to remove the Schwarzschild singularity.
• The theory admits only solutions which satisfy f h = 1. In this way, there is a chance to avoid the thunderbolt singularity.
The first requirement would be satisfied if we control all components of the Riemann tensor. This is realized if we consider the theory with limited Riemann tensor given by (2.8).
Let us then investigate how the second condition can be realized in a theory with limited Riemann tensor. We use the spherically symmetric ansatz (5.1) for χ µνρσ , which is compatible with the form of the Riemann tensor derived from our spherically symmetric metric (3.5).
Here the indices I and J run over θ and φ, and the indices a, b, c, d run over t and r. For later convenience we introduce the variables χ, ξ, ζ and B instead of A, B tt , B rr and C, where χ is the trace of χ µνρσ as defined by (2.11). Now the components of the traceless part of χ µνρσ are functions of ξ, and the components of the traceless χ µν are functions of ζ and B.
The equations of motion can be derived by plugging the spherically symmetric ansatz (3.5) and (5.1) into our action (2.8) and varying it with respect to f, h, χ, ξ, ζ and B. Since we defined A, B tt , B rr and C so that the scalar quantities constructed from χ µνρσ are independent of f and h, the potential V can be written as a function of A, B tt , B rr and C, or as a function of χ, ξ, ζ and B. An important equation comes from the B variation: it enforces f h = 1, where we fixed the ambiguity of the integration constant by redefining the time coordinate t. Thus, both challenges, of preventing the divergence of quadratic curvature scalars at r = 0, and of removing the thunderbolt singularity which arises when f h ≠ 1, are avoidable in the theory with limited Riemann tensor (2.8) with the potential (5.7). However, this does not guarantee that no other singularities emerge. To study this question we have to study the equations of motion in more detail.
B. Asymptotically Schwarzschild solution
Let us solve the equations of motion in the asymptotic region r → ∞. The remaining equations of motion are given by Eqs. (5.9) and (5.10), where A, B tt , B rr and C are regarded as functions of χ, ξ, ζ and B, as defined through (5.2) - (5.5). For simplicity, let us focus on the form of the potential given in Eq. (5.14). Then, from the expressions (5.11) - (5.13), one can see that the fields χ, ξ and ζ control R, C trtr and R̂ tt , respectively.
Assuming that the potentials have the appropriate expanded form for χ, ξ, ζ ≪ 1, the asymptotic Schwarzschild solution (5.18) - (5.22) can be obtained perturbatively, where GM and B 1 are arbitrary constants.
C. Reduction to first order differential equations

In order to solve the equations of motion numerically, let us reduce them to first order form. We can do this making use of the Hamiltonian formalism. Our equations of motion can be derived from the rescaled Lagrangian L = 2L/M 2 pl sin θ, in which we introduce a variable ∆ that vanishes exactly when f h = 1. We regard ∆ as one of the independent variables instead of h, and now we have 6 dynamical variables q I = {f, ∆, χ, ξ, ζ, B}. Let us consider the Hamiltonian (regarding r as a time coordinate). By defining conjugate momenta p I = ∂L/∂q I ′ as usual, we obtain two relations between the momenta and the first derivatives of the variables, together with four primary constraints (5.27) - (5.30). The total Hamiltonian of this system then contains these constraints multiplied by Lagrange multipliers. Since there are primary constraints (5.27) - (5.30) in this system, the variables q I and p I are not all independent.
Next we have to check the consistency of the constraints with the Hamilton equations. The r derivatives of C ∆ and C B can be calculated explicitly. The consistency equations for C ∆ and C B then fix two of the Lagrange multipliers, as given in Eq. (5.39), unless f = 0. Since the consistency equations for C ξ and C ζ do not include multiplier fields, they give two secondary constraints. Note that V 1 ′ denotes V 1,χ and not the r derivative of V 1 . The consistency equations for these secondary constraints take the form (5.42), involving functions F ξ and F ζ and a matrix M. Thus, if the matrix M has an inverse, namely if its determinant is not zero, then Eqs. (5.42) determine the remaining multipliers, as given in Eq. (5.47), and no more constraints appear. Now we have 6 constraints C ∆ , C ξ , C ζ , C B , C ξ (2) and C ζ (2) , which can be solved for p ∆ , p ξ , p ζ , p B , p χ and f. Thus a complete set of equations of motion can be derived from the Hamilton equations for the remaining variables ∆, p f , χ, ξ, ζ and B, the first of which reduces to ∆ ′ = 0 (Eq. (5.48)), where the Lagrange multipliers are determined from (5.39) and (5.47). Since ∆ can be solved easily as ∆ = 0, i.e. f h = 1, we will solve the remaining 5 equations numerically. Note that we have assumed f ≠ 0 and det M ≠ 0 when solving the equations (5.36), (5.37) and (5.42). If either of these conditions is violated, the structure of the differential equations becomes singular in the sense that the number of independent initial conditions is changed. We can see this singularity as a divergence of the Lagrange multipliers λ I in the limits f → 0 or det M → 0.
Model 4
Now we are ready to study numerical solutions for given parameters and potentials. Let us consider the potentials (5.54). Since the derivatives of V i are finite for any x, solutions of the equations in this model have finite values of R, R̂ tt and C trtr . Since V i → 0 in the limit where χ, ξ and ζ go to infinity, solutions become non-singular Minkowski space-time in this limit. The numerical solution for the parameter choice GM = M −1 L (corresponding to r L = 0.95r g ) and for Schwarzschild boundary conditions with B 1 = 0 at r = 25r g is shown in Fig. 7. Even though all fields have finite values, a singularity appears at r ∼ 0.90r g . There the curvature scalars R, R̂ µν R̂ µν and C µνρσ C µνρσ are finite, as shown in Fig. 8. Then what is the origin of this singularity? The reason why we cannot extend our solution beyond r ∼ 0.90r g is the divergence of χ ′ , ξ ′ , ζ ′ and B ′ . Through the Hamilton equations, divergences of these quantities come from divergences of the Lagrange multipliers. As mentioned, the Lagrange multipliers can become infinite when f = 0 or det M = 0. Since f ≠ 0 at the singularity, we conclude that the singularity must be due to det M vanishing at r ∼ 0.90r g . This is confirmed by the numerical plot of det M given in Fig. 9. Thus in this case, even though we can bound all quadratic curvature scalars, a singularity still appears because of the singular structure of the differential equations in the limits f → 0 or det M → 0. Roughly speaking, χ, ξ and ζ represent curvature components through the equations (5.11) - (5.13). Thus the divergence of their derivatives corresponds to a divergence of the curvature (not of the curvature scalars themselves, but of a derivative thereof). Note that the asymptotic Schwarzschild solution (5.18) - (5.22) is not a stable asymptote of the modified equations of motion. We can see this from Fig. 10, which shows that if we integrate the equations in the outward direction (towards larger values of r), starting with Schwarzschild data at some finite r, the solution runs away from the Schwarzschild solution. Thus our numerical solutions are not realistic even if there is no singularity, since they do not asymptote at large values of r to Minkowski space-time. In the current study, we pass over this stability problem as well as the ghost problem and focus only on the singularity problem.
The problems which we have encountered in this model may not be general problems for this class of theories. Hence, it is useful to study another model, a model in which the source of the singularity in the previous model is cured.
Model 5
We will now numerically study solutions obtained for another potential. Since the singularity for Model 4 comes from a point in phase space where det M = 0, it could be removed by considering a potential which enforces det M ≠ 0.
Let us consider the potentials (5.56). Since V i ′ (x) = (1 + x 2 ) −1 is positive for any finite x, det M is also positive.
Here we make the parameter choice GM = 20M −1 L , which corresponds to r L = 0.13r g . Fig. 11 presents the numerical solution for Schwarzschild boundary conditions at r = 1.25r g . Now a singularity appears where f approaches zero. In the exact Schwarzschild space-time, f vanishes at the event horizon r = r g . Thus in order to avoid the appearance of f = 0, we need to have r g < r L as in Model 2 discussed in Section III B, though the required parameter choice is not natural for realistic situations.
Model 6

Let us investigate again a solution with the potential (5.56). This time, let us make the parameter choice GM = M L −1 , which corresponds to r L = 0.95r g . The solution now extends over the entire region outside r = 0.47r g , but there is a singularity at r = 0.47r g , where ξ, ζ and B diverge.
We used the potential (5.56) so that det M ≠ 0 for finite values of the arguments, but det M can vanish if the arguments (χ, ξ and ζ) diverge. Indeed, det M vanishes and λ diverges at the point r = 0.47r g (see Figs. 15 and 16).
One may think that the positivity of det M would be ensured if we use a potential with V ′ (x) > K for a positive constant K. However, such a potential cannot ensure an overall upper bound on the curvature invariants, because V (x) is then unbounded: V (x) > V (x 0 ) + K(x − x 0 ) → ∞ in the limit x → ∞. Thus, we see that limiting curvature invariants by our construction is not consistent with avoiding the singular structure of the differential equations.
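The inequality invoked here is just the mean value bound: a derivative bounded below by a positive constant forces the potential to grow without limit,

\[
V'(x)\ge K>0\;\Longrightarrow\;V(x)\ge V(x_0)+K\,(x-x_0)\;\xrightarrow{\;x\to\infty\;}\;\infty,
\]

so a potential whose slope never approaches zero cannot remain bounded and hence cannot implement the limiting curvature construction.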
Note that since the metric component f is well behaved at the singularity, the quadratic curvature scalars are finite. This is shown in Fig. 17.
VI. SUMMARY AND DISCUSSION
In this paper, we have discussed whether the Schwarzschild singularity can be resolved in a theory with limited curvature invariants, a theory in which cosmological singularities do not occur. In Section II, after reviewing the theory with bounded curvature scalars given by Eq. (2.2), the theory proposed in Refs. [39,40] which is able to produce non-singular cosmologies, we proposed a new theory in which all components of the curvature tensor are bounded by construction. The Lagrangian of this theory is given by Eq. (2.8). We also discussed the equivalence of these theories, (2.2) and (2.8), with higher curvature metric theories, (2.6) and (2.18) respectively. In Section III, we investigated static, spherically symmetric solutions of the new equations which reduce to Schwarzschild space-time at r → ∞. First, we considered Einstein gravity with bounded Gauss-Bonnet term given by (3.3), which is a ghost free subclass of limited curvature theories (2.2). We have given two numerical solutions (Models 1 and 2) for different parameter choices, and found that there still exist singularities, in fact singularities of two kinds. One is the thunderbolt singularity found in Model 1 where the event horizon of the original Schwarzschild space-time becomes a curvature singularity. Some quadratic curvature invariants such as R µν R µν diverge while the Gauss-Bonnet term is finite since it is explicitly constrained by the construction. This singularity comes from the breakdown of the relation f h = 1, which holds in Einstein gravity. The other singularity found in Model 2 is nothing but the original Schwarzschild singularity. The presence of the Schwarzschild singularity implies that limiting only the Gauss-Bonnet term is insufficient to remove the Schwarzschild singularity.
Next, we investigated a theory in which both the Ricci scalar and the Gauss-Bonnet term are bounded by construction, a theory given by (4.15) (Section IV). However, the numerical solution of the equations of motion discussed in Section IV B (Model 3) shows that even in this framework the Schwarzschild singularity cannot be removed.
Finally, we investigated a more general theory (2.8) in which all of the Riemann tensor elements are bounded explicitly (Section V). In Section V A, we found that the relation f h = 1 is automatically satisfied if we use the class of potentials given by Eq. (5.7). We derived the first order form of the equations of motion making use of the Hamiltonian formalism (Section V C) and found that the structure of the differential equations (e.g. the number of independent variables) is changed when either of the conditions f ≠ 0 or det M ≠ 0 is violated. We considered three types of specific models (Models 4, 5 and 6) (Section V D). All models yield some type of singularity. Model 4 leads to a singularity where det M vanishes. Though all quadratic curvature invariants are finite at this singularity, as expected from the construction, the additional degrees of freedom due to higher derivative interactions become strongly coupled at the singular point. In the case of Models 5 and 6, we used a potential where det M > 0 for finite field values. However, singularities remain in both models, again due to the singular structure of the differential equations. In Model 5, such a singularity appears when f = 0, and in Model 6 it arises because det M approaches 0 when the fields ξ, ζ and B diverge. Thus the singularity at finite r still remains even though the quadratic curvature invariants are finite at the singular point.
To summarize, we numerically studied the equations of motion for a spherically symmetric ansatz for the fields in various theories in which the curvature is bounded by construction. But in all cases, the solutions have singularities of various types. The results are summarized in Table I.
We would like to emphasize that our analysis in Section V gives a concrete counter-example to the strong form of the "limiting curvature hypothesis" according to which general singularities could be avoided by using a Lagrangian in which the curvature is explicitly bounded by construction. Thus, the limiting curvature hypothesis does not resolve general singularities, and another principle is required if we want to construct an effective theory of gravity in which no singularities arise. In the models in Section V, the origin of the singularity was the dynamics of additional degrees of freedom. Since limiting curvature theories are essentially higher derivative theories, as shown in Section II, it is difficult to constrain the dynamics of such additional degrees of freedom sufficiently well. One possible avenue would be to make use of the Palatini or metric affine formalism [55], where the connection which determines the curvature tensor is independent of the metric tensor. This is a promising avenue because higher curvature gravity in the Palatini formalism does not include additional ghost degrees of freedom [56].
It is fair to say that our models are toy models and there would be many problems even if the Schwarzschild singularity could have been removed. For example, we have not addressed the problem of ghosts, and the numerical stability of the equations (the question of whether asymptotically flat solutions are stable in the large r limit). The models in Sections IV and V in general have ghost degrees of freedom. One way to justify such a higher derivative gravity model would be to regard the theory as a low energy effective theory after some heavy fields have been integrated out. Naively speaking, since ghost modes appear because of higher derivative interactions which are suppressed by M L , the mass of the ghost modes should be of the order of M L . Then we need to regard our theory as an effective field theory valid at energies E ≪ M L . Since the curvature scale can be controlled by hand in our framework, a self-consistent procedure would be to bound the curvatures to values corresponding to an energy scale smaller than M L by choosing a suitable potential. In this way, the extra terms in our gravitational action would be within the energy range of the effective field theory, while the ghost degrees of freedom would not.
Though our analysis is not a "no-go" result for nonsingular black hole solutions in an approach in which the curvature is bounded by construction, we conclude that the singularities cannot be removed generally if only the curvature is limited to finite values. | 2018-01-15T23:50:17.000Z | 2018-01-15T00:00:00.000 | {
"year": 2018,
"sha1": "d93485074eb4ee763d4141ed28eded14007f1c61",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.05070",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d93485074eb4ee763d4141ed28eded14007f1c61",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15020268 | pes2o/s2orc | v3-fos-license | The consequences of selective inhibition of signal transducer and activator of transcription 3 (STAT3) tyrosine705 phosphorylation by phosphopeptide mimetic prodrugs targeting the Src homology 2 (SH2) domain
Herein we review our progress on the development of phosphopeptide-based prodrugs targeting the SH2 domain of STAT3 to prevent recruitment to cytokine and growth factor receptors, activation, nuclear translocation and transcription of genes involved in cancer. We developed high affinity phosphopeptides (KI = 46–200 nM). Corresponding prodrugs inhibited constitutive and IL-6 induced Tyr705 phosphorylation at 0.5–1 μM in a variety of human cancer cell lines. They were not cytotoxic at 5 μM in vitro but they inhibited tumor growth in a human xenograft breast cancer model in mice, accompanied by reduced VEGF expression and angiogenesis.
Signal Transducer and Activator of Transcription 3 (STAT3) is a Target for Anticancer Drug Design
Signal transducer and activator of transcription 3 (STAT3) is likely the most studied member of the STAT family of proteins. [1][2][3][4][5][6][7] STAT3 participates in the transcription of numerous proteins and is hypothesized to play indispensable roles in the development of a large number of human cancers, metastasis, angiogenesis and immune evasion. [8][9][10] Thus targeting STAT3 is regarded as a potential modality for the treatment of cancer. 4,5,[11][12][13][14] STAT3 signals by being recruited to phosphotyrosine residues on growth factor and cytokine receptors. On binding via its Src homology 2 (SH2) domain, Tyr 705 becomes phosphorylated by associated JAK kinases, Src or the phosphotransferase activity of the receptor. Phosphorylated STAT3 (pSTAT3) dimerizes by reciprocal pTyr-SH2 domain interactions, is translocated to the nucleus, then acts as a transcription factor participating in the expression of acute phase response genes, vascular endothelial growth factor (VEGF), matrix metalloproteinase 9, Bcl proteins and others. Recently, other functions of STAT3 have been discovered. Unphosphorylated STAT3 was found to act as a co-transcription factor in complex with NF-κB. 15 In non-transcriptional roles, STAT3, phosphorylated on Ser 727 , was found to be located in electron-transport complexes in the mitochondria 16 and in this state supported RAS transformation of cells. 17 Removing STAT3 from cells using siRNA, antisense or like techniques, or overexpressing STAT3 or dominant negative versions, likely will impact multiple STAT3 functions. Precise determination of the role of Tyr 705 phosphorylation can be accomplished by highly selective inhibitors targeted to the SH2 domain that block association with receptors and subsequent phosphorylation, dimerization and transcriptional activities. In this review we highlight our progress in the development of high affinity phosphopeptide ligands of the SH2 domain of STAT3, their conversion to cell-permeable, phosphatase-stable prodrugs and the evaluation of these in cellular models and in human cancer xenograft models. We found that although selective inhibition of Tyr 705 phosphorylation is not cytotoxic to cancer cells in vitro, in vivo tumor growth inhibition can be achieved which may be driven by reduced angiogenesis.
Targeting SH2 Domains
SH2 domains are 100 amino acid domains that recognize phosphotyrosine and two to four residues to its C-terminus. 18,19 These domains are involved in the recruitment of signal transduction proteins to activated receptors of growth factors and cytokines, and aberrant signaling by these pathways contributes to a variety of diseases such as cancer and asthma. The SH2 domains of Src, Lck, Grb2 and p85 were easily expressed, and structures of complexes with phosphopeptides obtained by X-ray crystallography or NMR guided industrial, government and academic laboratories that developed several high affinity, elegant peptidomimetics. [20][21][22] In spite of this great effort, there is a paucity of literature describing the biological activity of these materials and no SH2 domain-targeted phosphopeptide mimetics have advanced to clinical trials. Two major challenges that have impeded the development of phosphopeptide-based SH2 domain inhibitors are the negative charge of the phosphate group, which prevents passive diffusion across cell membranes, and the lability of the phosphate group to phosphatase activity, which renders phosphopeptides unrecognizable. To overcome phosphatase lability, researchers have replaced phosphate groups with carboxyl, phosphonate, malonate, phosphonomethyl, phosphonodifluoromethyl and heterocyclic groups that are negatively charged. 23,24 Bioreversible esters have been employed to block the negative charge of the phosphate or phosphonate oxygens in a variety of compounds including SH2 domain-targeted peptides and mimetics. 25 The Garbay group employed S-acylthioethyl groups on a series of phosphopeptides targeting the SH2 domain of Grb2. 26,27 Cytotoxicity to cancer cell lines was observed at 1 μM concentration. Gay et al. employed the phenyl phosphoramidite approach to deliver phosphopeptides targeting Grb2 to tumor cells 28,29 with concentrations ~25 μM required to inhibit the target. Stankovic reported pivaloyloxymethyl (POM) protection of phosphatase-stable phosphodifluoromethylphenylalanine in a Src SH2 domain inhibitor. 30 They were only able to append one POM group to the phosphonate. Although the mono-POM prodrug entered cells, no biological evaluation was reported. McKinney et al. reported a bis-POM protected phosphonodifluoromethyl analog of a phosphopeptide mimetic targeting STAT4 and STAT6 but no biological data were presented. 31
Development of High Affinity Phosphopeptide Mimics Targeting the SH2 Domain of STAT3
Several groups have engaged in the development of peptidomimetic inhibitors targeting the SH2 domain of STAT3. The team of James Turkson and Patrick Gunning has developed inhibitors derived from the Tyr 705 sequence 32,33 and from screening and medicinal chemistry approaches. [34][35][36][37][38][39] Others have used the Tyr 705 sequence 40 as well as our lead, Peptide 1.6. 41 To develop high affinity and selective phosphopeptides targeting the SH2 domain, our laboratory screened a set of candidates derived from putative STAT3 binding sites on receptors for IL-6, EGF, IL-10 and G-CSF and found that Ac-pTyr-Leu-Pro-Gln-Thr-Val-NH 2 (termed Peptide 1.6), from the sequence surrounding Tyr 904 of the IL-6 co-receptor gp130, was a high affinity ligand (Fig. 1). Peptide 1.6 inhibited STAT3-DNA complex formation with an IC 50 = 150 nM, as judged by electrophoretic mobility shift assays. 42 Wiederkehr-Adam et al. found similar peptides using a combinatorial phosphopeptide approach. 43 Peptide 1.6 possesses the pTyr-Xaa-Yaa-Gln motif reported to be the recognition determinant of STAT3. 44,45 Substitution of the glutamine of Peptide 1.6 with alanine, glutamic acid and asparagine reduced affinity, thereby supporting the requirement for glutamine at pY+3 for high affinity binding to the SH2 domain of STAT3. 42 To probe the molecular surface of the SH2 domain of STAT3 and to search for high-affinity modifications, we substituted natural and unnatural amino acids at each position of our lead peptide. We independently developed a fluorescence polarization assay to monitor the ability of phosphopeptides to compete with the N-terminally fluorescein-tagged version of Peptide 1.6 (FAM-Ala-pTyr-Leu-Pro-Gln-Thr-Val-NH 2 , FAM = 4-carboxyfluorescein) for binding to full-length STAT3. 46 Haan et al. expressed just the SH2 domain of STAT3 but it only bound a phosphopeptide at pH 5.5. 47 Due to potential conformational variation at the lower pH that might not exist at physiological pH, we expressed full-length protein for our assays. Peptide 1.6 exhibited an IC 50 of 290 nM. We utilized the truncated peptide Ac-pTyr-Leu-Pro-Gln-Thr-NH 2 (termed Peptide 3.1, IC 50 = 739 nM, Fig. 1) as the template for our studies. In spite of the reduced affinity of Peptide 3.1, its smaller size meant significantly fewer synthetic steps in the mostly manual syntheses that produced the phosphopeptides we assayed. Affinity was recaptured with the modifications we incorporated. We showed that hydrophobic groups could be appended to the N-terminal nitrogen of pTyr (position pY−1), suggesting a hydrophobic patch exists on the protein surface adjacent to the phosphotyrosine binding pocket. 46 Leucine at pY+1 could be substituted with a variety of hydrophobic residues. Aliphatic amino acids such as norleucine and cyclohexylalanine provided higher affinity than aromatic phenylalanine. 46 Methylation of the nitrogen of Leu at pY+1 abrogated binding, 46 which supports experimentally the hydrogen bond between the NH of Leu 706 and the C=O of Ser 636 observed in the crystal structure of the STAT3 dimer published by Becker et al. 48 Alanine scanning showed that proline at position pY+2 contributed significantly to binding 42 and throughout our studies 19 proline analogs were substituted at this position to probe this site. 46,[49][50][51] Of this group, cis-3,4-methanoproline (mPro) provided a two-fold increase in affinity 46 and this amino acid was utilized in later structure-affinity relationship studies.
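The IC 50 values quoted throughout come from fluorescence polarization competition experiments. As a generic illustration of how such a number is extracted from raw polarization readings, here is a minimal Python sketch fitting a four-parameter logistic to synthetic data; this is not the authors' analysis code, and the plate-reader values, noise level and variable names are all invented for the example.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    # four-parameter logistic: polarization falls as competitor displaces the probe
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# synthetic competition data (polarization, mP) with a true IC50 of 290 nM,
# the value quoted for Peptide 1.6 against full-length STAT3
conc = np.logspace(-9, -5, 12)  # competitor concentration, M
rng = np.random.default_rng(0)
mp = four_pl(conc, 250.0, 60.0, 290e-9, 1.0) + rng.normal(0.0, 3.0, conc.size)

popt, _ = curve_fit(four_pl, conc, mp, p0=[250.0, 60.0, 1e-7, 1.0],
                    bounds=(0.0, np.inf))
print(f"fitted IC50 = {popt[2] * 1e9:.0f} nM")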
Peptide bonds containing proline (Xaa-Pro) can exist either in the cis or trans conformation. Pseudoproline derivatives, 2,2-dimethyl-1,3-oxazolidine-4-carboxylate and 2,2-dimethyl-5-methyl-1,3-oxazolidine-4-carboxylate, result in predominantly cis-peptide bonds, as opposed to native proline, which is predominantly trans. 52 Incorporation of these pseudoproline residues resulted in 63-69% cis conformation (proline, 2%) and decreased affinity 3- to 5-fold, suggesting that, when bound to the SH2 domain of STAT3, the Leu-Pro bond is in the trans conformation (Fig. 1, peptides DRCIV-5C and DRCIV-7C). 49 Overall, we substituted 45 Gln surrogates at pY+3 to probe the binding site. 46,[53][54][55] Methyl substitutions on the side chain amide nitrogen were not tolerated, and the isosteric methionine sulfoxide resulted in a >10-fold loss in affinity. 46 These results indicated the importance of hydrogen bond donation by the side chain amide group of Gln. Various cyclic and aliphatic glutamine surrogates were tolerated with slight losses in affinity. 49,[53][54][55] Threonine at pY+4 was replaced with a variety of groups: organic, heterocyclic and peptidic. 46,55 The most effective substitution was a simple benzylamide. 46 Taking these lessons into account, we incorporated a hydrophobic N-terminus, mPro, and a C-terminal benzylamide to create SMI-48B2 (Fig. 1), which had an IC 50 of 125 nM, a five-fold increase in affinity over Peptide 3.1.
Conformationally Constrained Phosphopeptides
Properly constrained peptide inhibitors can lead to increased affinity by presenting the contact groups in the proper orientation for binding to the target protein. By constraining the molecule to the bioactive conformation, the system does not lose the entropy of rotation of all of the peptide's bonds on binding, leading to a favorable entropic term in the free energy equation. The dihedral angle C-Cα-Cβ-Cγ of the phosphotyrosine residue in the STAT3 crystal structure is 174 degrees. 48 The phosphotyrosine mimic, 4-phosphoryloxycinnamate (pCinn), constrains this angle to 180° and resulted in a 5-fold increase in affinity of Peptide 3.1 (PM-50D, IC 50 = 136 nM, Fig. 1). 50 Interestingly, pCinn resulted in an 11-fold loss in affinity for a phosphopeptide inhibitor of the Src SH2 domain. 56 Examination of the crystal structure of STAT3 bound to DNA 48 as well as models generated by us 57 led to the hypothesis that addition of a methyl group on the β-carbon of pTyr or pCinn would lead to greater hydrophobic interaction with the side chain methylene groups of Glu 638 , which lines the phosphotyrosine binding pocket. We developed synthetic methodology for β-methylcinnamate and found that this substitution increased affinity 1.5-3 fold in a series of peptides (e.g., PM-235E vs. SMI-247B2, Fig. 1). 53 To constrain the central dipeptide, Leu-Pro was substituted with a series of azabicyclo[4.3.0]-nonane-9-carboxylates (ABN), in which the side chain of leucine was incorporated in a 6-membered ring fused to the 5-membered ring of proline. 58 All stereoisomers of this bicyclic lactam reduced activity. 50 However, substitution with the tricyclic heterocycle, Haic, increased the affinity of our peptides >three-fold (DRCIV-35B, IC 50 = 231 nM, Fig. 1). 50 Chen et al. incorporated azabicyclo[6.3.0]undecane (ABU) and found that this substitution increased affinity 20-fold. 59 All of these dipeptide replacements constrain the ψ dihedral angle of the pY+1 residue. The size of the ring fused to the five-membered ring of proline is important. The eight-membered ring in ABU appears to allow the most optimal orientation of the Gln with respect to the phosphotyrosine, as compared with the seven-membered ring of Haic and the six-membered ring of ABN.
Structure of Phosphopeptides Bound to the SH2 Domain of STAT3
Structures of protein-ligand complexes are extremely useful in drug development programs. Unfortunately, STAT3 was difficult to crystallize and in the one structure we obtained, the electron density for the peptide (PM-50D) was too weak to determine its structure. 60 However, molecular modeling approaches provided some insights into phosphopeptide-SH2 domain interactions. In the first model we examined potential interactions between the phosphopeptide, Ac-pTyr-Leu-Pro-Gln-NHBn, and STAT3 using the structure of a phosphopeptide complexed with STAT1 61 as a template. 57,62 This model showed three hydrogen bonds between the Gln CONH 2 of the inhibitor and the protein, highlighting the importance of this residue for recognition and affinity (Fig. 1B). 46,54 In the second, docking and molecular dynamics simulations of the peptidomimetic inhibitor, pCinn-Haic-Gln-OH, showed that the glutamine binds in a slightly different pocket (Fig. 1C). A loop of STAT3 (residues 659-668) moves so that Met660 forms a hydrophobic interaction with the five- and six-membered rings of Haic. The main chain NH of Met660 hydrogen bonds with the OH of Tyr 657 , which in turn forms a hydrogen bond with the C=O of Haic. 50
Inhibition of STAT3 Phosphorylation in Intact Cells and Development of Phosphopeptide Mimic Prodrugs
To inhibit STAT3 phosphorylation in intact cells we employed a prodrug approach. 53,63 The phosphate was replaced with the phosphatase-stable phosphonodifluoromethyl group. 64 The negative charge of the phosphonate oxygens was blocked with the pivaloyloxymethyl group (POM), which is cleaved by carboxyl esterases (Fig. 2A). 65 Our first prodrug, BP-PM6 (Fig. 2A), inhibited constitutive phosphorylation of STAT3 in human MDA-MB-468 breast tumor cells at a concentration of 10 µM, supporting the hypothesis that the compound entered the cells, was stripped of its POM groups by esterases, and bound to the SH2 domain of STAT3, preventing receptor recruitment and phosphorylation of Tyr 705 . 63 The reduction of pSTAT3 also suggests that STAT3 is phosphorylated and dephosphorylated in a dynamic equilibrium, which we can perturb with phosphopeptide mimics. We converted several phosphopeptide mimetics into cell-permeable prodrugs (Fig. 2B). Although the range in affinity of the phosphate-bearing root structures was within a factor of 3, the structures of these prodrugs had a striking effect on potency of inhibition of constitutive pSTAT3 in intact MDA-MB-468 cells in culture. 51,53,63 Addition of a methyl group to the β-position of the cinnamoyl moiety produced a slight increase in potency (BP-PM6 vs. PM-70G and PM-299G vs. PM-73G, Fig. 2C), which reflected the increase in affinity of the corresponding phosphopeptides for STAT3. 53 Interestingly, the potency of prodrugs in which the C-terminal benzylamide group (CONHCH 2 C 6 H 5 ) was replaced by a simple methyl group was enhanced approximately 10-fold (BP-PM6 vs. PM-299G and PM-70G vs. PM-73G, Fig. 2C). 53 This was not reflective of the intrinsic affinity of the corresponding phosphopeptides, in which the benzylamide-containing peptides were 2-fold more avid than the corresponding methyl-substituted peptides. 53 Replacing the CONHBn with an isosteric ether, CH(CH 3 )OBn, resulted in highly potent inhibition of constitutive pSTAT3 (PM-72G-1). 55 Replacement with a methyl group retained the potency in cells (Fig. 2C, PM-72G1 vs. PM-274G-1), whereas in the corresponding phosphopeptides the CH(CH 3 )OBn resulted in slightly more affinity for isolated STAT3 (2.5-fold) than the methyl group. 55 Prodrugs containing mPro were very highly potent inhibitors of STAT3 phosphorylation (PM-72G-1 and PM-274G-1, Fig. 2C). 53 As mPro is no longer commercially available and its synthesis by any of the reported methods is expensive and low-yielding, 66,67 we sought less expensive proline derivatives. 51 In a prodrug containing native proline (PM-296G, Fig. 3A), complete inhibition occurred at 10 µM. However, prodrugs containing the substituted prolines mPro, 4,4-dimethylproline, and 4,4-difluoroproline were all significantly more potent: complete inhibition occurred at 500 nM. 51 It is unclear at this time why this difference occurs. However, one could speculate that there may be proteolysis of the proline peptide and that the substituted prolines may not fit in the active site of the putative protease.
Three prodrugs, PM-73G, PM-274G and PM-72G, were studied in detail (Fig. 2B). These compounds had β-methyl cinnamate and were varied at both the central dipeptide and the glutamine surrogate. After a two-hour exposure of MDA-MB-468 cells, significant inhibition of constitutive phosphorylation of STAT3 was observed at 100 nM and complete inhibition occurred at 500 nM (Fig. 2C). Commercially available mPro is a mixture of "L" and "D" enantiomers, and in the cases of PM-72G and PM-274G, two prodrugs were isolated in the synthesis, designated with either a -1 or -2 to reflect the order of elution from the preparative HPLC. The second stereoisomers were much less potent, requiring 25 µM for complete inhibition, which reflects the relative affinities of the phosphopeptides. 46 Time course experiments showed significant inhibition at 30 min with recovery of pSTAT3 at about 8 h. 53 These materials inhibited Tyr 705 phosphorylation in a variety of human cancer cell lines including U266 (multiple myeloma), MDA-MB-231, SUM190, SUM149 (breast), HCC-827 (lung) and SKOV3-ip (ovarian). They also inhibited IL-6-stimulated phosphorylation in MeWo and A375 (melanoma) and HeyA8 (ovarian) cells. 53 Based on similarity of binding free energies of phosphopeptides to a set of SH2 domains, Ladbury and colleagues postulated that selective inhibition of SH2 domains within cells is unlikely. 68,69 To test this hypothesis, we assayed for the effect of our STAT3 inhibitors on the activities of SH2 domain-driven pathways. Administration of epidermal growth factor (EGF) to MDA-MB-468 cells resulted in phosphorylation of STAT5. Our prodrugs did not inhibit that process, suggesting that they do not bind appreciably to STAT5 (Fig. 3B). Phosphatidylinositol-3-kinase is recruited to receptors via the SH2 domains of the p85 regulatory domain, which activates the kinase domain leading to the phosphorylation of Akt. Our prodrugs did not inhibit constitutive Ser 473 phosphorylation, suggesting that they did not bind to p85 (Fig. 3B). Via its SH2 domain, Src kinase binds to the focal adhesion kinase and selectively phosphorylates Tyr 861 . 70 Our prodrugs did not impact this process, indicating selectivity for STAT3 over Src (Fig. 3C). Significant inhibition of interferon-γ-stimulated phosphorylation of STAT1 was observed at 1 µM, but 5 µM was required for complete inhibition (Fig. 3D). These are 10-fold higher concentrations than observed for the inhibition of pSTAT3. The amino acid sequences and the three-dimensional structures of the phosphopeptide binding regions of STAT1 and STAT3 are nearly identical, 57 so cross-reactivity is not surprising. However, at high concentration (25 µM), selectivity for STAT3 over these processes was abolished.
These results suggest that it is indeed possible to dial in specificity for specific SH2 domains in intact cells, but concentrations must be carefully regulated. Two features of our peptides contribute to the selectivity for the SH2 domain of STAT3: (1) the cinnamic acid-derived pTyr mimic, which reduced affinity for the Src SH2 domain 56 and (2) the glutamine surrogate. Most SH2 domains, e.g., Src, p85 22 and STAT5, 71 recognize hydrophobic residues at pY+3 and the hydrophilic side chain amide of the Gln mimic would not be accommodated in these binding pockets. The Ladbury analysis focused on the SH2 domain of Src.
Effect of STAT3 Inhibition on the Growth and Survival of Cancer Cells In Vitro
The possible linkage between inhibition of STAT3 phosphorylation and cell survival has been a point of marked controversy based on many studies. Our efforts have for the first time provided effective tools for dissecting these responses and reveal them as distinct. We examined the effect of prodrugs on proliferation of MDA-MB-468 breast cancer cells using 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium (MTT) or sulforhodamine B (SRB) assays. PM-73G, PM-274G-1 and PM-274G-2 showed little or no ability to inhibit growth up to 50 µM (Fig. 4A), which is a concentration
100 times that at which constitutive pSTAT3 was completely inhibited. However, both stereoisomers of PM-72G inhibited growth with an IC 50 of 10-15 µM. Since PM-72G-2, which contains the "D" stereoisomer of mPro and required 25 µM to inhibit pSTAT3, also inhibits cell growth, it is likely that inhibition by both of the PM-72G versions is due to off-target effects of the C-terminal Gln-benzyl ether group, which is missing in the structural analog PM-274G and in PM-73G. Realizing that levels of pSTAT3 recover after about eight hours following exposure to prodrug, assays were repeated with daily administration of PM-73G and PM-72G-1 (Fig. 4B). The IC 50 for PM-73G was between 25 µM and 50 µM, which is 50 times higher than the 0.5 µM required for complete inhibition of pSTAT3 (Fig. 4D). Thus the cinnamoyl-Haic-Apa sub-structure is not cytotoxic on its own and the phosphonate contributes to the observed cell death. At the relatively high concentration of 25 µM in MDA-MB-468 cells, PM-73G inhibited phosphorylation of FAK pTyr861 and Akt Ser473 after a two-hour treatment, as well as EGF-stimulated STAT5 phosphorylation (Fig. 4E). Thus, at high concentrations, selectivity for individual SH2 domains is compromised and cytotoxicity correlates with off-target effects.
Collectively, these results challenge the hypothesis that Tyr 705 phosphorylation of STAT3 is required for cell growth and proliferation in vitro. Further challenges come from inhibition of pSTAT3 by JAK kinase inhibitors. The first example came from Kreis et al., who noted that treatment of several melanoma lines with Pyridone 6 completely inhibited STAT3 phosphorylation but had no impact on cell growth. 72 Hedvat et al. reported that the JAK2 inhibitor, AZD1480, at concentrations that completely inhibited pSTAT3, had no effect on the proliferation of MDA-MB-468 (breast), DU145 (prostate) and MDAH2774 (ovarian) cancer cells in vitro. 73 This was reported in subsequent publications from this group 74,75 and others. 76 Looyenga et al. found that the JAK1/2 inhibitor ruxolitinib did not affect growth of lung cancer cell lines in vitro. 77 Treatment of ovarian cancer cells with the anti-IL-6 monoclonal antibody siltuximab inhibited STAT3 phosphorylation but did not affect proliferation of cells grown on plastic as adherent cultures. 78 Thus it would appear that selective inhibition of Tyr 705 phosphorylation is not cytotoxic to cancer cells of epithelial origin in vitro. Furthermore, if an agent or compound is killing these cells, it is acting by off-target effects. Controversy still exists as Zhang et al. recently reported that an apparently selective small molecule (not a phosphopeptide mimic) targeting the SH2 domain of STAT3 displays cytotoxicity in vitro. 39
Effect of Selective STAT3 Inhibition In Vivo
In spite of the lack of cytotoxicity, we evaluated the ability of PM-73G to inhibit tumor growth in vivo using the MDA-MB-468 breast tumor model. 79 In an initial intratumoral (IT) administration trial, we found that tumor growth was inhibited, accompanied by a reduction in tumor microvessel density and VEGF protein. IT administration of concentrations as low as 8 µM resulted in inhibition of pSTAT3 in tumor sections, as determined by immunohistochemical staining using anti-pSTAT3 antibodies.
To determine the utility of systemic administration, mice bearing MDA-MB-468 tumors were treated with 170 mg/kg of PM-73G administered intraperitoneally (i.p.) daily for 5 d followed by two days of rest, over four weeks. 79 (Rodents have circulating carboxyesterases which prematurely deprotect the POM group, thus necessitating such a high dose.) Tumors from the treated animals grew at much lower rates than those given the vehicle (20% Trappsol/PBS) (Fig. 4F). Immunohistochemical analysis revealed nearly complete inhibition of vascularization. Two hours after administration of PM-73G, pSTAT3 levels were significantly reduced compared with vehicle (Fig. 4F). Thus, selectively inhibiting STAT3 phosphorylation impedes communication with the microenvironment, i.e., VEGF signaling and angiogenesis. Necropsy examination revealed no organ toxicity and no changes in complete blood counts (CBCs). To the best of our knowledge, this is the first example of a phosphopeptide-based prodrug targeting an SH2 domain showing the ability to inhibit its target by systemic administration. This contrasts with the compounds of Zhang et al., which utilize carboxyphenyl groups to target the phosphotyrosine binding pocket. 38,39 Of the four cell lines examined in Figure 4C, MDA-MB-468 was the most sensitive to growth inhibition in vitro. The others were not evaluated in vivo, so it is unclear if the observed reduction in tumor growth and microvessel density is cell line-dependent. However, our results are similar to those recently reported for the JAK2 kinase inhibitors AZD1480 75 and ruxolitinib 77 that employed other cell lines. Whereas inhibition of STAT3 phosphorylation is not intrinsically cytotoxic, anti-tumor activity is the result of impaired communication between tumor cells and the microenvironment, e.g., VEGF production and activity. AZD1480 impacts immune cell recruitment to the tumor, supporting the proposed role(s) of STAT3 in immune surveillance and tumor immunity. 75
Synthesis of Amino Acid Surrogates and Peptidomimetics
The structure-affinity and structure-activity studies described in this communication utilized phosphotyrosine, Leu-Pro dipeptide, and glutamine surrogates that were not available commercially. Synthetic strategies had to be developed for these materials, which was a major part of the program. Readers are referred to references 46, 54 and 55 for the synthesis of glutamine mimics; 49, 50, 51 and 58 for the synthesis of pseudoproline peptides, Leu-Pro mimics and proline analogs; 50, 53 and 63 for constrained tyrosine mimics; and 53 and 63 for the synthesis of the prodrugs.
Summary
At the outset of the program our laboratory had two goals: (1) to develop cell-permeable phosphopeptide mimics targeting an SH2 domain and (2) to use this technology to inhibit an important cancer target, STAT3. Our chemistry effort developed very high-affinity phosphopeptides, and we were able to convert these into prodrugs which hit their target in vivo with systemic (i.p.) administration. Our data and that of Zhang et al. 38,39 suggest that the SH2 domain is indeed druggable, despite the failed attempts of the industrial, government and academic labs mentioned above. As mentioned, the bis-POM prodrug strategy is useful for proof-of-principle studies, but it suffers from premature loss of one of the POM groups due to both esterase activity and chemical hydrolysis. Improvements to the bio-reversible ester strategy are ongoing. Although we do not have a clinical candidate as of yet, our selective phosphopeptide mimics have been useful tools for the study of Tyr 705 phosphorylation. From our work and the studies of the JAK inhibitors, the dogma of STAT3 signaling is shifting. For epithelial tumors, it appears that phosphorylated STAT3 is not necessary for tumor cell survival. However, the original studies on the effects of STAT3 on VEGF expression and signaling 80,81 have been borne out. Inhibition of STAT3 Tyr705 phosphorylation appears to be a new antiangiogenesis strategy. | 2016-05-09T00:30:21.415Z | 2012-10-01T00:00:00.000 | {
"year": 2012,
"sha1": "9965e490bf86ec7c88b30ed398ae47edc7eceaf1",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/jkst.22682?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "9965e490bf86ec7c88b30ed398ae47edc7eceaf1",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268139108 | pes2o/s2orc | v3-fos-license | Visualizing and Evaluating Microbubbles in Multiphase Flow Applications
Accurate visualization of bubbles in multiphase flow is a crucial aspect of modeling heat transfer, mixing, and turbulence processes. It has many applications, including chemical processes, wastewater treatment, and aquaculture. A new software package, Flow_Vis, based on experimental data visualization, has been developed to visualize the movement and size distribution of bubbles within multiphase flow. Images and videos recorded from an experimental rig designed to generate microbubbles were analyzed using the new software. The bubbles in the fluid were examined and found to move with different velocities due to their varying sizes. The software was used to measure bubble size distributions, and the obtained results were compared with experimental measurements, showing reasonable accuracy. The velocity measurements were also compared with literature values and found to be equally accurate.
Introduction
Fluid flow is a central subject in engineering, encompassing all traditional engineering disciplines. It holds significant importance across a wide array of contexts [1,2]. Research on gas flows contributes to the advancement of various technologies, including the design of machinery such as turbines and combustion engines, as well as the production of automobiles, aircraft, and spacecraft. Moreover, it plays a crucial role in civil engineering projects such as harbor design, modeling tidal and river flow patterns, and coastal area protection. In chemical engineering, understanding flow behavior in process equipment like reactors [3] and pipe networks is essential. Additionally, it finds application in medicine for modeling blood flow through arteries and veins.
While the flow of single-phase fluids is well understood, the dynamics of multiphase flow remain less so and are currently a focal point of research. Mass transfer processes involved in separators and reactors heavily rely on detailed knowledge of multiphase fluids, particularly the motion of droplets/bubbles in the continuous phase. New methods for determining droplet distribution and motion are scarce, and visualization-based techniques offer a promising direction.
A primary requirement for an accurate multiphase flow measurement technique is a high degree of temporal and spatial resolution, as the flow significantly varies over time and space. In comparison to probe techniques such as hot-film and optical probes employed in multiphase flow, visualization techniques offer several advantages, including minimal flow disturbance, quick response time, high visual resolution, and the ability to identify individual bubbles and droplets.
Fluid Flow Visualization
Flow visualization has existed alongside fluid flow studies for as long as fluid flow research has been conducted. Experimental flow visualization has been the primary method used in the study of fluid flow until quite recently [4]. The following are examples of reasons why experimental flow visualization techniques are utilized:
1. To gain insight into fluid movement around a scale model of a real object without the need for extensive calculations.
2. To inspire the development of new and improved theories of fluid flow.
3. To verify a new theory and test prototypes for new products.
Computer-generated visualization is a more recent innovation. It is used to represent complex data streams produced from mathematical models and simulations of flow systems. Often, the data produced are too complex and extensive to fully analyze as strings of numbers. It is widely accepted that the benefits of the growth in computing power will be greatly enhanced if the computer is not only used to calculate numerical data but also to visualize these data in an understandable way [5,6]. Thus, information can be better comprehended when presented visually through pictures, graphs, and locus plots rather than numerically.
The data visualization acquired through modelling fluid flow can serve various purposes, depending on its context of use. The process of verifying and analyzing theoretical models is an essential component of fundamental research. Comparing the flow model being used to the "real" fluid flow is necessary whenever a flow phenomenon is represented by a model [7].
Calculating and visualizing flow using a model, as well as comparing the results with experimental data, are two approaches that can validate the correctness of the model. If numerical results and experimental flow are displayed similarly, qualitative verification through visual inspection can be highly effective.
Research into numerical methods for solving flow equations may be aided by visually representing the solutions found, as well as visually representing the intermediate study results obtained throughout the iterative solution process. This can be done both before and after finding the solution [8,9]. Visualizing fluid flow phenomena can be useful for design, optimization, and evaluation. Additionally, it can assist in designing any object functionally related to fluid flow.
Communication of flow analysis results to others is important, particularly to those who are not professionals in the subject or in a specific industry [10].
Flow Visualization Procedures
In most cases, the visualization process consists of the following four stages: data importing, data filtering and enrichment, data mapping, and data rendering, as depicted diagrammatically in Figure 1.
Step 1 involves locating a representation of the primary information to be investigated in the form of a data set, which can be either continuous or discrete in nature [10,11]. In practical terms, importing data entails selecting a specific implementation of a dataset and then converting the initial information to the representation implied by the selected dataset. This process should involve a one-to-one mapping or data copying. The second step of data visualization is termed data filtering and enrichment. It involves identifying the features or aspects of the data that require focus. In most cases, the imported data do not correspond one-to-one with the relevant aspects. The data are filtered to extract pertinent information and then enriched with higher-level data to aid in the specific task. This process generates an enriched dataset that directly represents the features of interest for the task at hand.
The third step entails mapping the dataset to the visual domain. This involves associating aspects of the visual domain with the data aspects included in the enriched dataset.
The rendering operation marks the conclusion of the visualization process. Rendering transforms the scene created by the mapping operation into two or three dimensions, adjusting various user-specified viewing parameters such as viewpoint and lighting to produce desired images. In typical visualization applications, viewing parameters are considered part of the rendering operation.
A functional description of this process can deepen understanding of the steps comprising the visualization process [12-15]. The visualization process can be expressed, as in Equation (1), as a function Vis that maps the set of all possible types of raw input data, D_I, to the set of created images, V [16]:

Vis : D_I → V. (1)
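To make the four stages concrete, the sketch below composes them into a single Vis function. It is a minimal illustration only: the choice of Python, the grayscale import, the threshold filter, and the colour mapping are all assumptions made for demonstration, not features of the Flow_Vis implementation described later.

```python
# A minimal sketch of the import -> filter -> map -> render pipeline.
# All concrete choices (grayscale import, threshold filter, colour map)
# are illustrative assumptions, not the original software's design.
import numpy as np
import matplotlib.pyplot as plt

def import_data(path):
    """Stage 1: read raw data into a dataset (here, an image array)."""
    return plt.imread(path)

def filter_enrich(data, threshold=0.5):
    """Stage 2: keep only the features of interest (bright, bubble-like
    pixels); assumes intensities scaled to [0, 1]."""
    gray = data.mean(axis=2) if data.ndim == 3 else data
    return np.where(gray > threshold, gray, 0.0)

def map_to_visual(enriched):
    """Stage 3: associate data values with visual attributes (a colour map)."""
    return plt.cm.viridis(enriched / (enriched.max() + 1e-9))

def render(scene, out_path="vis.png"):
    """Stage 4: produce the final image for the chosen viewing parameters."""
    plt.imsave(out_path, scene)

def vis(path):
    """Vis : D_I -> V, composed from the four stages above."""
    render(map_to_visual(filter_enrich(import_data(path))))
```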
Flow Field Topology
Critical point theory serves as the foundation for flow topology analysis. This theory has been applied in various settings to examine the solution trajectories of ordinary differential equations. The topology of a vector field consists of critical points, where the velocity vector equals zero, as well as integral curves and surfaces connecting these critical points [17]. Visualizing the topology of a vector field can convey its topological properties without overwhelming the viewer with excessive information that is already known. To investigate and present vector field topologies, the following steps must be taken:
1. Identify the most important locations.
2. Classify the significant components of the situation.
3. Compute integral curves and surfaces.
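As a deliberately simplified illustration of steps 1 and 2, the sketch below classifies a critical point of a two-dimensional vector field from the eigenvalues of its Jacobian. The sample field, the finite-difference step, and the coarse source/sink/saddle labels are assumptions made for demonstration only.

```python
import numpy as np

def jacobian(u, x, y, h=1e-5):
    """Central-difference Jacobian of a 2-D velocity field u(x, y) -> (ux, uy)."""
    j = np.zeros((2, 2))
    for col, (dx, dy) in enumerate([(h, 0.0), (0.0, h)]):
        f_plus = np.array(u(x + dx, y + dy))
        f_minus = np.array(u(x - dx, y - dy))
        j[:, col] = (f_plus - f_minus) / (2.0 * h)
    return j

def classify_critical_point(u, x, y):
    """Label a point where u is (near) zero by the Jacobian eigenvalues."""
    eig = np.linalg.eigvals(jacobian(u, x, y)).real
    if np.all(eig > 0):
        return "source (flow directed away from the point)"
    if np.all(eig < 0):
        return "sink (flow directed toward the point)"
    return "saddle (mixed attracting and repelling directions)"

# Example: the field u = (x, -y) has a saddle point at the origin.
print(classify_critical_point(lambda x, y: (x, -y), 0.0, 0.0))
```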
Objectives of This Work
Currently, Teesside University is conducting experimental measurements for small droplets in three-phase separators on oil-water mixtures and for microbubbles in an air-water Venturi-type microbubble generator. Both of these applications require a methodology to measure the droplet/bubble size distribution and determine the velocities of individual droplets/bubbles. The results for these experiments are obtained as both still photographs and video footage. Consequently, the data files produced are large, particularly in the case of video footage containing moving images.
Computerized flow visualization is seen as a method of analyzing these images to convert pictures into numerical data files. From this, the velocity, size distribution, and number of droplets/bubbles in the continuous phase can be determined before transforming the data into graphs and locus plots, allowing for more quantitative interpretation than is possible from video and still imagery alone. This technique makes information on the entire flow field readily comprehensible, offering numerous advantages, especially for moving images or large data files [18-21]. Notably, it eliminates the need for manual data processing, which is a significant benefit [10]. One outcome of this research is the development of software capable of managing moving data; the bubbles investigated represent these data. Working with a video provides potential improvements in accuracy and the ability to measure velocity compared to working with discrete still imagery.
Applying computerized flow visualization to the systems measured by Teesside University poses challenges. Both water-air and water-oil-air are transparent fluid mixtures, making it difficult to visualize the flow field and detect bubbles/droplets traveling within it using visualization software.
The following section of this paper discusses a novel approach to the analysis of moving data using data visualization. The newly developed software (Flow_Vis, https://sites.google.com/view/flowvis/home) was applied to analyze video-recorded experiments. During the experiment, the software successfully analyzed the bubbles present in the moving liquid in terms of size, number, and velocity.
Generating the Bubbles
Bubbles produced by a bubble generator can be used to improve the perception of the flow pattern. Bubbles injected into the flow produce clearer streamlines than smoke or dispersed solid particles because the bubbles are easier to differentiate from one another. Because of their extremely low inertia, bubbles accurately convey the characteristics of the flow. Following individual bubbles is also one way to locate streamlines, where they are present.
Air bubbles, because of their low density and inert nature, can be formed and will float through the liquid. Such bubbles may be generated by any microbubble generator device, such as a Venturi. Each bubble encloses a specific volume of air within a layer of liquid. The bubbles are formed at the mixing nozzle, where the air flow traverses the confined space of the throat. The air is sucked into the throat by the vacuum generated by the high velocity of the liquid, which fills the space around the tube with air at a previously specified flow rate. This is accomplished by passing the bubble film solution through the air contained in the central tube. When the bubbles are ready to be transported, low-pressure, dry compressed air is forced into the outer tube, causing the bubbles to move forward.
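The suction described above follows from Bernoulli's equation applied along the liquid stream. The relation below is an idealization, assuming incompressible, frictionless flow at constant elevation between the Venturi inlet (station 1) and throat (station 2):

```latex
% Bernoulli between inlet (1) and throat (2), idealized:
p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2
\quad\Rightarrow\quad
p_2 = p_1 - \tfrac{1}{2}\rho\left(v_2^2 - v_1^2\right)
```

Since continuity (A_1 v_1 = A_2 v_2) forces v_2 > v_1 in the narrow throat, p_2 falls below atmospheric pressure and air is drawn in through the feed pipe; this is the vacuum referred to above.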
Figure 2 shows the schematic diagram of the experimental apparatus used in this study to generate the microbubble field. The apparatus consists of a water tank (380 × 280 × 740 mm) connected to a circulation water pump via a water flow meter, a flow control brass ball valve, and the Venturi microbubble generator. The maximum flow rate of the water pump is 13.3 L/min at a pressure of 3.2 bar at the inlet of the Venturi microbubble generator. The Venturi is connected to one of the tank walls, where atmospheric air is sucked into its throat through an air flow meter followed by the air feed pipe. The Venturi is where the air is mixed with the main water flow.
Visualizing the Bubble Flow
The visualization procedure can be considered as a series of stages, each modelled by a different data transformation operation. The incoming data pass through this process, being transformed in a variety of ways, until the output visuals are produced. In general, the bubble flow visualization process can be viewed as a process with four stages: importing data sets, filtering, mapping, and rendering, as shown in Figure 1.
For the purpose of locating the bubbles in order to map them in visual space, a method known as the Hough circle transform is applied. This method begins with the presumption that the bubbles in the image can be described as follows: each edge point (x_i, y_i) votes for points in the (a, b, r) parameter space, where (a, b) is the coordinate of the bubble center and r is the radius of the bubble. When all of the image points are located on the same bubble, their votes intersect at a single point in the (a, b, r) coordinates that corresponds to the parameters of that bubble.
If the radii of the bubbles in a space are known, then the search can be simplified down to using only two dimensions. The (a, b) coordinates of the centers are the essential information needed:

x = a + r·cos(θ), (2)
y = b + r·sin(θ). (3)

The points in the parameter space denoted by the coordinates (a, b) lie on a bubble with a radius of r and its center at (x, y). The real center point will be the same for all of the parameter bubbles, and it is possible to locate it, as in Figure 3. In an ideal situation, the center of the bubble would be situated on a line that is perpendicular to the path of the moving bubble. Moving along the normal of each edge point is consequently all that is required to determine the various locations of the centers. The distance that separates each edge point from the anticipated center of that bubble is one measurement that might be used to approximate the size of the associated bubble's radius. Because the center of a bubble has to lie on the normal of each point on the bubble, the actual center of the bubble is the point at which all of these normals cross. The source code for this software can be obtained from the website in reference [22].
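As an illustration of this circle search, the sketch below applies OpenCV's Hough circle transform to a single grayscale frame. All parameter values (the blur kernel, accumulator resolution, and the radius range in pixels) are assumptions that would need tuning to the actual imagery; they are not the settings used in Flow_Vis.

```python
import cv2
import numpy as np

def detect_bubbles(gray_frame):
    """Detect circular bubbles in one grayscale frame via the Hough transform.

    Returns an (N, 3) array of (a, b, r): centre coordinates and radius in
    pixels. All parameter values below are illustrative assumptions.
    """
    blurred = cv2.medianBlur(gray_frame, 5)   # suppress pixel noise
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1.2,        # accumulator resolution relative to the image
        minDist=8,     # minimum spacing between detected centres
        param1=100,    # upper Canny edge threshold
        param2=18,     # accumulator threshold; lower values find more circles
        minRadius=2,
        maxRadius=30,
    )
    if circles is None:
        return np.empty((0, 3))
    return circles[0]  # columns are a, b, r
```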
Bubble Velocity
After the bubbles have been recognized, the next step is to identify their locations. To achieve this, an approximation of the velocity field in the region surrounding the bubble X_p is obtained by applying a first-order Taylor expansion. As a consequence, the velocity u can be determined by applying the formula presented below:

u(X) ≈ u(X_p) + ∇u·(X − X_p). (4)

The partial derivatives ∇u = [∂u_i/∂x_j] completely specify the velocity field in the area surrounding the critical point. A classification method for bubble locations can be determined by using the eigenvalues and eigenvectors of ∇u. A vector field can take on a variety of different configurations, including the following: positive eigenvalues are representative of velocities that are moving away from the critical point, whereas negative eigenvalues are representative of velocities that are moving in the direction of the bubble.
When the eigenvalues at a bubble location have both negative and positive real parts, such a bubble can be moved without incurring either compression or expansion.
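In practice, the local velocity can be approximated from bubble centres detected in consecutive frames. The sketch below is a simplified stand-in for the Taylor-expansion estimate described above: it matches each centre to its nearest neighbour in the next frame and converts the displacement to a velocity. The frame rate, the pixel calibration, and the maximum plausible jump are assumed values.

```python
import numpy as np

def bubble_velocities(centres_t0, centres_t1, fps, metres_per_pixel,
                      max_jump_px=15.0):
    """Per-bubble velocity from nearest-neighbour matching of centres
    detected in two consecutive frames (a simplified scheme)."""
    velocities = []
    for c0 in centres_t0:
        d = np.linalg.norm(centres_t1 - c0, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_jump_px:                  # reject implausible matches
            disp = (centres_t1[j] - c0) * metres_per_pixel
            velocities.append(disp * fps)        # metres per second
    return np.array(velocities)
```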
Dimension Reduction Visualization
Sometimes it is necessary to reduce a video of an experiment to a single image in order to watch the movement of the bubbles throughout the experiment. Thus, the video may be reduced from D dimensions down to one dimension (where D >> 1), giving a single visualization that achieves this target. To carry out this procedure, it is necessary to use Equation (5), which performs the reduction in dimensions:

y_j = y_j + η (r_ij − d_ij) (y_j − y_i) / (d_ij + ε), (5)
where y_i is a point in the projection space, y_j is a point that needs to be updated (and is a neighboring point to y_i), r_ij is the weight (distance) in the high dimension, d_ij is the weight (distance) in the projection space, η is the learning rate, and ε is a small value used to prevent division by zero, whose value varies depending on the data. The parameter d_c is a hypersphere's radius limiting which neighbors take part in the update. The original distance r_ij in the video is compared with the projected distance d_ij in the new location, and the update is performed based on the value of the learning rate.
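A sketch of one pass of this update is given below. It implements the force-style rule of Equation (5); since that equation is reconstructed from the definitions above, the sign convention, the learning-rate value, and the default neighbourhood radius d_c are assumptions rather than the published scheme.

```python
import numpy as np

def project_step(y, r, eta=0.05, eps=1e-9, d_c=np.inf):
    """One pass of an Equation (5)-style update.

    y   : (N,) current 1-D projection of the N frames
    r   : (N, N) pairwise distances between frames in the original space
    eta, d_c : assumed learning rate and neighbourhood radius
    """
    n = len(y)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d_ij = abs(y[j] - y[i]) + eps        # distance in projection space
            if d_ij > d_c:
                continue
            # move y_j so that d_ij approaches the original distance r_ij
            y[j] += eta * (r[i, j] - d_ij) * (y[j] - y[i]) / d_ij
    return y
```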
Experimental Results
The new software (Flow_Vis) was developed in MATLAB following the methodology outlined in Section 2. The processing was performed on a computer running the Windows 11 operating system and equipped with an Intel Core i5 processor.
In the context of the present paper, accuracy, robustness, computational complexity, and storage were the key aspects to be evaluated. The accuracy of bubbles in synthetic images was assessed by comparing the absolute errors of the estimated diameter and center coordinates with the actual values of the bubble sizes.
The analysis of the results is divided into four stages:
1. The first stage explains how the video of the entire fluid movement is analyzed using Flow_Vis.
2. In the second stage, each frame (or individual photo) is processed separately.
3. The third stage involves computing the velocity of the bubbles and visually illustrating it as droplet loci.
4. Finally, a dimension reduction technique is applied to condense the video into a single visualization of bubble loci.
Video Analyzing
To conduct a comprehensive analysis of the experimental data, a video recording was made while the microbubble generator was operating. The recorded video was subsequently analyzed using the Flow_Vis software, and conclusions were drawn regarding the experiment's reliability. One notable feature that distinguishes this tool from others is its ability to handle lengthy video files. When recording the experiment, the video serves as input to the software, which then processes it to generate the most accurate visualization of the bubbles. Figure 4 illustrates that the video capturing the flow of bubbles through the liquid consists of a total of 239 frames.
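A per-frame analysis of such a recording can be sketched as follows, reusing the detect_bubbles helper from the earlier Hough-transform sketch. The video path, the OpenCV frame decoding, and the pixel-to-micrometre scale are assumptions; the real Flow_Vis implementation is in MATLAB.

```python
import cv2
import numpy as np

def analyse_video(path, microns_per_pixel=1.0):
    """Count bubbles and record the mean diameter for every frame."""
    cap = cv2.VideoCapture(path)
    counts, mean_diameters = [], []
    while True:
        ok, frame = cap.read()
        if not ok:                               # end of the clip
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        bubbles = detect_bubbles(gray)           # (N, 3) array of a, b, r
        counts.append(len(bubbles))
        mean_d = 2.0 * bubbles[:, 2].mean() * microns_per_pixel if len(bubbles) else 0.0
        mean_diameters.append(mean_d)
    cap.release()
    return np.array(counts), np.array(mean_diameters)

# Frames of interest, mirroring the analysis in the text:
# counts, diam = analyse_video("experiment.mp4")
# print("largest mean diameter at frame", diam.argmax() + 1)
# print("smallest mean diameter at frame", diam.argmin() + 1)
```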
In Table 1, the Flow_Vis software analyzed the entire video, calculating the average diameter of the bubbles across all frames, which is 35 micrometers.The software will focus its attention on frame number 165, which exhibits the largest bubble diameter (39 micrometers), while frame number 33 has the smallest diameter for a bubble size (30 micrometers).Figure 5 presents the results of visualizing the video footage.In Figure 5a-c, the orange data represent the curve fit of the experimental data obtained from image processing software.Throughout the experiment, there was a noticeable change in both the total number of bubbles and their individual diameters, as depicted in Figure 5. Based on the information presented in Figure 5a, a significant number of bubbles were initially observed, which gradually decreased over time.Simultaneously, it was noted that the bubble diameters were very small at the beginning of the experiment but increased throughout the frame sequence, as shown in Figure 5b.This indicates that under these conditions, the bubbles are coalescing.Additionally, Figure 5c illustrates that the bubble diameter decreases as the bubble number increases, which is consistent with droplet coalescence.
Figure 5d displays the bubble size distribution, revealing that the majority of the bubbles fall within a diameter range of 20-60 µm, with a peak number of bubbles at 35 µm.
An issue that may arise is how to determine the locations of certain bubbles. In this method, the relationships among three bubbles are projected in order to accurately arrange their positions, as illustrated in Figure 3.
As evident from Figure 6a,b, frame 165 exhibits the largest bubble diameter (39 micrometers), while frame 33 displays the smallest (30 micrometers). This information, provided by Flow_Vis, is crucial for supporting scientific research, as it aids specialized researchers in their identification efforts. Figure 6c,d indicate that frames 1 and 2 have the highest and lowest number of bubbles, respectively. These identified frames are significant as they are also utilized in Figure 5. Table 2 offers further insights and details on this aspect.
The key findings from Figures 5 and 6 are summarized in Tables 1 and 2. It is evident from these figures that frame 45 contains the highest total number of bubbles, while frame 33 has the smallest bubble diameter. Hence, at the outset of the run the bubbles are numerous but small; however, towards the end of the run, as depicted in frames 165 and 193, this situation is reversed, with the number of bubbles decreasing while their diameter increases. Figure 5c illustrates the relationship between bubble diameter and number, indicating a decrease in the number of bubbles as their size increases, and vice versa.
Data Analysis
After analyzing the video and locating the frames that are likely to play a significant part in the analysis, the Flow_Vis software gives the user the option to investigate any individual frame. For instance, frame number 165 was selected for further interrogation, because it has an adequate number of bubbles with the largest diameter. Figure 7 depicts the selected frame alongside the recognized bubbles, totaling 1206, as in Tables 1 and 2. In addition to demonstrating an average bubble diameter of 39 micrometers, Figure 7 presents the size distribution of all bubbles in this frame. The orange line represents the data from this work, plotted against the results from standard image processing software in blue. Once again, a strong correlation is observed between both datasets.
The proposed software can handle samples by identifying a specific region containing the bubbles required for the study. Figure 8 illustrates the enlargement of the selected region, enabling a more focused examination of the bubbles to obtain more accurate observations and conclusions.
Bubble Velocity
It is essential to understand the velocity of bubbles in a fluid and the direction in which they move. The Flow_Vis software was capable of deriving these measurements from the recorded video, which is not achievable with standard image processing of still photography. Specifically, it provided a visualization illustrating the relationships among bubble velocity, diameter, and number, demonstrating the movement of bubbles in relation to speed, as depicted in Figure 9. Upon examining Figure 9A, an inverse relationship between bubble velocity and the number of bubbles is evident: the velocity of the bubbles is observed to decrease as the number of bubbles increases. This behavior can be attributed to changes in drag force and virtual-mass forces with higher numbers of bubbles. Figure 9B indicates a decrease in bubble velocity with increasing bubble diameter, contrary to the physics of the flow. This discrepancy may be due to the short duration of data recording or an unknown error. Nonetheless, the velocity values are within expected ranges, and the Flow_Vis software successfully identified this relationship.
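For reference, the expected physical trend can be read from the terminal rise velocity of a small bubble under Stokes drag. Treating the bubble as a rigid sphere (an idealization; a fully mobile bubble interface changes the numerical constant):

```latex
% Terminal rise velocity of a small spherical bubble, rigid-sphere Stokes drag:
u_t = \frac{g\, d^2 (\rho_l - \rho_g)}{18\, \mu_l}
```

so velocity should grow with the square of the diameter d. The opposite trend in Figure 9B therefore points to a measurement artefact, consistent with the remark above that it is contrary to the physics of the flow.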
Dimension Reduction Visualization
When the theory of dimension reduction is applied to a moving fluid that contains bubbles, a great deal of information may be derived about the system. The reduced image highlights where bubbles are present, as is noticeable in Figure 10 when Equation (5) is applied to reduce the dimensions from 239 to 1.
The presence of white locations in Figure 10 demonstrates that bubbles are being recorded at those locations, and an increase in the intensity of white indicates that bubbles are continuously present in those areas. Inversely, the darkness indicates that the bubbles are crossing in a short amount of time, and the length of time that they are present decreases as the darkness increases.
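One simple way to realize such a reduction, sketched below, is a temporal intensity projection over all frames. Plain averaging is an assumed simplification (it stands in for the neighbour-based update of Equation (5)) that reproduces the behaviour described: pixels where bubbles persist come out bright, while pixels they cross only briefly come out dark.

```python
import cv2
import numpy as np

def temporal_projection(path):
    """Collapse a whole video into one image by averaging frame intensities."""
    cap = cv2.VideoCapture(path)
    accumulator, n_frames = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        accumulator = gray if accumulator is None else accumulator + gray
        n_frames += 1
    cap.release()
    if accumulator is None:
        return None                              # no frames could be decoded
    mean_image = accumulator / n_frames
    # rescale to 8-bit so persistent-bubble regions appear white
    return cv2.normalize(mean_image, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
```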
Limitation
One limitation of the method is the inability to analyze changes in bubbles while they are not in motion. In such cases, the system can only study a single image, making it difficult to observe any changes occurring in the bubbles. Consequently, the size of the bubble remains constant. However, the likelihood of this is low, as the bubble is filled with air, which facilitates its movement within the surrounding liquid.
Conclusions
The Flow_Vis software, with the ability to analyze both video and still images to extract data, has been developed. The software has been tested against real experimental data generated from injection of air microbubbles into a tank of water. The Flow_Vis software results were compared with data produced from experimental measurements made from still photography processed using conventional image processing techniques.
This comparison demonstrated that the software can accurately calculate the distribution of bubble sizes and the average diameter of bubbles. Further analysis was performed to compare bubble velocities generated from CFD models of the experiments with the Flow_Vis-generated velocities. Again, the results were found to agree well.
Experimental studies have demonstrated that this software possesses the ability to recognize bubbles even under circumstances that make identification difficult. Furthermore, it has the capability to analyze the bubbles from various perspectives, ascertaining their dimensions, velocity, and flow direction.
Figure 3. Every point in geometric space (left) produces a bubble in parameter space (right). The intersection of the bubbles in parameter space with the center of geometric space is (a, b).
Figure 4. The video of an experiment has 239 frames.
Figure 5. Analyzing the video to study the bubbles' behavior. In (a-c), the orange data represent the curve fit of the experimental data from image processing software.
Figure 6. Frames which have the maximum and minimum bubble diameter and the largest and smallest number of bubbles.
Figure 7. The results of analysis of frame 165.
Figure 8. A sample of the visualized bubbles is taken from (b), and then they are enlarged and analyzed in great detail in (c).
Figure 9. Relation between bubble velocity and their number and diameter.
Figure 10. A reduced visualization of the 239 frames to one, which shows the bubble flow over the whole processing.
Table 1. Results of analysis of bubbles in fluid.
Table 2. The relation between the bubble size and their number for the selected frames. | 2024-03-03T19:01:10.905Z | 2024-02-27T00:00:00.000 | {
"year": 2024,
"sha1": "5c58c4cf2cf02a35c99415199316c543306be0ef",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2311-5521/9/3/58/pdf?version=1709012530",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b70c86383596a0ad22ddf85aa956c434eb34e8db",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
4527755 | pes2o/s2orc | v3-fos-license | The incidence of psychoses in diverse settings, INTREPID (2): a feasibility study in India, Nigeria, and Trinidad
Background There are striking global inequities in our knowledge of the incidence, aetiology, and outcome of psychotic disorders. For example, only around 10% of research on incidence of psychotic disorders originates in low- and middle-income countries. We established INTREPID I to develop, implement, and evaluate, in sites in India (Chengalpet), Nigeria (Ibadan), and Trinidad (Tunapuna-Piarco), methods for identifying and recruiting untreated cases of psychosis, as a basis for investigating incidence and, subsequently, risk factors, phenomenology, and outcome. In this paper, we compare case characteristics and incidence rates across the sites. Method In each site, to identify untreated cases of psychoses in defined catchment areas, we established case detection systems comprising mental health services, traditional and spiritual healers, and key informants. Results Rates of all untreated psychoses were 45.9 (per 100 000 person-years) in Chengalpet, 31.2 in Ibadan, and 36.9 in Tunapuna-Piarco. Duration of psychosis prior to detection was substantially longer in Chengalpet (median 232 weeks) than in Ibadan (median 13 weeks) and Tunapuna-Piarco (median 38 weeks). When analyses were restricted to cases with a short duration (i.e. onset within preceding 2 years) only, rates were 15.5 in Chengalpet, 29.1 in Ibadan, and 26.5 in Tunapuna-Piarco. Further, there was evidence of age and sex differences across sites, with an older average age of onset in Chengalpet and higher rates among women in Ibadan. Conclusion Our findings suggest there may be differences in rates of psychoses and in the clinical and demographic profiles of cases across economically and socially distinct settings.
Introduction
Our understanding of the rate at which psychotic disorders occur in populations has changed in the past 10 years. In contrast to what was previously assumed, there is now strong evidence that the incidence of schizophrenia and other psychoses varies across geographical areas and social groups (McGrath et al. 2004; Kirkbride et al. 2006). This is important because understanding the origins of these variations may provide clues to aetiology, in particular to the kinds of environments that increase or decrease risk for disorder. Largely as a consequence of this, there has been an upsurge of research on (socio-)environmental factors (e.g. Morgan et al. 2008; Van Os et al. 2010), much of it suggesting that fragmented neighbourhoods (Allardyce et al. 2005; Zammit et al. 2010), disadvantaged social statuses (Bourque et al. 2011; Morgan et al. 2014a, b), and negative interpersonal experiences (Varese et al. 2012; Morgan et al. 2014c) are associated with risk of psychotic disorders - findings which mirror those from much earlier work (e.g. Faris & Dunham, 1939; Hollingshead & Redlich, 1958). What is remarkable about this research is that it has been conducted in a relatively narrow range of settings, i.e. select centres in high-income countries. It is only a slight exaggeration to say that we know nothing about rates of psychotic disorders in the rest of the world. For example, even in the WHO Ten Country study (Jablensky et al. 1992) the data were sufficient to allow estimates of incidence in only one low- and middle-income country (India). While this remains the case, our understanding of how environments and experiences over the life-course impact on risk and shape the varied manifestations and outcomes of psychotic disorders will be partial.
Incidence studies in other places
To clarify the extent and nature of existing findings from incidence studies of psychotic disorder outside, geographically, of North America, Europe, and Australasia, we conducted a review of the relevant literature, first drawing from the review by McGrath et al. (2004) and then repeating this to cover the period since (i.e. 1 January 2002-31 December 2014).
We identified 14 studies (eight from McGrath et al. 2004, six from our search) spread across a wide range of geographical settings (Table 1). In only one country (Surinam) was there more than one study and a majority (7/12 that provided information) included fewer than 100 cases. Reported incidence rates varied. However, there were no discernible patterns by geographical area and there was considerable heterogeneity in the methods and quality of the studies included, making direct comparisons difficult and limiting what can be inferred about the reasons for variation. For example, there was little consistency in methods for case detection, in age groups covered, and in symptomatic and diagnostic inclusion criteria and assessment. All these aspects of study design influence the number of cases counted and thereby estimated incidence rates. Most importantly, in settings where mental health services are relatively underdeveloped or where access is limited, studies that rely solely on such services inevitably produce biased estimates of incidence. Only six of the 14 studies included in our review attempted to extend case-finding beyond mental health services; in none is any information given about how many cases, if any, were detected through non-mental health service sources.
Aims
INTREPID I is a feasibility study. Our aim was to develop, implement, and evaluate, in three sites in India, Nigeria, and Trinidad, methods and strategies for identifying and recruiting untreated cases of psychosis (i.e. untreated at start of periods of case detection), as a basis for investigating incidence and, subsequently, risk factors (within a case-control design), phenomenology, and course and outcome (Morgan et al. 2015). Here we report on the implementation of and results from case-finding, in particular focusing on: (a) the sources through which cases were identified, (b) clinical and demographic profiles of included cases, and (c) incidence rates of untreated psychoses of both long (i.e. >2 years) and short (i.e. <2 years) duration.
Method
INTREPID I is a population-based programme designed to implement and evaluate methods for identifying, assessing, and following untreated cases of psychosis and controls in three sites: Chengalpet (near Chennai), India; Ibadan South East and Ona Ara, Nigeria; and Tunapuna-Piarco, Trinidad (Morgan et al. 2015). This programme is, in part, designed to investigate psychoses in the regions and countries from which the majority of migrants to the UK originate, and this primarily determined the choice of countries. The specific sites were chosen to ensure they contained a mix of urban and rural areas.
Sites and populations at risk
The sites included in INTREPID I are economically, socially, and culturally diverse (Morgan et al. 2015;Supplementary Table S1). All are in countries that have experienced recent periods of rapid economic growth and urbanization. However, rates of poverty, literacy, and life expectancy differ. In Nigeria, for example, an estimated 68% live in poverty (i.e. on < $1.25 per day), compared with around 32% in India and 4% in Trinidad. Life expectancy is also much lower in Nigeria (52 years) than in India (66 years) and Trinidad (70 years). This noted, data on these developmental markers are not available at site level and, given inevitable within-country variations, this means caution is needed in generalizing findings from our study to the country level. In each site, geographically demarcated catchment areas were defined (Supplementary Table S2). In Chengalpet and Tunapuna-Piarco, case-finding was conducted over a 7-month period and in Ibadan over a 6-month period. A 6-month period of case detection was set at the outset, on the basis that this would enable us to identify a sufficient number of cases to evaluate feasibility. Resources allowed us to extend case-finding for 1 month in Chengalpet and Tunapuna-Piarco.
Detection systems
We established extensive case detection systems in each site, tailored to the local healthcare systems, that incorporated all mental health providers (public and private, including psychiatrists and mental health nurses), the major spiritual and traditional healers, and a network of key informants (including primary-care doctors and nurses) (Morgan et al. 2015). We further provided all staff, healers, and informants with training and information materials on psychoses (developed from qualitative studies in each site; Cohen et al. unpublished data) to ensure a shared understanding of the problems and behaviours of interest. In Chengalpet, this system was augmented by locating five research workers within the local communities to periodically approach local residents at communal meeting points (where groups of residents would often congregate) to enquire about possible cases.
Inclusion and exclusion criteria
Our inclusion and exclusion criteria were identical to those used in the UK AESOP study, which in turn were based on those used in the WHO Ten Country study (Jablensky et al. 1992), i.e. inclusion criteria: age 18-64 years; resident in catchment area at time of case detection; evidence of psychotic symptoms or experiences in past 12 months; not treated with antipsychotics for 3 continuous months prior to the start of recruitment. Exclusion criteria: evidence of psychotic symptoms precipitated by an organic cause; central nervous system disease; transient psychotic symptoms resulting from acute intoxication. These criteria were purposefully broad (i.e. all psychoses without substantive treatment with antipsychotic medication prior to the start of the study), and, following most previous studies, did not specify a limit on the length of psychosis prior to detection. However, for analyses, we did distinguish cases with long (i.e. >2 years) and short (i.e. <2 years) duration of psychosis. In studies in the UK (e.g. AESOP), over 80% of cases included in incidence samples have a duration of untreated psychosis of <2 years. Our short duration group, then, was defined to be as comparable as possible to previous studies in settings with more developed and accessible mental health services.
Case ascertainment and data collection
In the study periods in each site, we identified all those aged 18-64 years who presented to mental health services or spiritual and traditional healers or who were known to key informants (including primary-care doctors and nurses) and who met our inclusion criteria. (Notes to Table 1: in the published report, the incidence rate is 35/100 000; however, this is calculated based on the total population in the study catchment area and not the population at risk, so the incidence rate reported in the table is the one we calculated using the population at risk. i: standardized rate.) Using the terminology from Table 1, this case-finding strategy is first-contact +, i.e. screening of all mental health services for first-contact cases, plus efforts to go beyond this to include healers and informants. A team of researchers in each site was responsible for checking, on at least a weekly basis, with all providers, healers, and informants. All potential cases were screened for inclusion using the Screening Schedule for Psychosis (Jablensky et al. 1992). Information on sociodemographics, past and current symptoms, health service contacts, and treatment for all those who passed the screen was collated from potential cases, informants, and clinical records (where available) using translated versions of the MRC Sociodemographic Schedule (Mallett, 1997), the WHO Personal and Psychiatric History Schedule (PPHS; WHO, 1993), and the Schedules for Clinical Assessment in Neuropsychiatry (SCAN; WHO, 1992). This information was used to determine inclusion and diagnosis. All researchers in each site underwent extensive training throughout the project in the inclusion and exclusion criteria and in each assessment; training comprised face-to-face training, online training materials (http://www.intrepidresearch.org/training), and ongoing supervision from site PIs (R.T., O.G., G.H.). For all relevant instruments we conducted inter-rater reliability exercises. Researchers in each site independently rated videos of assessments (http://www.intrepidresearch.org/training) and ratings by each pair of researchers were compared by calculating kappa statistics. These indicated moderate (range 0.41-0.60) to good (range 0.61-0.80) agreement among raters across all sites. Diagnoses were determined by consensus. In each site, researchers presented information from clinical interviews and, where available, clinical notes to the research team and PIs. On the basis of this, and ensuing discussions, consensus diagnoses were agreed.
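The kappa statistics reported here can be computed as Cohen's kappa. Below is a minimal Python sketch; the two rating vectors are invented for illustration:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two raters over the same items:
    # (observed agreement - chance agreement) / (1 - chance agreement).
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Example: two researchers rating the same 10 screened videos (1 = case).
print(cohens_kappa([1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
                   [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]))  # ~0.52 (moderate)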
Leakage
In each site, at the end of case recruitment we conducted leakage studies to identify any possible cases meeting our inclusion criteria who may have been missed. This involved researchers systematically checking, where possible, new admissions ledgers and registers for in-patient and out-patient services and, as appropriate, completing final checks with healers and informants.
Analysis
Incidence rates are expressed per 100 000 person-years at risk. Direct standardization was used to estimate sex- and age-standardized rates across sites using the World (WHO 2000-2015) Standard Population (http://seer.cancer.gov/stdpopulations/world.who.html). To examine variations in incidence, incidence rate ratios were modelled using Poisson regression to adjust for sex and age.
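For illustration, direct standardization and a crude incidence rate ratio with a Wald-type 95% confidence interval can be computed as in the following Python sketch; the stratum weights, case counts, and person-years are invented, not study data:

import numpy as np

def directly_standardized_rate(cases, person_years, std_weights):
    # Directly standardized rate per 100 000 person-years.
    # cases, person_years: per-stratum (e.g., age-sex group) arrays;
    # std_weights: standard-population weights for the same strata (sum to 1).
    stratum_rates = np.asarray(cases) / np.asarray(person_years)
    return 1e5 * np.sum(np.asarray(std_weights) * stratum_rates)

def incidence_rate_ratio(cases_1, py_1, cases_0, py_0):
    # Crude IRR with a 95% Wald CI computed on the log scale.
    irr = (cases_1 / py_1) / (cases_0 / py_0)
    se_log = np.sqrt(1 / cases_1 + 1 / cases_0)
    lo, hi = np.exp(np.log(irr) + np.array([-1.96, 1.96]) * se_log)
    return irr, (lo, hi)

print(directly_standardized_rate([5, 12, 8], [20_000, 30_000, 25_000],
                                 [0.40, 0.35, 0.25]))      # 32.0 per 100 000
print(incidence_rate_ratio(33, 75_000, 15, 68_000))  # IRR ~1.99 (1.08, 3.67)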
Ethical standards
The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.
Results
In each catchment area we screened large numbers of potential cases during the periods of case identification (Chengalpet 480, Ibadan 559, Tunapuna-Piarco 117). The majority were not eligible, primarily because of prior treatment or residence outside of the catchment areas, and we had insufficient information to determine eligibility on a small number (Chengalpet 18, Ibadan 0, Tunapuna-Piarco 9). From those screened, we identified 147 cases who met our inclusion criteria (Chengalpet 64, Ibadan 48, Tunapuna-Piarco 35) (see flow charts in the supplementary material). There were marked differences between sites in the sources through which cases were identified, reflecting differences in the structure of local healthcare systems (Morgan et al. 2015). In Tunapuna-Piarco, over 90% were identified through mental health services; in Ibadan around half were identified through services and around half through spiritual and traditional healers; and in Chengalpet, around 90% were identified through non-providers (i.e. key informants and local residents) (Supplementary Table S2).
Clinical and demographic characteristics
In Chengalpet and Tunapuna-Piarco, roughly half the cases received a diagnosis of schizophrenia (ICD-10, F20.X) and half of another psychotic disorder (e.g. delusional disorder, bipolar disorder with psychotic symptoms, etc.). By contrast, almost 90% of cases in Ibadan received a diagnosis of schizophrenia. In Chengalpet, duration of psychosis prior to detection (median 232 weeks) was longer than in both Ibadan (median 13 weeks) and Tunapuna-Piarco (median 38 weeks). Following onset, almost half the cases in Tunapuna-Piarco first sought or received help from a mental health professional and, by the point of inclusion in the study, over 90% had had a contact with a mental health professional. This contrasts with Chengalpet and Ibadan, where a majority first sought or received help from a spiritual or traditional healer and, at point of inclusion, around a third in each site had had no contact with a mental health professional (Supplementary Table S3).
The cohort in Chengalpet was older (mean age 39.8 years) at detection than in both Ibadan (mean age 31.0 years) and Tunapuna-Piarco (mean age 33.4 years). This, however, was mainly a reflection of differences in duration of psychosis prior to detection; when age of onset was considered, the evidence for differences in age of onset was weaker (mean age of onset: Chengalpet 33.8, Ibadan 30.3, Tunapuna-Piarco 31.6; p = 0.150). The gender distributions also differed. There were more women in Ibadan (68.8%) and in Chengalpet (56.2%) and fewer women in Tunapuna-Piarco (42.9%). In Chengalpet (but not Ibadan or Tunapuna-Piarco), these proportions were reversed for cases with schizophrenia (i.e. fewer women, 43.2%) (Supplementary Table S3).
Sex-and age-specific incidence rates
Given that the number of cases in each site was relatively small, it is difficult to make confident inferences regarding patterns of incidence by sex or age. Still, for completeness, analyses by sex and age are provided in Supplementary Tables S4-S6 and Supplementary Figs S1 and S2. Tentatively, in Chengalpet and Tunapuna-Piarco there was no evidence of sex differences in rates of all psychoses [women v. men: Chengalpet (IRR 1.33, 95% CI 0.81-2.17); Tunapuna-Piarco (IRR 0.75, 95% CI 0.39-1.47)]. However, when stratified by diagnosis, there was some evidence that the rate of other psychoses was higher in women in Chengalpet (IRR 2.94, 95% CI 1.24-6.95). In Ibadan there was some evidence that overall rates were higher in women (IRR 1.97, 95% CI 1.07-3.63); this broadly held for schizophrenia (IRR 1.67). When analyses were restricted to those with a short duration of psychosis, these patterns were broadly the same (Supplementary Table S3). Finally, and more tentatively still, our data suggest age-specific rates in Ibadan and Tunapuna-Piarco broadly follow what has been previously reported, i.e. a peak during the 20s with a decline thereafter, but not in Chengalpet, where rates were more consistent through to the mid-40s (Supplementary Tables S4 and S5; Supplementary Figs S1 and S2).
Leakage studies
From the leakage studies we identified a further 21 possible cases (Chengalpet 9, Ibadan 4, Tunapuna-Piarco 8), suggesting that between 8% and 18% of cases may have been missed in each site (these percentages are checked below). Due to the limited information available on these individuals, we did not include them in the analyses above. However, they do provide a basis for estimating the upper range for incidence rates in each site (Supplementary Table S7).
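These percentages can be reproduced from the counts above; in the following Python sketch the denominator (detected plus leakage cases) is our assumption, and the resulting values land close to the reported 8-18% range given rounding:

detected = {"Chengalpet": 64, "Ibadan": 48, "Tunapuna-Piarco": 35}
leakage = {"Chengalpet": 9, "Ibadan": 4, "Tunapuna-Piarco": 8}

for site, n_detected in detected.items():
    n_missed = leakage[site]
    share = n_missed / (n_detected + n_missed)  # fraction possibly missed
    print(f"{site}: up to {share:.1%} of cases possibly missed")
# Chengalpet 12.3%, Ibadan 7.7%, Tunapuna-Piarco 18.6%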
Discussion
We identified more cases in each catchment area than anticipated and our estimated incidence rates are at the upper end of those reported in previous studies (McGrath et al. 2004). This allowed us to examine in more detail than anticipated variations within and among sites in the clinical and demographic characteristics of cases and rates of disorder. We found that: (a) in Chengalpet and Ibadan, most cases were identified outside of mental health services and, at point of ascertainment, around a third in each of these sites had not had any contact with a health professional; (b) age of onset was older and duration of psychosis was longer in Chengalpet; and (c) the incidence of short duration psychosis was higher in Ibadan and Tunapuna-Piarco than Chengalpet, with this being most marked for schizophrenia in Ibadan.
Methodological issues
Before considering these findings further, a number of methodological issues need to be addressed. First, while the number of cases included in each site is similar to, and in many instances higher than, the numbers that have formed the basis for other reports, they are nonetheless relatively small. The study was primarily designed to evaluate feasibility and was not powered to test hypotheses concerning, for example, differences in rates of disorder among sites. Consequently, our data on the clinical and demographic characteristics of cases and our estimates of incidence rates and rate ratios are imprecise and necessarily tentative.
Second, we cannot exclude the possibility that, despite our efforts to extend case-finding beyond professional mental health services, we still missed cases. Our case detection systems in each site were not complete. For example, in Chengalpet we could only cover the major healing sites known to provide care for those with a mental disorder and, even then, consistently engaging with them was difficult, in part due to lack of trust (Morgan et al. 2015). To address this in Chengalpet, we incorporated an additional strategy of approaching local residents at communal meeting points to supplement case-finding. This identified a large number of cases with a long untreated disorder [i.e. 15 (50%) of cases identified via this strategy had a duration of >2 years] and it may be that such cases were missed in other sites, a possibility hinted at by differences in duration of disorder among sites (especially the short duration in Ibadan). Consequently, some of the differences observed between Chengalpet and the other sites may be a function of this methodological difference. (Because of this, we now plan to incorporate a similar strategy into case-finding in the other sites in the next stage of our programme.) Related to this, the relatively low proportion of cases in Ibadan with a non-schizophrenia diagnosis may be due to our having missed cases with, for example, affective psychoses. Further, our leakage studies found additional possible cases in each site. The proportion of possible leakage cases identified, however, was in line with the proportions in other studies (e.g. AESOP, 13%; Kirkbride et al. 2006). Still, that cases were very likely missed underscores the importance of conducting longer-term programmes in which case detection systems develop and consolidate as trust and familiarity increase.
Finally, our inclusion criteria were purposefully broad and in particular did not limit inclusion on the basis of duration of psychosis. This is in line with most previous studies, including AESOP and the WHO Ten-Country study (Jablensky et al. 1992), and this approach has the advantage of, ultimately, enabling samples to be subdivided by duration to examine whether and how this influences findings, which is indeed what we did. However, it is still important to keep this issue at the forefront when comparing findings across samples and studies. For example, in this study Chengalpet had the highest rates when all cases were included and the lowest rates when only those with a short duration were included. Indeed, this issue raises fundamental questions about how to estimate incidence rates in different settings. In the UK, for example, where mental health services are well developed and widely available and where over 80% of those included in incidence studies have a duration of psychosis of <2 years, first contact may provide a robust proxy for incidence. However, in Chengalpet it is less clear that such an approach is valid, as a relatively large proportion of included cases have a long-standing disorder.
To deal with this, conducting studies over a longer period of time and basing estimates of incidence on short duration cases only may provide the basis for ensuring greater comparability of findings across diverse healthcare systems.
Psychoses elsewhere
The above methodological issues notwithstanding, the sheer paucity of research on untreated (incident) psychoses in settings other than North America, Europe, and Australasia means that our data have value both in adding to the available literature and in demonstrating the necessity and feasibility of conducting robust and comparable epidemiological studies of psychoses in diverse settings.
In all sites, our estimated incidence rates for all psychoses were high (31-46/100 000), with the rate in Chengalpet being very similar to that reported for rural Chandigarh in the WHO Ten Country Study. They are at the upper end of rates reported in previous studies in low- and middle-income settings (see Table 1), including those that have used the most comparable methods. For example, studies in Brazil (Menezes et al. 2007), Trinidad (Bhugra et al. 1996), and Barbados (Mahy et al. 1999) that applied the same inclusion criteria reported rates between 15 and 35/100 000 per year. It is notable that the highest reported rate is from the study in Barbados, which, as far as we could ascertain, was the only one that sought to extend case-finding to religious institutions. The authors, however, do not report on how many cases were found via the institutions they covered. Further, our rates are high compared with studies in high-income countries. In AESOP, for example, the rate of all psychotic disorders across all sites was 32.1/100 000; this, however, was inflated by the inclusion of a site in London (49.4). In Bristol and Nottingham, rates were 20.4 and 23.9/100 000. When only short duration cases were considered, rates were inevitably lower, further underscoring the importance of clarity and consistency on duration of disorder in comparing rates across studies. This noted, it does seem that case-finding methods that extend beyond mental health services in relatively low-resource settings do detect more cases and provide more valid estimates of incidence. Furthermore, it is important to bear in mind that incidence rates are unlikely to remain static over time. That our rates are relatively high may reflect increases under pressures of economic and population growth. This is, of course, speculation. It is, though, an intriguing hypothesis that can only be tested by constituting long-term programmes of research that can provide robust estimates in diverse settings over time.
While there was no strong evidence that rates for all cases varied across sites, there were intriguing differences in rates of short duration psychoses within and among sites. These have to be considered cautiously, as the above caveats about methodological limitations mean it is possible these differences are due to chance or bias. This notwithstanding, it is notable that, when considering short duration cases, rates differed across sites and the highest was in Ibadan, which contained the most densely populated area of any of the sites (population densities: Ibadan South East 15 674/km², Ona Ara 916/km², Chengalpet 1648/km², Tunapuna-Piarco 476/km²). It is unclear to what extent the apparent association between the incidence of psychoses and population density extends beyond the northern European cities in which it has been reported. Menezes et al. (2007), for example, found a low rate of around 15/100 000 in a catchment area in Sao Paulo, one of the most densely populated cities in the world. The association, then, between psychoses and population density may be more complex than research to date suggests; investigating this further in a wider range of cities, across different continents, may provide novel insights into the kinds of environments that foster the development of psychoses. One factor that may complicate the picture is infant mortality and life expectancy. If those most at risk of schizophrenia (e.g. due to prenatal and/or perinatal complications) are more likely to die in infancy or during childhood and adolescence, then rates of disorder may be low, or at least lower than they would otherwise be, in countries with higher infant mortality rates and lower life expectancy. Furthermore, the low rates in Chengalpet and relatively high rates in Ibadan and Tunapuna-Piarco mirror differences observed in Europe in migrants from India, West Africa, and the Caribbean, a tantalizing observation that further hints at the potential for studies in such settings to cast light on population differences observed in high-income countries.
There were also noteworthy differences among sites in the age and sex distributions of the samples. Even after accounting for duration of illness, the average age of onset was older in Chengalpet than in Ibadan and Tunapuna-Piarco; and in both Ibadan and Tunapuna-Piarco, while the age distribution of risk broadly followed the pattern reported elsewhere (i.e. peak during 20s and dropping off thereafter), the average age of onset was still at the high end of what has been reported in other studies, both in high-income countries (McGrath et al. 2004) and in previous studies in low-and middle-income countries (e.g. around 50% of those included in the India and Nigeria sites of the WHO Ten-Country study were aged 15-24 years). It is of note, in relation to this, that studies in high-income countries have tended to find an older average age of onset the broader the case-finding net has been cast (e.g. AESOP; Kirkbride et al. 2006). While our sample sizes are such that we must be cautious in overinterpreting the findings on age, they do nonetheless at least raise the possibility that age of onset may differ across settings, which may (in turn) reflect differences in the distribution of and exposure to environmental risk factors over the life-course. A similar observation can be made regarding sex. That is, the higher incidence in women in Ibadan may be due to chance or an artefact of the study; alternatively, it may reflect a real difference in the sex distribution (and in risk factors) across settings. At the very least, these findings raise questions about how universal the age and sex distributions typically reported in studies from high-income countries are.
Conclusion
Our study demonstrates the feasibility, and necessity, of conducting comparable epidemiological studies of psychoses in more diverse settings, tailored to local healthcare systems. Such work has the potential to broaden the scope, and substantially increase the rigour and value, of research on psychoses beyond the usual settings, producing data of importance for our understanding of all aspects of psychoses and of direct relevance to the needs of local populations. | 2017-04-12T00:33:07.908Z | 2016-03-28T00:00:00.000 | {
"year": 2016,
"sha1": "b4fe1d053fd34eff7901ed215a4f3e0ff618db35",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/EB2768CDC8020E1E07BC5BC2D169FC0E/S0033291716000441a.pdf/div-class-title-the-incidence-of-psychoses-in-diverse-settings-intrepid-2-a-feasibility-study-in-india-nigeria-and-trinidad-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Crawler",
"pdf_hash": "0074ba072c9d5f956872b35a8d8f6f7ff5baea33",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253598039 | pes2o/s2orc | v3-fos-license | Case report: A novel mutation in TRPS1 identified in a Chinese family with tricho-rhino-phalangeal syndrome I: A therapeutic challenge
Tricho-rhino-phalangeal syndrome (TRPS) is a rare autosomal dominant malformation caused by mutations involving the TRPS1 gene. Patients with TRPS exhibit distinctive craniofacial and skeletal abnormalities. This report presents three intra-familial cases with TRPS1 gene mutations that showed the characteristic features of TRPS. A 13-year-old boy was admitted to the Department of Endocrinology for the evaluation of short stature. Physical examination revealed that the boy had thin sparse hair, a pear-shaped nose, protruding ears, a small jaw and brachydactyly. A survey of his family history indicated that the boy's sister and mother shared the same clinical features. Radiological techniques demonstrated a different degree of skeletal abnormalities in these siblings. Next-generation sequencing and quantitative PCR were performed and showed a novel deletion mutation in exons 3–5 in the three familial cases, confirming the diagnosis of TRPS I. The healthy father did not carry the deletion mutation. Currently, there is no specific therapy for TRPS I; however, genetic consultation may be useful for family planning.
Based on clinical characteristics and genetic analysis, TRPS is distinguished into three subtypes: TRPS I (OMIM 190350), known as Giedion syndrome, has distinct clinical manifestations that often correspond to distinct mutations or haploinsufficiency in the TRPS1 gene (5). Moreover, TRPS II (OMIM 150230), also named Langer-Giedion syndrome (LGS), is caused by contiguous gene deletions involving both TRPS1 and EXT1 (6,7). TRPS II differs from TRPS I by the presence of multiple exostoses and intellectual disability (6). TRPS III (OMIM 190351) is also associated with TRPS1 mutations. Besides typical TRPS features, TRPS III cases have more severe skeletal malformations (7).
Herein, we describe a Chinese Han family with three TRPS I cases caused by a novel deletion mutation in the TRPS1 gene involving exons 3-5 (Figure 1).
Patients and clinical evaluation
Case 1
A 13-year-old boy, the proband, was first admitted to our Endocrinology Department for evaluation of his short stature. He was born after a full-term pregnancy and normal delivery, as the second child in a non-consanguineous family. His parents reported that his birth weight and length were normal, but he gradually developed short stature after birth. In addition, he often suffered from respiratory infections and his tonsils were removed. However, no intellectual impairment was observed.
Upon admission, a routine examination revealed that the boy's weight was 81 kg (>97th percentile) and his standing height was 152.2 cm (3rd percentile). Pubertal development was normal. Other prominent dysmorphic features included markedly thin and sparse scalp hair, protruding ears, a bulbous pear-shaped nose, and a long philtrum with a thin upper lip [Figure 2A(a)]. Laboratory tests showed that serum levels of calcium, inorganic phosphate, alkaline phosphatase, free T4, TSH, PTH, corticosteroid and insulin-like growth factor 1 (IGF-1) were normal. The karyotype was 46, XY. Further extremity and radiological examinations showed brachydactyly of the fingers and toes [Figure 2B(a)].
Case 2
The patient was a 23-year-old girl, the elder sister of patient 1. She showed similar features to her brother, with a height of 146 cm (<3rd percentile). She was almost bald and declined to take off her wig [Figure 2A(b)]. She showed brachydactyly with obvious clinodactyly, a deviation of the forefingers, middle fingers and ring fingers bilaterally. Radiography revealed distortion of the proximal middle phalanges of the second, third and fourth fingers bilaterally. She also showed a skeletal malformation of the second through fifth proximal phalanges on both feet [Figure 2B(b)].
Case 3
The mother of the siblings, with a height of 140 cm (<3rd percentile), presented with sparse scalp hair and a nose with a bulbous tip [Figure 2A(c)].
Molecular analysis
Genomic DNA from the proband and his family members was extracted from peripheral blood samples. A custom-designed Medical Exome Sequencing panel (MES, AmCare Genomic Lab), including target-region capture of more than 5,000 phenotype-related genes contained in the Online Mendelian Inheritance in Man (OMIM) database, was applied, followed by next-generation sequencing (NGS, PE 150) on the Illumina platform (Illumina, Inc.). Alignment of the sequence to the reference human genome (hg19) was performed by NextGen (Softgenetics, LLC). Trio analysis including both SNV annotation and exome-based CNV identification was done by an in-house pipeline. Synonymous as well as common SNPs (MAF > 0.1% in gnomAD) were filtered out subsequently.
Figure 1. Pedigree of the family. The arrow indicates patient 1 as the proband.
Molecular Findings
Based on the MES trio analysis and qPCR validation, a small heterozygous deletion c.38-?_2700+?del within the TRPS1 gene (NM_014112.5) was identified (Figure 3A). It segregated in all the patients of this family (the proband, his elder sister, and his mother), and the healthy father did not carry the deletion (Figures 3B,C). This novel small deletion includes exons 3 to 5, and is not present in the gnomAD database, HGMD, or any peer-reviewed publication. Because of the multi-exon deletion, it is predicted to disrupt the reading frame, resulting in a frameshift (p.Asn13Lysfs*3), and to undergo nonsense-mediated decay (NMD). According to the ACMG guideline, this variant is classified as likely pathogenic.
Discussion
TRPS1 was reported as the causal gene of TRPS I by Momeni et al. in 2000 (3). Haploinsufficiency is the known pathogenic mechanism for the TRPS1 gene (7,10). In a previous comprehensive study, deletion variants of TRPS1 have been reported in multiple cases; most of them are whole-gene deletions that include exons 1 to 7, or large fragment deletions. Only one patient carrying a smaller (exon 2-6) deletion within the gene has been reported (2). The recurrence of variable sizes of fragment deletion suggests the structural complexity of this region. We are reporting the second family carrying a small 3-exon deletion within the TRPS1 gene, which is predicted to disrupt the functional GATA motif of TRPS1. A mouse model study has revealed that a heterozygous knockdown of the GATA motif leads to hair and facial anomalies that overlap with findings of TRPS (11).
Our study also provides further evidence that structural variation is a common cause of TRPS. In this study, we used an optimized pipeline that combined both SNV identification and NGS coverage-depth data for CNV calls (even small deletions/duplications) within one dataset, which proved to be a sensitive and cost-effective genetic analysis for suspected TRPS patients, as well as for a better understanding of the genetic etiology of TRPS.
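To illustrate the coverage-depth side of such a pipeline, the following Python sketch computes normalized per-exon depth ratios, in which a heterozygous deletion appears as a ratio near 0.5. The depth values, the median-based normalization, and the absence of GC correction and statistical calling are simplifying assumptions, not the actual in-house pipeline:

import numpy as np

def exon_depth_ratios(sample_depth, reference_depths):
    # Normalized per-exon depth ratios: ~1.0 for two copies, ~0.5 for a
    # heterozygous deletion. Median normalization keeps a multi-exon
    # deletion from dragging down the sample's own scaling factor.
    sample = np.asarray(sample_depth, dtype=float)
    ref = np.asarray(reference_depths, dtype=float)
    sample_norm = sample / np.median(sample)
    ref_norm = ref / np.median(ref, axis=1, keepdims=True)
    return sample_norm / np.median(ref_norm, axis=0)

# Hypothetical mean depths over TRPS1 exons 1-7; exons 3-5 are reduced.
proband = [120, 110, 60, 55, 58, 115, 125]
controls = [[118, 112, 119, 108, 116, 110, 122],
            [95, 101, 99, 104, 97, 100, 103]]
print(np.round(exon_depth_ratios(proband, controls), 2))
# -> [1.11 1.01 0.54 0.51 0.54 1.07 1.09]  (exons 3-5 near 0.5)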
Definitive diagnosis of the disease is essential for performing timely therapeutic procedures. Nevertheless, some alternative approaches have been tried for therapy in a few TRPS cases, with mixed results. Short stature is a frequent clinical finding in affected individuals, and how to improve it is what these patients and their parents are most interested in. Stagi (12) and Sarafoglou (13) described TRPS I cases with or without growth hormone (GH) deficiency, and a remarkable increase in growth was observed with GH therapy in four cases. However, Naselli (14) reported another two TRPS I cases with poor growth that showed no improvement in linear growth after a 1-year GH replacement therapy. In our study, evaluation of the GH-IGF-1 axis revealed that the boy did not have GH deficiency. His bone age was 15 years, as assessed by the RUS-CHN radiographic atlas method; therefore, he had no indication for GH treatment.
Sparse scalp hair is another major feature of TRPS patients. Their diffuse alopecia varies from near-normal hair to complete baldness (15), and the treatment options for alopecia remain limited (15). In the reported case, neither topical minoxidil nor oral finasteride was effective in preventing the progression of alopecia or inducing hair growth; the patient's hair finally started to re-grow 4 months after a hair transplantation operation. All three of our patients complained of hair loss and a slow hair growth rate since childhood. Compared to Mi Soo Choi's patient (15), whose occipital scalp hair had normal density and diameter, our patients' hair was affected over the entire scalp and tended to be thinner. At present there is no specific therapy for TRPS; although alternative approaches such as GH replacement therapy for short stature and hair transplantation for baldness have been employed, the therapeutic results were mixed. Therefore, genetic counseling may be useful for family planning.
Data availability statement
The datasets for this article are not publicly available due to concerns regarding participant/patient anonymity. Requests to access the datasets should be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Zhongnan Hospital of Wuhan University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.
Author contributions
QH: conducted the research. QH and CJ: wrote the paper. QH, JZS and JLX: conceived the research. VWZ: analyzed the data. All authors contributed to the article and approved the submitted version. | 2022-11-18T15:31:16.378Z | 2022-11-18T00:00:00.000 | {
"year": 2022,
"sha1": "409a9095b9d7e480650250fbdbe34fdddf135a59",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "409a9095b9d7e480650250fbdbe34fdddf135a59",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
22134407 | pes2o/s2orc | v3-fos-license | Rising plasma nociceptin level during development of HCC: A case report
AIM: Although liver cirrhosis is a predisposing factor for hepatocellular carcinoma (HCC), relatively few reports are available on HCC in primary biliary cirrhosis. High plasma nociceptin (N/OFQ) levels have been shown in Wilson disease and in patients with acute and chronic pain. METHODS: We report a follow-up case of HCC, which developed in a patient with primary biliary cirrhosis. The tumor appeared 18 years after the diagnosis of PBC and led to death within two years. Alpha fetoprotein and serum nociceptin levels were monitored before and during the development of HCC. Nociceptin content was also measured in the tumor tissue. RESULTS: The importance and the curiosity of the presented case was the novel finding of the progressive elevation of plasma nociceptin level up to 17-fold (172 pg/mL) above the baseline (9.2±1.8 pg/mL), parallel with the elevation of alpha fetoprotein (from 13 ng/mL up to 3 480 ng/mL) during tumor development. Nociceptin content was more than 15-fold higher in the neoplastic tissue (0.16 pg/mg) than in the tumor-free liver tissue samples (0.01 pg/mg) taken during the autopsy. CONCLUSION: The results are in concordance with our previous observation that a very high plasma nociceptin level may be considered an indicator for hepatocellular carcinoma.
Horvath A, Folhoffer A, Lakatos PL, Halász J, Illyés G, Schaff Z, Hantos MB, Tekes K, Szalay F. Rising plasma nociceptin level during development of HCC: A case report. World J Gastroenterol 2004; 10(1): 152-154 http://www.wjgnet.com/1007-9327/10/152.asp
INTRODUCTION
The incidence of hepatocellular carcinoma (HCC) in primary biliary cirrhosis (PBC) is reported to range from 0.5% to 4.2%, which is somewhat lower than that in liver cirrhosis of other etiologies [1,2]. However, recent reports have shown an increasing trend. The incidence of HCC was found to be higher in stage III/IV than in stage I/II patients [1,3,4].
Relatively few prospective and comprehensive studies are available on HCC in PBC, and the number of case reports is also limited. Summarizing the data of the literature, 126 of 6 188 PBC patients (2.0%) were found to have HCC. Since HCC usually develops many years after the diagnosis of PBC, screening for HCC has clinical importance and offers the best hope for early detection. Serum AFP is a widely accepted marker of HCC [5]; however, using the conventional cut-off value of 500 ng/mL, it has a sensitivity of about 50% and a specificity of more than 90% in detecting HCC in a patient with coexisting liver disease [6]. The average survival of patients with HCC has improved in the last decades [7]. Female HCC patients often have a better prognosis than male patients [8]. Because of the lack of sensitive markers of malignant hepatic tumors, HCC is usually recognized at an advanced stage.
We report a follow-up case of HCC in a PBC patient. The importance of this case is the novel finding of progressive elevation in plasma nociceptin (N/OFQ, formerly named orphanin FQ) up to 17-fold above the baseline, parallel with the elevation of alpha fetoprotein (AFP) during tumor development. Moreover, high nociceptin content was found in the tumor tissue as well.
Nociceptin, a newly discovered neuropeptide of 17 amino acids, is the natural agonist of the NOP receptor, earlier named the opioid receptor-like 1 (ORL1) receptor. N/OFQ is a neuropeptide endowed with pronociceptive activity in vivo. Both the long and the short splice variants of the OP4 receptor mRNA have been isolated in rat liver [9]; however, there are no data about its physiological or pathophysiological role in liver function. To date, only a few clinical studies on nociceptin have been reported, and only one of them dealt with liver disease.
Findings of the presented case are in concordance with our previous observation that a high plasma N/OFQ level may be an indicator of HCC.
CASE REPORT
A 47-year-old woman was admitted for fatigue, itching, hepatomegaly, and high alkaline phosphatase and gamma-GT levels. Primary biliary cirrhosis was diagnosed based on the clinical symptoms, laboratory findings, AMA M2 positivity, and normal US and ERCP pictures. Liver biopsy revealed stage II/III. There was no family history of liver disease or autoimmune disease, and all other confounding risk factors including alcohol, HBV and HCV infections were excluded. The patient was a heavy smoker. The past surgical history was significant only for an appendectomy and two caesarean sections. Her menopause started at the age of 38. Bone mineral density was slightly below the normal range at the time of diagnosis of PBC. Ursodeoxycholic acid (Ursofalk) treatment and calcium and vitamin D supplementation were introduced.
Severe progression of osteoporosis was observed during the following years. Five years after the diagnosis of PBC, significantly decreased bone mineral density was demonstrated both at the femoral neck (T-score: -5.12, Z-score: -3.38) and at the lumbar vertebrae (T-score: -5.44, Z-score: -3.9). At the age of 57, despite calcium and vitamin D supplementation and calcitonin and bisphosphonate (Fosamax) treatment, the severe osteoporosis led to a hip fracture and multiple vertebral collapses, which left the patient permanently disabled.
Progression of PBC to stage IV occurred. She was monitored by yearly US, regular laboratory tests, and AFP measurement every 3-4 months. For different reasons we collected blood samples for plasma nociceptin level determination as well. Sampling was done in conformity with accepted ethical standards and was approved by the regional ethical committee.
Hepatocellular carcinoma developed 18 years after the diagnosis of PBC. Regular follow-up US investigations revealed a rapidly growing focal lesion of 2.5 cm in diameter in segment V of the liver. Fine-needle biopsy findings proved hepatocellular carcinoma. After the HCC diagnosis, a continuous and rapid increase in the size of the tumor, elevation of ALP, GGT, plasma nociceptin and AFP levels, and clinical deterioration were observed. The AFP value had been within the normal range one year before the tumor was detected (13 ng/mL), but it was elevated at the time of the diagnosis of HCC (426 ng/mL) and rose up to 3 480 ng/mL. The plasma nociceptin level, measured by radioimmunoassay (125I-Nociceptin kit, Phoenix Pharmaceuticals, Phoenix, CA, USA), was within the normal range (9.2±1.8 pg/mL) in the tumor-free stage (10.6 pg/mL), while progressive elevation was detected during the tumor development (15.8, 65.8, 103.7, 128.0, 172.2 pg/mL), reaching the highest value before death (Figure 1). Higher nociceptin content was measured in the tumor tissue (0.16 pg/mg) compared to the tumor-free liver tissue sample (0.01 pg/mg) taken during the autopsy. The patient refused any surgical or other systemic or local tumor treatment. The size of the tumor increased up to 12 cm in diameter and involved segments V, VII, VIII and partly IV. No metastasis was detected, but necrosis developed in the central region of the tumor. The patient received supportive treatment and analgesics, and died of tumorous cachexia 19 months after the discovery of HCC. The autopsy confirmed both PBC stage IV and HCC. Focal presence of AFP was shown in the tumor by immunohistochemistry (Figures 2A and B).
The nociceptin content in the HCC tissue (0.16 pg/mg) was 15-fold higher than that in the tumor-free liver tissue sample (0.01 pg/mg) taken during the autopsy.
DISCUSSION
The presented case is a further example of the occurrence of HCC in PBC, and it supports previous observations that HCC usually develops in stage III/IV patients [3]. The tumor developed 18 years after the diagnosis of PBC and led to the death of the patient within two years. Furthermore, severe osteoporosis is a common disorder in PBC, although it has recently been considered a non-specific complication [10,11]. In our patient, the early menopause together with PBC resulted in multiple bone fractures causing lifelong disability. The low serum osteocalcin level indicated a low-turnover osteoporosis. High serum osteoprotegerin and low RANKL have been reported in PBC [12], and we found the same alteration in this patient. Progression of bone loss was detected although the serum osteoprotegerin was two-fold higher than normal, which suggested that the inflammatory process in the liver could also contribute to the elevation of osteoprotegerin. This is the first follow-up case in which progressive elevation of plasma nociceptin level was detected in parallel with the elevation of AFP during tumor development, and high N/OFQ content was found in the tumor tissue.
Nociceptin is the endogenous agonist of a G-protein-coupled, naloxone-insensitive opioid-like 1 receptor (ORL1), recently named OP4. Although nociceptin is structurally related to opioid peptides, especially to dynorphin A, it does not interact with µ, δ and κ receptors. The N/OFQ/OP4 system is a newly discovered peptide-based signalling pathway involved in the modulation of pain and cognition. Opioid antagonists have been successfully used for the treatment of pruritus in patients with PBC [13]. These results, together with the data that high plasma N/OFQ levels have been reported in Wilson disease and in patients with chronic pain, led us to measure plasma nociceptin levels in primary biliary cirrhosis [14]. Our motivation was reinforced by an accidental observation that we found extremely high N/OFQ in a Wilson patient with advanced HCC.
The nociceptin levels in blood samples collected in the pre-tumor stage and during the tumor development period clearly showed that the elevation of N/OFQ was parallel with that of AFP and the clinical deterioration. When the tumor was first detected, the N/OFQ level was 6-fold higher than that in healthy controls (65.8 versus 9.2±1.8 pg/mL, n=29) and in other PBC patients without HCC (12.1±3.2 pg/mL, n=21). When the tumor reached 11 cm, the N/OFQ level was 17-fold higher than normal. Since the N/OFQ content was 15-fold higher in tumor tissue than in tumor-free parts of the liver, the question arose whether nociceptin was produced by the HCC tissue or accumulated in the tumor by passive binding or via increased nociceptin receptor expression. Further research is needed to clarify the mechanism and clinical significance of the highly elevated N/OFQ level in HCC. Since N/OFQ transcripts are expressed in immune cells [15], the high N/OFQ level may also be an indicator of an altered reaction of the body including immunological, cytokine and other mechanisms.
An elevated N/OFQ level might represent a compensatory mechanism in the N/OFQ/OP4 system to modulate pain perception in the central nervous system. This mechanism could explain why some patients with a very high plasma N/OFQ level did not have pain despite an advanced stage of malignant liver tumor. It is remarkable that the N/OFQ level was 3-fold higher in our patient than the highest values reported in patients with chronic pain without malignant disease [16].
In conclusion, the novel finding of this study is that progressive elevation of plasma nociceptin was detected in parallel with the elevation of AFP during tumor development, and a high N/OFQ content was found in the tumor tissue of a PBC patient with hepatocellular carcinoma. This is in concordance with our previous observation that a high plasma N/OFQ level might be considered an indicator of HCC.
Figure 1. Serum alpha fetoprotein and plasma nociceptin levels before and during the development of hepatocellular carcinoma in the presented patient with primary biliary cirrhosis.
Figure 2. Hepatocellular carcinoma in the presented patient with primary biliary cirrhosis. A: Gross view of the liver at autopsy. Grayish nodules of carcinoma protrude on the surface of the cirrhotic liver. Bar = 1 cm. B: The focal red staining on the histologic picture of the hepatocellular carcinoma shows the immunohistochemical reaction for AFP. Original magnification 400×.
| 2017-12-18T22:25:38.666Z | 2004-01-01T00:00:00.000 | {
"year": 2004,
"sha1": "c86db7c54e21491bb17ecc2eda2ddbe9d88df736",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v10.i1.152",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c86db7c54e21491bb17ecc2eda2ddbe9d88df736",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11451498 | pes2o/s2orc | v3-fos-license | Meclozine Facilitates Proliferation and Differentiation of Chondrocytes by Attenuating Abnormally Activated FGFR3 Signaling in Achondroplasia
Achondroplasia (ACH) is one of the most common skeletal dysplasias with short stature caused by gain-of-function mutations in FGFR3 encoding the fibroblast growth factor receptor 3. We used the drug repositioning strategy to identify an FDA-approved drug that suppresses abnormally activated FGFR3 signaling in ACH. We found that meclozine, an anti-histamine drug that has long been used for motion sickness, facilitates chondrocyte proliferation and mitigates loss of extracellular matrix in FGF2-treated rat chondrosarcoma (RCS) cells. Meclozine also ameliorated abnormally suppressed proliferation of human chondrosarcoma (HCS-2/8) cells that were infected with lentivirus expressing constitutively active mutants of FGFR3-K650E causing thanatophoric dysplasia, FGFR3-K650M causing SADDAN, and FGFR3-G380R causing ACH. Similarly, meclozine alleviated abnormally suppressed differentiation of ATDC5 chondrogenic cells expressing FGFR3-K650E and -G380R in micromass culture. We also confirmed that meclozine alleviates FGF2-mediated longitudinal growth inhibition of embryonic tibia in bone explant culture. Interestingly, meclozine enhanced growth of embryonic tibia in explant culture even in the absence of FGF2 treatment. Analyses of intracellular FGFR3 signaling disclosed that meclozine downregulates phosphorylation of ERK but not of MEK in FGF2-treated RCS cells. Similarly, meclozine enhanced proliferation of RCS cells expressing constitutively active mutants of MEK and RAF but not of ERK, which suggests that meclozine downregulates the FGFR3 signaling by possibly attenuating ERK phosphorylation. We used the C-natriuretic peptide (CNP) as a potent inhibitor of the FGFR3 signaling throughout our experiments, and found that meclozine was as efficient as CNP in attenuating the abnormal FGFR3 signaling. We propose that meclozine is a potential therapeutic agent for treating ACH and other FGFR3-related skeletal dysplasias.
Introduction
Achondroplasia (ACH) is one of the most common skeletal dysplasias with an incidence of one in 16,000 to 26,000 live births [1]. Clinical features of ACH include rhizomelic short stature, apparent macrocephaly with midface hypoplasia, bowing of the lower limbs, and increased lumbar lordosis [2]. ACH is caused by gain-of-function mutations in the fibroblast growth factor receptor 3 (FGFR3) gene [3,4]. FGFR3 is a key regulator of endochondral bone growth, which signals through several intracellular pathways including the signal transducer and activator of transcription (STAT) and mitogen-activated protein kinase (MAPK) [5][6][7]. Gain-of-function mutations of FGFR3 cause several short-limbed skeletal dysplasias such as hypochondroplasia (HCH) [8], severe ACH with developmental delay and acanthosis nigricans (SADDAN) [9], and thanatophoric dysplasia (TD) types I and II [10]. In contrast, loss-of-function mutations in FGFR3 lead to the CATSHL syndrome in humans, which is characterized by overgrowth of the skeleton including camptodactyly, tall stature, scoliosis, and hearing loss [11], as well as spider lamb syndrome in sheep [12]. These findings indicate that the FGFR3 signaling functions as a negative regulator of endochondral bone growth.
No effective treatments for FGFR3-related skeletal dysplasias are currently available. Growth hormone (GH) has been administered to children with ACH based on evidence of a short-term beneficial effect [13]. The response to GH, however, is moderate and the long-term effect remains controversial. It is conceivable that downregulation of the FGFR3 signaling alleviates the skeletal phenotype of FGFR3-related skeletal dysplasias. Small chemical compounds that antagonize the FGFR3 signaling have recently been identified. Toxicological profiles of these compounds, however, remain mostly unresolved [14][15][16]. The C-type natriuretic peptide (CNP) is a potent antagonist of the FGFR3 signaling that alleviates the short-limbed phenotype of ACH mice through its inhibition of the FGFR3-MAPK pathway [6,17]. CNP has a short half-life and continuous intravenous infusion is required for in vivo experiments [18]. The CNP analog with an extended half-life, BMN 111, has recently been developed and significant recovery of bone growth was demonstrated in ACH mice by subcutaneous administration of BMN 111 [19].
The drug repositioning strategy, in which a drug currently used for patients with a specific disease is applied to another disease, has gained increasing attention from both academia and industry in recent years [20,21]. The advantage of this strategy is that the identified drugs can be readily applied to clinical practice, because the optimal doses and adverse effects are already established. Here, we screened 1,186 FDA-approved compounds to identify a clinically applicable drug that ameliorates ACH and other FGFR3-related skeletal dysplasias. We found that meclozine dihydrochloride, an anti-emetic drug commonly used for its antihistamine activity, efficiently suppresses FGFR3 signaling in three different chondrocytic cell lines and in embryonic bone organ culture. We also identified that meclozine suppresses FGF2-mediated phosphorylation of ERK.
Meclozine facilitates chondrocyte proliferation and mitigates loss of extracellular matrix in FGF2-treated RCS cells
As rat chondrosarcoma (RCS) chondrocytic cells express high levels of FGFR3, exogenous administration of FGF2 readily recapitulates cellular processes occurring in FGFR3-related skeletal dysplasias [22]. We thus added 10 µM of each of the 1,186 FDA-approved chemical compounds (Prestwick Chemical) along with 5 ng/ml FGF2 to the RCS cells. Quantification of RCS proliferation by the MTS assay revealed that meclozine consistently induced 1.4-fold or more increases in RCS proliferation. In addition, 0, 1, 2, 5, 10, and 20 µM of meclozine exhibited dose-dependent increases in RCS proliferation (Figure 1A). We did not observe dose-dependency at 50 µM, which was likely due to cell toxicity. We also confirmed that 10 and 20 µM of meclozine increased the number of RCS cells (Figure 1B).
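To make the hit-selection arithmetic concrete, the short sketch below flags compounds whose MTS readout exceeds the 1.4-fold threshold relative to an FGF2-only control. It is an illustration only: the absorbance values, the control value, and the variable names are invented stand-ins, not data from the original screen.

```python
import numpy as np

# Illustration of the hit-selection arithmetic: readouts are normalized to an
# FGF2-only control and compounds giving >= 1.4-fold proliferation are flagged.
# All values below are invented stand-ins for the screen's readouts.
rng = np.random.default_rng(0)
absorbance = rng.uniform(0.8, 1.6, size=1186)  # MTS readout per compound (FGF2 present)
fgf2_only_control = 1.0                        # mean readout with FGF2 but no compound

fold_change = absorbance / fgf2_only_control
hits = np.flatnonzero(fold_change >= 1.4)      # candidate FGFR3-signal suppressors
print(f"{hits.size} of {absorbance.size} compounds pass the 1.4-fold threshold")
```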
We next compared the effect of meclozine with that of CNP [6,17] as a positive control. Growing RCS cells produced a large amount of cartilage-like sulfated proteoglycans, which were visualized by Alcian blue staining ( Figure 1C). The matrix proteoglycans were almost completely lost at 72 hours after addition of FGF2 due to inhibition of proteoglycan production and also to induction of matrix metalloproteinase-mediated degradation [6]. Treatment of FGF2-added RCS cells with meclozine partly restored staining for Alcian blue and round chondrocyte-like cell shapes, which were similar to those observed with CNP.
We first confirmed that treating RCS cells with FGF2 for four hours induced expressions of matrix metalloproteinase 10 (Mmp10), Mmp13, and a disintegrin-like and metalloproteinase with thrombospondin type 1 motif 1 (Adamts1) transcripts, as has been previously reported [6]. We found that meclozine and CNP significantly suppressed expressions of these matrix metalloproteinases ( Figure 1D). We also quantified expressions of Col2a1 and Acan transcripts, but FGF2 treatment for 72 hours did not reduce the expression levels of these genes in RCS cells ( Figure S1). Meclozine thus decreases expressions of the matrix metalloproteinases in FGF2-treated RCS cells.
Meclozine mitigates abnormally suppressed proliferation of HCS-2/8 chondrocytes expressing FGFR3-K650E, -K650M, and -G380R

We examined the effects of meclozine on chondrocyte proliferation under the influence of FGFR3 mutants. As RCS cells express high levels of wild-type FGFR3, we used human chondrosarcoma (HCS-2/8) cells to observe unequivocal effects of the transduced FGFR3 mutants. We first introduced lentivirus carrying active mutants of FGFR3 (K650E in TDII, K650M in SADDAN, and G380R in ACH) into HCS-2/8 cells. The lentivirus carried IRES2-driven Venus cDNA downstream of FGFR3 cDNA. The MTS assay demonstrated that K650E-expressing HCS-2/8 cells showed significantly suppressed cellular proliferation (Figure S2). Meclozine partially rescued the growth arrest without apparent cellular toxicity in HCS-2/8 cells expressing the three FGFR3 mutants (Figure 2A). We also observed that the areas of Venus signals, which should be proportional to the number of Venus-positive cells, were increased by meclozine as well as by CNP in K650E- and G380R-expressing HCS-2/8 cells (Figure 2B).
Meclozine mitigates abnormally suppressed differentiation of ATDC5 chondrogenic cells expressing FGFR3-G380R and -K650E in micromass culture
We next examined the effects of meclozine on chondrocyte differentiation in the presence of FGFR3 mutants. ATDC5 cells retain potency to differentiate into chondrocytes and are commonly used to study chondrogenesis in vitro [15]. We added meclozine to the micromass culture of ATDC5 cells that were infected with lentivirus expressing FGFR3-wild-type, FGFR3-G380R, or -K650E. Alcian blue staining revealed abundant sulfated proteoglycans on wild-type FGFR3 cells, while the staining intensity was reduced by the mutations. Addition of meclozine simultaneously with the chondrogenic induction alleviated the inhibitory effect of the G380R and K650E with and without statistical significance, respectively. Quantitative analysis of sulfated glycosaminoglycans in cell lysate demonstrated that meclozine increased the levels of glycosaminoglycans ( Figure 3).
Meclozine increases the longitudinal length of embryonic tibiae with or without FGF2 treatment in bone explant culture
We further quantified the effect of meclozine on FGF2-mediated inhibition of cartilage development in bone organ culture employing limb rudiments isolated from developing murine embryonic tibia [14]. We added combinations of 100 ng/ml FGF2, 0.2 µM CNP, and 20 µM meclozine to the culture medium, and compared the length of treated tibiae with that of the contralateral control tibia from the same individual. The addition of FGF2 inhibited longitudinal growth of bone and cartilage of embryonic tibiae, while CNP and meclozine significantly attenuated the growth inhibition driven by FGF2 (Figure 4). Histological analysis revealed that FGF2 treatment reduced the thickness of the hypertrophic chondrocyte layer, while treatments with CNP and meclozine mitigated the effect of FGF2 (Figure S4). It is interesting to note that meclozine also increased the length of the tibia without FGF2 treatment, although without statistical significance.
Meclozine attenuates ERK phosphorylation in FGF2-treated RCS cells
We next scrutinized the effects of meclozine on the downstream signaling pathways of FGFR3 in FGF2-treated RCS cells. RCS cells were pretreated with meclozine for 30 minutes before adding FGF2, and the phosphorylation levels of ERK and MEK were determined by Western blotting. The FGF2-mediated ERK1/2 phosphorylation was attenuated by meclozine, while MEK1/2 phosphorylation remained unchanged (Figure 5A). We next introduced constitutively active (ca) mutants of ERK, MEK, and RAF into RCS cells using lentivirus and quantified cell growth with the MTS assay. As predicted, meclozine ameliorated caMEK- and caRAF-mediated growth inhibition, whereas meclozine had no effect on caERK-mediated growth inhibition (Figure 5B). We observed similar effects by counting cells (Figure S5). Both data point to the notion that meclozine is likely to inhibit MEK1/2-mediated ERK1/2 phosphorylation or to activate phosphatase(s) for phosphorylated ERK1/2 (Figure 6).

[Figure 1 caption, recovered fragments: (B) Meclozine rescued the FGF2-mediated growth arrest of RCS cells. (C) Meclozine (10 µM) ameliorated FGF2-mediated alteration of cellular shape and loss of extracellular matrix. RCS cells were treated with 5 ng/ml FGF2 with and without 0.2 µM CNP or 20 µM meclozine for 72 hours, and cartilage-like sulfated proteoglycan matrix was stained by Alcian blue. Growing RCS cells were round-shaped and produced abundant cartilage-like sulfated proteoglycan matrix in the absence of FGF2. FGF2 treatment transformed some cells to fibroblast-like shapes and prominently suppressed expression of sulfated proteoglycan matrix. In the RCS cells treated with CNP or meclozine, the cellular shape remained round and the intensity of Alcian blue staining approximated that of FGF2-negative cells. Representative images of triplicated experiments are shown. Magnified images of the middle panels are shown in the rightmost column. Bars in the left, middle, and right panels are 750, 150, and 30 µm, respectively. (D) Meclozine (20 µM) inhibited mRNA expression of matrix metalloproteinases in FGF2-treated RCS cells. Cells were treated with FGF2 and either CNP or meclozine for four hours and mRNAs were quantified by real-time RT-PCR. Expression levels of Mmp10, Mmp13, and Adamts1 are presented as the mean and SD normalized to that of FGF2-negative cells (n = 3). FGF2-mediated increases of Mmp10, Mmp13, and Adamts1 mRNA were antagonized by CNP and meclozine. Statistical significance is estimated by Student's t-test. doi:10.1371/journal.pone.0081569.g001]
Discussion
The drug repositioning strategy is an effort to identify new indications for the existing drugs. This strategy can potentially reduce the expenses and efforts associated with multi-stage testing of the hit compounds [20,23]. Among the 1,186 FDA-approved drugs that have favorable or validated pharmacokinetic and toxicological profiles, we identified meclozine as a novel inhibitor of the FGFR3 signaling, which can potentially be applied to clinical practice for short stature in FGFR3-related skeletal dysplasias. Meclozine is an over-the-counter H1 blocker, which has been safely used for motion sickness for more than 50 years. Because the optimal doses and adverse effects of meclozine have already been established, meclozine can be readily prescribed for FGFR3-related skeletal dysplasia after effectiveness in humans is confirmed.
Since there is no rational therapy for FGFR3-related disorders available to date, the development of novel modalities to suppress the FGFR3 signaling has long been awaited. Krejci et al. screened a library of 1,120 compounds and identified that NF449 inhibits FGFR3 signaling in RCS chondrocytes as well as in FGF2-treated embryonic bone organ culture. NF449 is structurally similar to suramin and possesses inhibitory activities against other tyrosine kinases in addition to FGFR3 [14]. Jonquoy et al. identified the synthetic compound A31 as an inhibitor of the FGFR3 tyrosine kinase by in silico analysis. They demonstrated that A31 suppresses constitutive phosphorylation of FGFR3 and restores the size of embryonic femurs of Fgfr3 Y367C/+ mice in organ culture. In addition, A31 potentiates chondrocyte differentiation in the Fgfr3 Y367C/+ growth plate [16]. Jin et al. screened a library of random 12-peptide phages and found that P3 has a high and specific binding affinity for the extracellular domain of FGFR3. They showed that P3 promotes proliferation and chondrogenic differentiation of cultured ATDC5 cells, alleviates the bone growth retardation in bone rudiments from TD mice (Fgfr3 Neo-K644E/+ mice), and finally reverses the neonatal lethality of TD mice [15]. These novel FGFR3 tyrosine kinase inhibitors, however, may inhibit tyrosine kinases other than FGFR3 and may exert unexpected toxic effects in humans. Meclozine may also inhibit unpredicted tyrosine kinase pathways, but we can expect that there will be no overt adverse effects, because meclozine has been safely used for more than 50 years.
CNP is another therapeutic agent for FGFR3-related disorders. CNP-deficient mice were dwarfed with narrowing of the proliferative and hypertrophic zones of the growth plates [24].
Loss-of-function mutations in NPR2, encoding a receptor for CNP, are responsible for acromesomelic dysplasia, Maroteaux type (AMDM), a form of short-limbed human skeletal dysplasia [25]. Conversely, overexpression of CNP prevented the shortening of achondroplastic bones by inhibiting the MAPK signaling pathway [17]. Yasoda et al. demonstrated that continuous delivery of CNP through intravenous infusion successfully normalized the dwarfism of Fgfr3 ach mice [18]. As CNP has a very short half-life, Lorget et al. developed an extended plasma half-life CNP analog, BMN111, which is resistant to neutral-endopeptidase digestion [19]. They showed that subcutaneous administration of BMN111 leads to a significant recovery of bone growth in Fgfr3 Y367C/+ mice. Meclozine showed inhibitory activity on the FGFR3 signaling similar to that of CNP in ex vivo bone explant culture as well as in in vitro chondrogenic cells. We expect that meclozine can be used as a substitute for, or in addition to, CNP and the CNP analog.
The MAPK pathway is one of the major signaling pathways of FGFR3 in proliferation and differentiation of chondrocytes. Sustained ERK activation in chondrocytes leads to decreased proliferation, increased matrix degradation, altered cell shape, and decreased differentiation [5,6]. CNP inhibits phosphorylation of RAF1 kinase through inhibition by PKGII [6,17]. We demonstrated that meclozine attenuates ERK phosphorylation in chondrocytes. Gohil et al. reported that meclozine has an anti-oxidative phosphorylation (OXPHOS) activity in addition to its anti-histamine and anti-muscarinic properties [26,27]. In their report, meclozine showed cytoprotective activities against ischemic injury in the brain and heart. Since other drugs with anti-histamine, anti-muscarinic, and anti-OXPHOS properties did not show inhibition of the FGFR3 signaling in our studies, the pharmacological actions of meclozine on chondrogenesis are unlikely to be related to its anti-histamine, anti-muscarinic, or anti-OXPHOS properties. Although additional studies are required to prove that meclozine is indeed effective for patients with FGFR3-related skeletal dysplasias, we propose that meclozine is an attractive potential therapeutic agent.

[Figure 5 caption: Meclozine attenuates FGFR3-mediated ERK phosphorylation in FGF2-treated RCS cells. (A) RCS cells were pretreated with 20 µM meclozine for 30 minutes before adding 5 ng/ml FGF2 and the levels of ERK and MEK phosphorylation were determined by Western blotting. As a loading control, the membranes were reprobed with antibodies against MEK and ERK. Meclozine suppressed FGF2-mediated ERK phosphorylation but not MEK phosphorylation after adding FGF2. (B) RCS cells were infected with lentivirus expressing constitutively active (ca) ERK, MEK, and RAF mutants. Cells were treated with 20 µM meclozine and their proliferation potencies were quantified using the MTS assay. The 490-nm absorbance was normalized to that without meclozine and the mean and SD are presented (n = 3). Meclozine rescued caMEK- and caRAF-mediated growth arrest, but had no effect on caERK-mediated growth arrest. doi:10.1371/journal.pone.0081569.g005]

[Figure 6 caption: FGFR3 signal transduction in chondrocytes and mechanisms of FGFR3 inhibitors. Activations of MAPK (mitogen-activated protein kinase) and STAT (signal transducers and activators of transcription) negatively regulate chondrocyte proliferation and differentiation. MAPK signaling includes sequential stimulation of a signaling cascade involving RAS, RAF, MEK, and ERK. CNP binding to natriuretic peptide receptor-B induces the generation of the second messenger cGMP, which activates PKG and leads to attenuation of the MAPK pathway by inhibiting RAF activation. NF449 [14], A31 [16], and P3 [15] are recently identified FGFR3 inhibitors. NF449 and A31 have inhibitory effects on the kinase activity of FGFR3. P3 has an affinity for the extracellular domain of FGFR3. Meclozine attenuates ERK phosphorylation. doi:10.1371/journal.pone.0081569.g006]
Materials and Methods
Screening of 1,186 FDA-approved compounds in rat chondrosarcoma (RCS) cells

RCS cells, which were kindly provided by Dr. Pavel Krejci (Medical Genetics Institute, Cedars-Sinai Medical Center, LA) [5], were cultured in Dulbecco's Modified Eagle's Medium (DMEM, Invitrogen) supplemented with 10% fetal bovine serum (FBS, Thermo Scientific) [14]. For the RCS growth arrest assays, ~5 × 10³ cells were seeded in a 96-well culture plate and incubated for 48 hours in the presence of 10 µM of the 1,186 FDA-approved chemical compounds (Prestwick Chemical) and 5 ng/ml of FGF2 (R&D Systems). Cell proliferation was quantified by the MTS assay (CellTiter 96 AQueous One Solution Cell Proliferation Assay, Promega) according to the manufacturer's instructions. Cell numbers were counted using the TC10 Automated Cell Counter (Bio-Rad). For counting cells, ~1 × 10⁵ cells were seeded in a 12-well culture plate and incubated for 48 hours in the presence of 10 or 20 µM of meclozine and 5 ng/ml of FGF2.
Alcian blue staining
For Alcian blue staining, ~1 × 10⁵ RCS cells in a 12-well plate were treated with 5 ng/ml FGF2 together with either 10 µM meclozine or 0.2 µM CNP (Calbiochem). After 72 hours, cells were fixed with methanol for 30 minutes at −20°C, and stained overnight with 0.5% Alcian Blue 8GX (Sigma) in 1 N HCl. For quantitative analyses, Alcian blue-stained cells were lysed in 200 µl of 6 M guanidine HCl for 6 hours at room temperature [28]. The optical density of the extracted dye was measured at 610 nm using PowerScan 4 (DS Pharma Biomedical).
Total RNA extraction and real-time RT-PCR analysis
Total RNA was isolated from FGF2-treated RCS cells in the presence of 20 µM of meclozine or 0.2 µM of CNP using Trizol. The first strand cDNA was synthesized with ReverTra Ace (Toyobo). We quantified mRNA expression of matrix proteinases (Mmp10, Mmp13, and Adamts1) and extracellular matrix proteins (Col2a1 and Acan) using a LightCycler 480 Real-Time PCR system (Roche) and SYBR Green (Takara). The mRNA levels were normalized to that of Gapdh.
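The relative quantification behind such normalized expression levels can be illustrated with a short sketch. The paper states only that mRNA levels were normalized to Gapdh and to FGF2-negative cells; the 2^-ΔΔCt scheme and all Ct values below are assumptions introduced for illustration.

```python
# Hedged sketch of relative quantification, assuming the common 2^-ddCt scheme;
# all Ct values are invented for illustration.
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Fold change of a target mRNA versus a reference (FGF2-negative) sample,
    with Gapdh as the internal normalizer."""
    d_ct_sample = ct_target - ct_gapdh        # normalize to Gapdh (treated sample)
    d_ct_ref = ct_target_ref - ct_gapdh_ref   # normalize to Gapdh (reference sample)
    return 2.0 ** -(d_ct_sample - d_ct_ref)   # 2^-ddCt

# Example: a hypothetical Mmp13 measurement after FGF2 treatment.
print(relative_expression(ct_target=22.1, ct_gapdh=17.0,
                          ct_target_ref=25.3, ct_gapdh_ref=17.1))
```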
Vectors and cell transfection
The pRK7-FGFR3-WT, -K650E, and -K650M vectors expressing wild-type and mutant FGFR3 [29] were kindly provided by Dr. Pavel Krejci (Medical Genetics Institute, Cedars-Sinai Medical Center, LA). The pRK7-FGFR3-G380R was constructed with the QuikChange site-directed mutagenesis kit (Stratagene). The wild-type and mutant FGFR3 cDNAs were excised from the pRK7-FGFR3 vectors by double digestion with HindIII and BamHI. The lentivirus vector, CSII-CMV-MCS-IRES2-Venus, was kindly provided by Dr. Hiroyuki Miyoshi (RIKEN BioResource Center, Tsukuba, Japan), and was digested with NheI and BamHI. The HindIII site of the insert and the NheI site of the vector were blunted using the Quick Blunting Kit (New England Biolabs) before ligation. HEK293 cells were plated in a 150-mm dish on the day before transfection. We introduced the pLP1, pLP2, and pLP/VSVG plasmids (ViraPower Packaging Mix, Invitrogen), and the CSII-CMV-MCS-IRES2-Venus vector into HEK293 cells with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocols. At 48 hours after transfection, we filtered the media containing the virus particles using Millex-HV 0.45 µm PVDF filters (Millipore) and purified the lentivirus using two steps of ultracentrifugation (Beckman Coulter). The lentivirus was added to the medium of HCS-2/8 or ATDC5 cells. After 48 hours, we confirmed that more than 90% of cells were positive for Venus signals.
Clones that express constitutively active mutants in the MAPK/ERK pathway, pcDNA4Myc-ERK2(PD), pcDNA3HA-MEK1(DD), and pcDNA3Flag-C-rafDN [30], were kindly provided by Dr. Mutsuhiro Takekawa (Institute of Medical Science, University of Tokyo, Japan). The inserts were digested with BamHI and XhoI, and were cloned into CSII-CMV-MCS-IRES2-Venus at the BamHI sites after blunting all digested ends. Lentivirus particles were generated as described above and were used to infect RCS cells.
Growth assay of human chondrosarcoma (HCS-2/8) cells
HCS-2/8 chondrocytic cells were kindly provided by Dr. Masaharu Takigawa [31]. The ethical review committee of Nagoya University Graduate School of Medicine approved the use of HCS-2/8 cells on the condition that we do not analyze the whole genome of HCS-2/8 cells. HCS-2/8 cells were seeded in a 96-well tissue culture plate. HCS-2/8 cells were then infected with lentivirus expressing either FGFR3-WT, -K650E, -K650M, or -G380R. After 48 hours, the numbers of cells were estimated by the MTS assay. In addition, ~1 × 10⁵ HCS-2/8 cells in a 12-well tissue plate were infected with lentivirus expressing FGFR3-G380R or -K650E. After 72 hours, the Venus-positive cell areas were quantified by the ArrayScan VTI HCS Reader (Thermo Scientific).
Micromass culture of ATDC5 cells
Mouse embryonic carcinoma-derived ATDC5 cells [32] were infected with lentivirus expressing either FGFR3-WT, -G380R, or -K650E. The infected ATDC5 cells were subjected to micromass culture as described previously [33]. Briefly, ATDC5 cells were suspended in DMEM/F-12 (1:1) medium (Sigma) containing 5% FBS at a density of 1 × 10⁷ cells/ml and plated in 10-µl droplets to simulate the high-density chondrogenic condensations. After a 1-hour incubation, the same medium supplemented with 1% insulin-transferrin-sodium selenite (ITS, Sigma) was added to the cells. Medium was changed every other day until harvesting the cells on day 6.
Bone explant culture
The animal study was approved by the Animal Care and Use Committee of the Nagoya University Graduate School of Medicine. For bone explant culture, tibiae were dissected under the microscope from wild-type ICR mouse embryos on E16.5 (Japan SLC), placed in a 48-well plate, and cultured in BGJb medium (Invitrogen) supplemented with 0.2% bovine serum albumin and 150 µg/ml ascorbic acid. The medium was changed every day. Embryonic tibiae were further treated with 100 ng/ml FGF2 in the presence or absence of 20 µM meclozine or 0.2 µM CNP for 6 days, then photographed and fixed in 10% formaldehyde in phosphate-buffered saline, demineralized with 0.5 M EDTA, and embedded in paraffin. Sections were stained with hematoxylin-eosin and Alcian blue. Images were taken with an SZ61 microscope (Olympus) equipped with an XZ-1 digital camera (Olympus). The longitudinal length of bone, defined as the length between the proximal and distal articular cartilage, was measured using ImageJ (NIH).
The expressions of the introduced wild-type and mutant FGFR3 constructs in HCS-2/8 and ATDC5 cells were determined by Western blotting using antibodies against FGFR3 (sc123, Santa Cruz) and GFP (11814460001, Roche).

Supporting Information

Figure S1. Expression levels of Col2a1 and Acan mRNAs were unchanged in FGF2-treated RCS cells. Cells were treated with FGF2 for 72 hours and mRNAs were quantified by real-time RT-PCR. Expression levels of Col2a1 and Acan mRNAs are presented as the mean and SD normalized to that of FGF2-negative cells (n = 3). FGF2 minimally suppressed the Acan expression but without statistical significance (Student's t-test). (TIF)

Figure S2. [Legend title lost in extraction.] FGFR3 is transcribed by CMV and Venus is downstream of IRES2 on the same transcript. As a control, the membrane was reprobed for Venus with an anti-GFP antibody. (TIF)

Figure S3. Immunoblotting of FGFR3 showing efficient expressions of FGFR3-WT, -G380R, and -K650E in ATDC5 cells. ATDC5 cells were infected with lentivirus expressing FGFR3-WT (wild-type), -G380R (ACH), and -K650E (TDII). FGFR3 is transcribed by CMV and Venus is downstream of IRES2 on the same transcript. As a control, the membrane was reprobed for Venus with an anti-GFP antibody. (TIF)

Figure S4. Meclozine increases the thickness of the embryonic tibial growth plate in FGF2-treated bone explant culture. Tibia sections were stained with hematoxylin-eosin and Alcian blue on day six of explant culture. Arrows indicate hypertrophic chondrocyte layers. FGF2 treatment reduced the thickness of the layer, while treatments with CNP and meclozine mitigated the effect of FGF2. (TIF)

Figure S5. Meclozine attenuates FGFR3-mediated ERK phosphorylation in FGF2-treated RCS cells. RCS cells were infected with lentivirus expressing constitutively active (ca) ERK, MEK, and RAF mutants. Cells were treated with 20 µM meclozine and the cell numbers were counted. The cell numbers were normalized to that without meclozine and the mean and SD are presented (n = 6). Meclozine rescued caMEK-mediated, but not caERK-mediated, growth arrest, although no statistical significance was observed. (TIF) | 2017-03-31T16:21:02.514Z | 2013-12-04T00:00:00.000 | {
"year": 2013,
"sha1": "937b4ef23dcc9460f969b28a26d36c069d80001c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0081569&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "937b4ef23dcc9460f969b28a26d36c069d80001c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
120951022 | pes2o/s2orc | v3-fos-license | Imaging the displacement field within epitaxial nanostructures by coherent diffraction: a feasibility study
We investigate the feasibility of applying coherent diffraction imaging to highly strained epitaxial nanocrystals using finite-element simulations of SiGe islands as input in standard phase retrieval algorithms. We discuss the specific problems arising from both epitaxial and highly strained systems and we propose different methods to overcome these difficulties. Finally, we describe a coherent microdiffraction experimental setup using extremely focused x-ray beams to perform experiments on individual nanostructures.
of a finite object by the finite illumination provided by a focusing device. In the second approach, an approximation that neglects the contribution of the substrate signal to the diffraction pattern is introduced. Finally, in section 6, we describe an experimental setup suited to perform coherent micro-diffraction experiments at third-generation synchrotron sources. We discuss here the existing gap between state-of-the-art experimental data and the requirements identified by our simulations.
Finite-element and x-ray diffraction calculations
The model system investigated here consists of SiGe islands grown by liquid phase epitaxy (LPE) with the method described in detail in [28]. The islands have the shape of a truncated square pyramid with a (001) top facet and {111} side facets, as shown in the sketch of figure 1(a). A common feature of LPE-grown islands is a distinct step in the Ge concentration at about 1/3 of their height, the concentration being lower in the bottom part and higher in the top part [29]. SiGe islands may also be grown by other techniques such as molecular beam epitaxy or chemical vapor deposition, which result in different, often multi-faceted island shapes, and pronounced gradients of the Ge content. In many cases, inhomogeneous island ensembles form, creating a demand for the investigation of single islands.
In analyzing x-ray diffraction patterns from island ensembles as well as single islands, modeling approaches have been very successful in the past [10,13]. Therefore, we use the structure of LPE-grown SiGe islands on Si(001) substrates as determined in [13] as the basis of our investigation. We consider a truncated pyramid with a base size of 140 nm, a height of 70 nm, and a Ge content of 20 and 24% in the bottom and top parts of the island, corresponding to a lattice mismatch of 0.8 and 1.0% with respect to Si, respectively. Figure 1(a) shows a scheme of such a sample, with its crystallographic directions highlighted. In figure 2(a), a 2D cut of the SiGe island in the xz-plane is shown, indicating the regions with different Ge concentration.

[Figure 2 caption: Comparison between the scattered intensity by a strained and an unstrained SiGe island. (a) Scattering factor corresponding to the Ge content assumed in the island, (b) finite-element calculation of the atomic displacement field u_z from the Ge content shown in (a) and (c) 2D cut of the 3D intensity distribution around the Si (004) reflection. From (d) to (f), the same graphs corresponding to a SiGe island within which the u_z distribution is set to zero.]
To calculate the x-ray scattered intensity, we need to know the displacement field of the atoms in the island and in the strained part of the substrate underneath. For this purpose, we perform FEM calculations using the COMSOL MULTIPHYSICS commercial FEM suite. The expansion of the SiGe domains is calculated taking into account the elastic anisotropy as well as the difference in elastic properties between Si and SiGe. For the latter, we use a linear interpolation between the values of pure Si and Ge. The boundary conditions are as follows: (i) the island and the substrate have to remain coherent at the interface, (ii) on the side faces of the substrate block only movements within the faces are allowed and (iii) the bottom face of the substrate is completely fixed. The top face of the substrate and the island are not constrained. These boundary conditions are valid if the substrate block is large enough so that at its outer faces the strain is virtually zero, which we checked by enlarging the substrate until no significant change in the strain field was observable.

From the FEM calculations, the shifts u(r) of the nodes of the FEM model with respect to the initial case are obtained. The actual displacements of the atoms of the crystal lattice can be calculated by interpolation of the values at the FEM nodes. In the calculation of the scattered intensity, it is not necessary to obtain the displacement field at an atomic level: the construction of a pseudo-lattice with a lower resolution in the range of a few nanometers per voxel is sufficient. In this pseudo-lattice, the shifts, and also the strains, are calculated with respect to a Si reference lattice:

u_i = R_i − R_i^ref,

with R_i being the position of atom i and R_i^ref being the position of the same atom in the reference lattice, i.e. in the absence of strain. Figure 2(b) shows the resulting displacement field distribution u_z(x, z) in the xz-plane. Displacement values range from −0.02 nm in the substrate close to the edges of the island, where compressive strain occurs, up to 0.98 nm at the top of the island, where the SiGe lattice relaxes toward its bulk lattice constant.
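As a rough illustration of this interpolation step, the sketch below maps scattered FEM nodal displacements onto a regular pseudo-lattice with a spacing of a few nanometers. The nodal positions and the toy displacement field are invented stand-ins; in practice the values come from the COMSOL solution.

```python
import numpy as np
from scipy.interpolate import griddata

# Toy stand-ins for the FEM output: (x, z) node positions in nm and nodal u_z.
rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 140.0, size=(5000, 2))
u_z_nodes = 0.98 * nodes[:, 1] / 140.0        # displacement grows with height (nm)

# Regular pseudo-lattice with 2 nm voxels covering the cross section.
x, z = np.meshgrid(np.arange(0.0, 140.0, 2.0), np.arange(0.0, 140.0, 2.0))
u_z_grid = griddata(nodes, u_z_nodes, (x, z), method='linear', fill_value=0.0)
```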
From the FEM calculations, the shifts u(r) of the nodes of the FEM model with respect to the initial case are obtained. The actual displacements of the atoms of the crystal lattice can be calculated as an interpolation to the values at the FEM nodes. In the calculation of the scattered intensity, it is not necessary to obtain the displacement field at an atomic level: the construction of a pseudo-lattice with lower resolution in the range of a few nanometers per voxel is sufficient. In this way, the shifts, and also strains, are calculated with respect to a Si reference lattice in this pseudo-lattice: with R i being the position of atom i and R ref i being the position of the same atom in the reference lattice, i.e. in the absence of strain. Figure 2(b) shows the resulting displacement field distribution in the x z plane u(x, z). Displacement values range from −0.02 nm in the substrate close to the edges of the island, where compressive strain occurs, up to 0.98 nm at the top of the island, where the SiGe lattice relaxes toward its bulk lattice constant. Figure 1(a) shows the sketch of an x-ray scattering experiment on a single SiGe nanostructure. Assuming a planar wavefield illumination, the scattered intensity around a Bragg reflection can be approximated by [30] I (q) = |FT(g(r))| 2 (1) with g(r) being a complex-valued object where G is the chosen Bragg vector and ρ(r) is the 3D scattering factor distribution within the island, which depends on the SiGe content. In the proximity of the Si (004) Bragg reflection, the phase φ(r) of the object reduces to where a is the lattice constant of the reference crystal (Si in our case). Therefore, this reflection is sensitive to the atomic displacements along the z-direction only. Figure 1(b) shows an iso-intensity surface representation of the calculated 3D intensity distribution I (q) around the Si (004) Bragg reflection obtained with equation (1), in which the fast-Fourier-transform (FFT) algorithm was employed. As we will see below, the use of the FFT algorithm is crucial to reduce computing time when propagating the wavefield back and forth between the detector and sample positions. In the calculation, the displacement distribution u z (r) obtained from the FEM calculation was used. We observe that the intensity pattern has two main contributions: at higher q z values lies the Si (004) substrate peak and, at lower q z values, we observe the signal arising from the SiGe island, as a consequence of the vertical tensile strain present within the SiGe island. Both peaks have a fourfold symmetry in the q x q y plane but are visibly asymmetric along the q z -direction as a consequence of the strain distribution. The SiGe peak presents clear streaks perpendicular to the pyramid's facets, marked with red arrows in figure 1(b). The central intensity streak along the q z -direction arises from the substrate surface and we will refer to it as the surface streak.
In figure 2(c), a 2D cut in the q_x-q_z plane of the 3D intensity distribution is shown. The origin of reciprocal space is chosen at the Si (004) substrate peak position, which appears at the exact center of the computational window after performing the FFT. This is due to the choice of Si as the reference for the displacement field. At lower q_z values, we observe the signal from the SiGe island, presenting lower intensity values and extending over a larger region in reciprocal space as compared with the Si peak. For both peaks one can observe intensity fringes arising from the finite size of the island and the displacement distribution within the substrate.
For comparison, we have performed an identical calculation of the 3D intensity distribution I(q) around the Si (004) reflection for the hypothetical case in which there are no displacements within the island (figures 2(d)-(f)). Despite being non-physical, this simulation nicely illustrates the effect of strain on the intensity distribution, which is centrosymmetric in the case of an unstrained object.

[Figure 3 caption: General scheme of the iterative phase retrieval algorithms.]
Phase retrieval of a SiGe island on a substrate
In a coherent x-ray scattering experiment, the coherent illumination of the sample ensures that the phase of the scattered beam carries the information on the positions of all scatterers, i.e. both on the shape of the crystal and on the internal displacement field. Chemical contrast may also affect the phase relation, but this effect remains negligible if the experiment is performed far from an absorption edge of the chemical elements in the semiconductor system. Therefore, detailed knowledge of both the amplitude and the phase of the scattered beam is highly desirable in order to retrieve the direct space description of the investigated sample. However, only the intensity, i.e. the squared modulus of the complex-valued wavefield, is experimentally accessible through photon detection. This is known as the 'phase problem'. A solution to this problem was proposed by Sayre in 1952 [16] for reciprocal space intensity measurements performed with an oversampling higher than 2 times the Nyquist frequency of the signal. This corresponds to confining the object in direct space to a finite region called the support, which occupies half of the total volume given by the computational window. As this problem has no analytical solution, the inversion relies on numerical iterative algorithms using back and forth transforms between direct space and reciprocal space. A sketch of the algorithm is shown in figure 3. The algorithm is initiated with a direct space guess that fulfills the support condition together with a set of random phases. For measurements performed in the far-field geometry on samples small enough to fulfill the Born approximation conditions, the relationship between direct and reciprocal space can be described by the Fourier transform of the electron density, which is implemented as an FFT in the iterative algorithm. In order to achieve convergence of the algorithm, a set of constraints has to be applied at each iteration in direct and reciprocal space. One common direct space constraint is the support condition, which is directly related to the oversampling condition. In reciprocal space, the modulus of the solution has to match the experimental data: |F(q)| = √I(q). The convergence of the algorithm is reached when the solution fulfills both the direct and reciprocal space constraints and can be monitored by the error metric ξ², given by

ξ² = Σ_q [|F_n(q)| − √I(q)]² / Σ_q I(q).     (3)

It can be shown that the error metric decreases at each iteration. For this reason, this algorithm is named error reduction (ER) [17], which is a variant of the original Gerchberg and Saxton algorithm [31]. However, the convergence of the ER algorithm is rather slow and it often stagnates in local minima. One common approach is to use the ER in combination with the hybrid input-output algorithm (HIO) [17]. HIO provides an element of feedback by including the solution before applying the real space constraint, g_n(r), in order to build the solution at the next iteration for regions in the object where the direct space constraints are not satisfied:

g_{n+1}(r) = g'_n(r) for r ∈ S,   g_{n+1}(r) = g_n(r) − β g'_n(r) for r ∉ S,     (4)

where g'_n(r) is the estimate after applying the reciprocal space constraint and S is the support. The parameter β is a number ranging between 0 and 1, and is typically set to a value close to 0.9. When convergence is reached, the error metric value given in equation (3) should converge to zero. However, for experimental data, a zero error metric cannot be obtained due to noise and limited intensity dynamic range (DR). Therefore, the algorithm is interrupted after ξ² becomes smaller than a certain threshold. For real data, several solutions with comparable and small error metric may be found.
These solutions are rejected. The search for convergence requires a lot of trial and error in order to find the correct sequence of ER and HIO, applying various direct space constraints. A solution is approved when different starting guesses (i.e. different sets of initial random phases) lead to similar, almost identical, solutions for the same ER + HIO cycle combination. In that case, small modifications of the algorithm sequence and/or an increase in the number of iterations do not affect the convergence.
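The ER and HIO update rules and the error metric of equation (3) can be condensed into a few lines. The sketch below is a minimal 2D implementation, assuming 'measured' holds √I(q) on the FFT grid and 'support' is a boolean mask; the default β = 0.9 matches the value quoted above.

```python
import numpy as np

def fourier_constraint(g, measured):
    """Impose the measured moduli sqrt(I(q)) while keeping the current phases."""
    G = np.fft.fft2(g)
    return np.fft.ifft2(measured * np.exp(1j * np.angle(G)))

def er_step(g, measured, support):
    """Error reduction: zero the object outside the support."""
    gp = fourier_constraint(g, measured)
    return np.where(support, gp, 0.0)

def hio_step(g, measured, support, beta=0.9):
    """Hybrid input-output: feedback outside the support, eq. (4)."""
    gp = fourier_constraint(g, measured)
    return np.where(support, gp, g - beta * gp)

def error_metric(g, measured):
    """Normalized mismatch between |FFT(g)| and the data, eq. (3)."""
    diff = np.abs(np.fft.fft2(g)) - measured
    return np.sum(diff ** 2) / np.sum(measured ** 2)
```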
Applying phase retrieval algorithms to our model system of a strained SiGe island presents some specific problems. The total displacement field in the island increases with the island size and can reach values of 0.98 nm for a 140 nm island, i.e. about twice the lattice constant (see figure 2(b)), which corresponds to a phase shift of about 48 rad (16π) at the (004) Bragg reflection. This means that the real space resolution has to be small enough to sample every phase change of 2π in the object. In addition, the presence of the substrate prevents the use of a finite size support in the horizontal direction. In the vertical direction, more than half of the direct space can be set to zero, which implies that the oversampling condition is at least fulfilled in that direction. The combination of these two specificities of our system, huge phase changes and a non-finite support in the horizontal direction, constitutes the main computational difficulty that has to be solved in order to retrieve the direct space description from an intensity pattern. In order to reduce the computing time, the whole numerical study is performed on a 2D sample cross section (taken in the center of the island) instead of the complete 3D data set. As the 2D cut exhibits as many phase shifts as the total 3D sample, together with the presence of the semi-infinite substrate, the mathematical problem to be solved is expected to remain the same. Whether the inversion problem is determined is directly related to the number of unknowns that have to be found compared with the number of known values given by the oversampled measurement.
It is clear from our first attempts that the inversion of the intensity pattern from the 140 nm SiGe island on a Si substrate is not accessible due to the non-finite support and the large phase changes within the sample. Indeed, we observe that the iteration process converges to the correct solution for several simpler cases: an isolated island with no substrate, an island with a substrate truncated in the horizontal direction or a weakly strained island on a substrate (for a fixed island size, the strain is artificially reduced by reducing the phase shift). The convergence is obtained using a combination of standard and modified phase retrieval algorithms. The standard algorithms are HIO and ER_so, where the latter is the ER with the usual support-only condition. The third algorithm, ER_sobj, is a modified ER with an additional constraint in real space, which imposes the knowledge of the magnitude of the complex-valued object. Finally, ER_so/sym is the ER algorithm with the usual support constraint and with an additional axial symmetry condition for both magnitude and phase of the object. Despite those additional constraints, the intensity pattern from a 140 nm island on a substrate cannot be inverted due to the combination of large phase changes and the absence of a finite support.
Nevertheless, the case of a realistic small island on a substrate can be successfully addressed. Indeed, as the phase shift in the island is smaller for smaller island sizes, the inversion becomes possible for islands with base sizes below 35 nm, even in the absence of a finite support. Such SiGe islands can be fabricated and are of interest for applications [32], but are so far out of reach for CDI experiments (see section 6). Figure 4(a) shows the intensity distribution calculated from a 35 nm island, together with the result of the inversion (b). In this calculation, the direct space pixel size of the 2D sample (which is a cross section of a 3D sample) is 1.18 nm in the x-direction and 0.84 nm along the z-direction. The computational window has 150 × 193 pixels in the horizontal and vertical directions, respectively. For perfect noiseless numerical data, where the full intensity range is preserved, we expect to obtain an error metric equal to zero, which means that the retrieved quantity matches the input data exactly. However, in some cases, we observe the retrieval of a direct space quantity that exhibits significant resemblance to the input data without, however, a perfect match. It turned out that the 35 nm island always converges in a strict way when using the following phase retrieval procedure: [200 × ER_sobj + 500 × HIO + 200 × ER_so/sym + 500 × HIO], repeated several times until the error metric drops below the value required for strict convergence (ξ² = 10⁻⁹). A general issue is the DR that can be achieved in the experiment and the fact that the substrate signal cannot be described kinematically. In order to account for these experimental constraints, we investigated the effects of excluding the substrate peak and using a limited DR. We observe that the procedure keeps converging strictly even after we introduce a beamstop of 5 × 4 pixels (horizontal × vertical) at the Si substrate Bragg peak and limit the DR to 7 orders of magnitude (as shown in figure 4(a)). Down to a DR of 5 orders of magnitude, convergence is reached but with an error metric slightly larger than before. However, all found solutions are essentially equal to the exact solutions, with very little discrepancy.
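For illustration, the converged cycle quoted above can be written as a driver around the steps sketched earlier. The two constrained ER variants below are hypothetical implementations consistent with the descriptions of ER_sobj and ER_so/sym; the helpers fourier_constraint, hio_step and error_metric are those of the previous sketch.

```python
def er_sobj_step(g, measured, support, magnitude):
    # ER with known object magnitude: keep the retrieved phases, impose |g|
    # inside the support (hypothetical implementation of ER_sobj).
    gp = fourier_constraint(g, measured)
    return np.where(support, magnitude * np.exp(1j * np.angle(gp)), 0.0)

def er_sosym_step(g, measured, support):
    # ER with support plus an axial mirror symmetry of magnitude and phase
    # (hypothetical implementation of ER_so/sym).
    gp = np.where(support, fourier_constraint(g, measured), 0.0)
    return 0.5 * (gp + gp[:, ::-1])

def retrieve(g, measured, support, magnitude, tol=1e-9, max_cycles=50):
    """Repeat [200 x ER_sobj + 500 x HIO + 200 x ER_so/sym + 500 x HIO]."""
    for _ in range(max_cycles):
        for _ in range(200):
            g = er_sobj_step(g, measured, support, magnitude)
        for _ in range(500):
            g = hio_step(g, measured, support)
        for _ in range(200):
            g = er_sosym_step(g, measured, support)
        for _ in range(500):
            g = hio_step(g, measured, support)
        if error_metric(g, measured) < tol:   # strict-convergence criterion
            break
    return g
```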
Unfortunately, the resolution and scattered intensity required to measure the 35 nm SiGe island at third-generation synchrotron sources are experimentally out of reach. Additionally, the experimental case would require the use of a silicon-on-insulator (SOI) wafer as a substrate, due to the finite support in the vertical direction forced by the lower end of the computational window at a certain depth of the substrate. In the following, we propose other methods that allow us to retrieve the phase for larger island sizes and provide a more realistic approach to the substrate truncation problem.

[Figure 4 caption fragment: the parts corresponding to the different algorithms are shown in different colors; details can be found in the text.]
Phase retrieval with finite-size illumination function
A possible approach to phase retrieving the coherent diffraction pattern of an island and the surrounding substrate is directly suggested by the experimental setup used for such an experiment. Usually one focuses the x-ray beam using beryllium refractive lenses, Kirkpatrick-Baez mirrors or Fresnel zone plates, creating a well-defined x-ray wavefront concerning both its intensity and its phase. In order to maximize the intensity scattered by the object, the size of the focused spot should ideally match the size of the object. The most important feature of this experimental setup is the fact that the transverse section of the x-ray beam exhibits a fast decaying intensity. In this case, the complex object g(r) to be retrieved is modified and is given by

g̃(r) = P(r)g(r) = P(r)ρ(r)e^{iφ(r)}.
Here P(r) is the complex-valued illumination function, which can also define the scattering region. In such a case, the phase retrieval procedure has to be modified to take into account the effect of the illumination function. The input function at each iteration of the HIO algorithm must be modified to be [33]

g̃_{n+1}(r) = g̃'_n(r) + [P(r)/P_max] β [g̃_n(r) − g̃'_n(r)],     (5)

where P_max is the maximum value of P(r) and g̃'_n(r) is the estimate after applying the reciprocal space constraint. This modification of the algorithm takes into account the fact that the intensity stems primarily from regions where the illumination function is more intense, and it is expected to improve the convergence of the algorithm in two ways. First of all, the illumination function is finite, allowing the use of a finite support. Secondly, knowledge of the illumination function does not just determine the finite extension of the object: it also determines an approximate value of the object's amplitude, which is dominated by the amplitude of the focused beam. The input function described in equation (5) is inspired by the ptychographic iterative engine (PIE) [34], where the illumination function is included in the complex function describing the object to be retrieved. However, the PIE approach uses the redundant information of diffraction images from overlapping regions to retrieve the object. In our case, the redundancy in the information is ensured by the oversampling condition. We use only one diffraction pattern and consider that the illumination function is known.
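A sketch of the illumination-weighted update of equation (5) follows. It assumes the fourier_constraint helper from the earlier sketch and reads the update rule as the Fourier-updated estimate plus a feedback term scaled by P(r)/P_max.

```python
import numpy as np

def hio_illum_step(g, measured, P, beta=0.9):
    """Illumination-weighted HIO update of eq. (5): the feedback term is scaled
    by P(r)/P_max so that weakly illuminated regions are corrected more gently."""
    gp = fourier_constraint(g, measured)      # helper from the earlier sketch
    weight = np.abs(P) / np.abs(P).max()      # P(r)/P_max
    return gp + weight * beta * (g - gp)
```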
As an example we show the phase retrieval of a 140 nm SiGe island and the substrate underneath. Figure 5(a) shows the simulated scattering pattern from an island illuminated by a focused x-ray beam. The scattering pattern is asymmetric with respect to the horizontal axis, which is an effect solely of the illumination function. As shown in figure 5(b), we have assumed a Gaussian-shaped beam with a full width at half maximum (FWHM) of 100 nm (which would represent in this case the central maximum of the Airy function created by a Fresnel zone plate setup) with a constant phase. The incident angle, which depends on the x-ray wavelength and on the probed Bragg reflection, was equal to 45°. One can observe the absence of a surface streak due to our definition of the illumination function, which uses an artificially small penetration depth. The small penetration depth is chosen in order to allow for the definition of a 3D illuminated region within the size of the computational window. In practice, however, the substrate contribution cannot be avoided using a focusing experimental setup because the depth of focus is of the order of a few hundred micrometers. Nevertheless, this contribution is anyway excluded in the analysis of experimental data because it cannot be fully described in terms of kinematical scattering. In fact, even though our approach is not completely correct, it actually describes very well the diffraction patterns observed in experiments using a focused x-ray beam of a size similar to that of the nanostructure, as will be shown in section 6.
The region to be retrieved was defined by the illumination function (see figure 5(b)). Considering the DR of the scattering pattern to be 4 orders of magnitude (which is experimentally feasible), the region to be phase retrieved was taken as the area where the amplitude of the illumination was above 1 × 10⁻² of its maximum. The x-ray beam was assumed to decay exponentially with a penetration depth of 0.5 µm. The complete phase map is shown in figure 5(c), where a phase variation of up to 7 × 2π was retrieved. In essence, instead of defining the scattering region by the computational window, the scattering region was defined by the beam itself.
In figure 5(d), we show the convergence of the phase retrieval code, using a sequence of [1500 × ER_sobj + 50 × HIO], in which only the HIO algorithm was modified according to equation (5). As in the previous example, the substrate and island electron densities were assumed to be known, and only the phases were retrieved. This allowed for the convergence of all retrieval attempts. This assumption is equivalent to experiments using materials with small variations in electron density and where only strain effects are studied. It would not apply to strained nanostructures with strong variations in the electron density, for which this approach does not seem to be possible as defined above. Most importantly, this phase retrieval approach is only feasible with a well-defined and known illumination function, which can be measured using the methods described, e.g., in [35,36].
Phase retrieval of an island separated from its substrate
We have shown before that the phase retrieval of an epitaxial SiGe nanostructure and its substrate is possible assuming that the structure is illuminated by a known finite-sized Gaussian function. This situation closely resembles a real experiment in which a focused x-ray beam is used to illuminate a single nanostructure. However, in practice we usually only record reliable data around the SiGe peak, while the signal around the substrate Bragg peak is difficult to measure. One needs to attenuate the beam in order not to saturate the detector, but then the noise level is increased, so that the DR is not sufficient for phase retrieval. In addition, the scattering from the unstrained substrate, as well as along the truncation rod, cannot be described correctly within kinematical scattering theory, so that algorithms based on a simple FFT cannot be used for this part of reciprocal space. In this section we show that the signal from the SiGe peak alone corresponds, to a good approximation, to the complex density within the island alone, separated from the substrate. We also prove that iterative phase retrieval algorithms with a finite support can be used to obtain the displacement field within the island directly from the intensity pattern around the SiGe peak alone. In a measurement of the SiGe peak around the (004) Bragg reflection in which the signal around the substrate peak is missing, it is no longer possible to use the Si lattice as a reference for the definition of the displacement fields within the island. We can use instead the lattice spacing obtained from the position in reciprocal space of maximum intensity in the SiGe peak. In figure 2(c), we observe this position at q_z(SiGe 004) = q_z(Si 004) − 0.052 Å⁻¹ for a 140 nm island, corresponding to a lattice spacing of a_SiGe = 5.492 Å. Figure 6(b) shows the phase φ within the island and its substrate according to equation (2), taking a_SiGe as the reference lattice parameter in the FEM calculation of the displacement fields u_z. Because the reference lattice spacing occurs in the middle of the island, we now observe small changes of the phase within the center of the island and a faster phase change at its edges and in the substrate. Figure 6(c) shows the calculated 2D intensity distribution around the SiGe (004) Bragg reflection for the complex object resulting from the amplitude distribution in figure 6(a) and the recalculated phase distribution of figure 6(b). The SiGe peak now appears centered in the window, as opposed to all previous cases where the Si lattice constant had been taken as a reference.
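The quoted lattice spacing follows from simple arithmetic, since q_z = 8π/a at the (004) reflection; the lines below reproduce it.

```python
import numpy as np

a_si = 5.431                      # Si lattice constant (angstrom)
q_si_004 = 8.0 * np.pi / a_si     # ~4.628 A^-1
q_sige_004 = q_si_004 - 0.052     # offset of the SiGe peak quoted in the text
a_sige = 8.0 * np.pi / q_sige_004
print(f"a_SiGe = {a_sige:.3f} A") # ~5.492 A
```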
In order to know which information can be extracted from a measurement around the SiGe peak only, we have selected a part of the complex-valued diffracted wavefield, corresponding to the region shown as a red rectangle in figure 6(c). This region is chosen along the q_z-direction in such a way that the substrate's peak is avoided while the region remains centered around the SiGe peak. The amplitude squared of the wavefield taken in this restricted reciprocal space area is shown in figure 6(d), where we have also replaced the values in the central pixel column, corresponding to the surface streak, by zeros. By means of an inverse Fourier transformation (FFT⁻¹), we obtain a complex object in direct space with the amplitude and phase shown in figures 6(e) and (f), respectively. We note that since the complex-valued wavefield is known here, this can be done in a single step by means of an FFT⁻¹ without the need for iterative phase retrieval algorithms. Cuts across the objects in direct space with and without the substrate are compared in figures 6(g) and (h) for the scattering factor and the phase, respectively. We observe that the resolution in direct space along the vertical direction is poorer in the case of the isolated island due to the smaller range in q-space of its corresponding diffraction pattern. This simple exercise shows that the SiGe diffraction peak alone describes, to a very good approximation, the original SiGe island separated from its substrate, especially concerning its phase. One would not expect such a good approximation a priori, since the intensity corresponding to the whole system (the island and the substrate) shown in figure 6(c) contains a coherent addition of the wavefields scattered by the island and the substrate. The reason why it works in this particular case lies in the different lattice spacings within the island and within the substrate. Therefore, this approximation may be applied to other epitaxial systems, provided that the lattice mismatch is sufficiently large. On the other hand, the method is not sensitive enough to reproduce the density change in the model island at 1/3 of its height, as shown in figure 6(g).
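The cropping-and-inversion step can be sketched as follows, assuming the full complex wavefield 'field' is known on a centered grid (true only for simulated data; for measured intensities the iterative retrieval discussed next is needed). The function name and the window arguments rows and cols are illustrative.

```python
import numpy as np

def island_from_sige_peak(field, rows, cols):
    """Crop a window around the SiGe peak (avoiding the substrate peak), zero
    the surface-streak column, and invert to direct space."""
    crop = field[rows[0]:rows[1], cols[0]:cols[1]].copy()
    crop[:, crop.shape[1] // 2] = 0.0          # suppress the surface streak
    return np.fft.ifft2(np.fft.ifftshift(crop))
```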
The next step for a model-independent analysis of the signal around the SiGe (004) peak is the demonstration of a phase retrieval obtained directly from the intensity pattern shown in figure 6(d). Such an inversion problem has two advantages with respect to the problem in section 3: (i) the object in direct space presents smaller phase variations and (ii) there is a finite support in both directions. Therefore, standard phase retrieval algorithms with a relaxed support constraint and no additional constraints can be applied in order to obtain an approximate solution for the isolated island. We performed successive series of [500 × HIO + 200 × ER_so]. Initially, we assumed a support much larger than the expected size of the object, which can be estimated from the autocorrelation function. Every 20 iterations of the HIO algorithm, a new support was created following the shrink-wrap method described in [37]. This method allowed the algorithm to progressively shrink the size of the support until a reasonable solution was found. As the criterion of convergence, we chose a minimum error metric value below which the iterative approach was interrupted. In figure 7, we show an example of phase retrieval using the region of the calculated diffraction pattern shown in figure 6(d), where the pixel values of the surface streak have been set to zero. These pixels were ignored in the algorithms when applying the constraints in reciprocal space. Figures 7(a) and (b) show the reconstructed object amplitude and phase in direct space, respectively. The results are to be compared with the expected amplitude and phase in direct space obtained from a finite region of the diffraction pattern by FFT⁻¹, shown in figures 6(e) and (f). We observe that the retrieved object closely resembles the expected one not only in terms of shape and size, but also in density homogeneity and phase changes within the island. As we can observe in figure 7(c), the final support obtained by the shrink-wrap method has adapted very well to the shape of the object and is only slightly larger than the reconstructed object, as expected [37]. We note that, unlike the convergence cases discussed above for a full calculated diffraction pattern (see e.g. section 3), in this case we do not expect a strict convergence due to the absence of a strict finite support in direct space for the original object: the density in figure 6(e) is not exactly equal to zero outside the object region, while in the algorithm we impose a support that is equal to zero in this region. Nevertheless, we prove here that phase changes within the island, related to the atomic displacements, can be reconstructed to a very good approximation.
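A common implementation of the shrink-wrap support update [37] is sketched below under the usual assumptions: the current object magnitude is blurred and thresholded at a fraction of its maximum, so that the support tightens around the object as the iterations proceed. The smoothing width and threshold are illustrative parameters, not those used here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shrink_wrap(g, sigma=3.0, threshold=0.2):
    """Update the support from the current object: blur the magnitude and keep
    the pixels above a fraction of its maximum [37]."""
    blurred = gaussian_filter(np.abs(g), sigma)
    return blurred > threshold * blurred.max()
```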
We have studied the effect of a reduced DR in the diffraction pattern of figure 6(d) used for the reconstruction. The results presented above correspond to the full DR given in the calculation (7.3 orders of magnitude). In general, results similar to the ones presented above were obtained down to a DR of 5 orders of magnitude. For DRs down to 4 and 3, the reconstructed size of the island is slightly larger than the expected one. Although the shape still resembles the original one, a slight asymmetry arises with respect to the horizontal direction in both amplitude and phase. As a consequence, phase changes are correctly retrieved in one half of the island (either the left side or the right side), but not on the corresponding opposite side. Such effects are to be expected in a reconstruction from experimental data, where DRs of 3 or 4 orders of magnitude can be measured, as shown in the next section. For these cases, knowledge of the exact support, i.e. the exact shape of the island, improves the reconstructions. This could be achieved with other complementary microscopy methods, such as scanning electron microscopy.
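The reduced-DR tests above can be emulated with a small helper that clips a calculated intensity pattern to a chosen number of decades below its maximum; setting the sub-threshold pixels to zero is our assumption of how the limited DR is modeled.

import numpy as np

def limit_dynamic_range(intensity, orders):
    # Keep only pixels within `orders` orders of magnitude of the peak intensity.
    floor = intensity.max() / 10.0 ** orders
    out = intensity.copy()
    out[out < floor] = 0.0
    return out

# e.g. limit_dynamic_range(pattern, 4) mimics experimental data with a DR of 4.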
Experimental state-of-the-art coherent microdiffraction
In the following, we will show an example of measured data for individual epitaxially grown SiGe islands on Si(001) of lateral base size 450 nm. As in the models shown before, the contribution of the substrate is present and partially measured; effects from a non-uniform illumination function (due to the extreme focusing) are also highlighted.
The great progress made lately in focusing x-ray optics has made the achievement of submicron beam sizes a standard at synchrotron sources. For such experiments, it is not only the small size (in the 100 nm range) that is important, but also the relatively large photon flux in the spot and its cleanliness (no side wings, a well-known illumination function, etc). Indeed, these experiments are not only photon hungry because of the small scattering volume of the probed sample (the volume of the individually probed crystallite), typically below 1 µm³, but also due to the spreading of the signal in reciprocal space (size effects): the locally measured intensity using an area detector will thus scale, for one pixel, with L⁶, L being the lateral size of the investigated nano-object; halving the island size, for instance, reduces the per-pixel signal by a factor of 64. Moreover, the direct-space resolution in CDI is determined by the total range measured in reciprocal space. Increasing the photon flux density impinging on the individual nanostructure is thus a mandatory condition for recording the data with reasonable statistics and dynamics. While in a 'classical' diffraction experiment using highly focused x-ray beams the spatial resolution is essentially given by the size of the spot used to investigate the sample, using CDI in combination with highly focused beams, a lateral resolution down to 10 nm or better can be achieved [38].
The results shown hereafter were obtained at the ID01 beamline at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. The sub-µm-sized x-ray beam was obtained using a circular (200 µm in diameter) Fresnel zone plate (FZP) with a 100 nm wide outermost zone [39], placed 129 mm upstream of the sample. Working at an x-ray energy of 8 keV, an x-ray spot size of 350 × 400 nm² (vertical × horizontal, FWHM) was obtained at the sample position. When illuminating the FZP using an aperture matching the transverse coherence length of the x-ray beam (80 × 20 µm²), an illumination of the sample with a high degree of coherence is ensured, with a measured photon flux of 1.5 × 10⁸ ph s⁻¹ in the resulting x-ray spot. This setup, schematically shown in figure 1(a), is similar to the one reported in [40]: the focused x-ray beam impinges onto the sample surface at an incident angle close to the Bragg condition characteristic of the SiGe island. If the beam fully illuminates the island, the expected scattered signal (and its distribution in reciprocal space) coming from such pyramidal shaped islands around the (004) Bragg position is shown in figure 1(b), and it can be calculated by means of a Fourier transform of the object electron density function, as explained in detail in section 2. This is also valid when the object is illuminated by a diffraction-limited focused beam exactly at the focal plane [38,40]. With the area detector (CCD) placed as shown in figure 1(a), the interception of the scattered signal with the detector is indicated by the dark plane shown in figure 1(b). In this geometry, the incident angle of the beam is scanned while recording a CCD image at each point. The resulting sequence can be reconstructed on a regular grid of the 3D reciprocal space, as shown in figure 8. The 3D RSM reconstruction shown here consists of 100 images taken with a Princeton CCD camera with 55 µm pixel size, placed 0.9 m downstream of the sample, spanning 0.5° of incident angle around the expected Bragg value for the island (2θ_SiGe = 69.55°). Typical exposure times were 20 to 120 s per image, and, where necessary, several frames were accumulated, depending on the statistics. All images were corrected for the incident beam intensity I_0 recorded by a beam monitor in front of the sample, for dark-image subtraction and for the flat field of the CCD. Figure 8(a) shows a 3D view of an iso-intensity surface of the scattered signal from a single pyramid island close to the (004) Bragg reflection characteristic of the SiGe island. In this view, the rather low-intensity signal and hence prominent background (noise in the signal) prevents many details from being seen, but the main features shown in the simulated image (see figure 1(b)) can already be easily identified. Many more details can be seen in the 2D cuts of the RSM along the high symmetry (110) planes (q_x q_z plane in figure 8(b) and q_y q_z plane in figure 8(c)). The agreement with the simulated images (see e.g. figure 2(c)) is rather good, since all the major features (111 facet streaks, interference maxima and minima, surface streak, etc) can be identified. In fact, the measured surface streak is the substrate's crystal truncation rod. A closer look at the cut along the scattering plane (figure 8(c)) shows a slight asymmetry in the signal, as simulated in previous calculations when assuming a Gaussian illumination of the island with the size of the focused beam matching the size of the island (see figure 5(a)).
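The per-frame corrections listed above (I_0 normalization, dark subtraction, CCD flat field) amount to a few array operations; the sketch below is illustrative, with array names assumed, and the corrected stack would subsequently be regridded onto a regular 3D reciprocal-space grid.

import numpy as np

def correct_frames(frames, monitor, dark, flat):
    # frames: (N, ny, nx) raw CCD images, one per incident angle
    # monitor: (N,) incident-beam intensity I_0; dark, flat: (ny, nx) maps
    corrected = (frames - dark[None, :, :]) / flat[None, :, :]
    corrected /= monitor[:, None, None]     # normalize to incident intensity
    return np.clip(corrected, 0.0, None)    # negative residuals are noise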
Such an effect does not appear if a beam larger than the object size is used, as in the case of the 3 µm base size pyramids presented in [13], for which symmetric RSMs were measured with an x-ray spot size about twice as large as the object size.
In figures 8(b) and (c), fringes arising from the finite size of the island are clearly visible, indicating that the oversampling condition is fulfilled. However, attempts to phase retrieve the data from the SiGe peak have not been successful so far. Apart from the noise in the data, failure is also partly due to the low q-range measured along the x- and y-directions, q_x,y ≲ 0.07 Å⁻¹. This corresponds to a real space resolution of about 10 nm (d ≈ 2π/Δq), which barely samples the 2π phase changes expected at the lower corners of an island of this size (2π phase variations every 25 nm). The measured q-range is limited by both photon flux and beam and sample stability, since a wider angular range would have to be scanned in order to perform the measurement. In fact, the enormous phase variations present in this island make convergence impossible even with calculated data having the appropriate reciprocal space range and full DR (∼7 orders of magnitude) when using the algorithm scheme presented in section 5.
The conditions mentioned above (spot size, photon flux, materials and area detector CCD) represent almost the extreme case for measuring (in reasonable times, in this case 2-5 h) the full 3D RSM signal with a coherent beam. Attempts to record similar data for smaller objects (150 nm base size, i.e. volumes about 30 times smaller) show that the feasibility of such measurements is limited by the time required for a full measurement (several days), which is hardly compatible with beam stability requirements. However, better stability conditions and the possibility to increase the coherent photon flux in a smaller spot size would enable the measurement of smaller SiGe nanostructures, for which calculated data have been proven to converge.
Conclusions and outlook
We have studied the feasibility of applying CDI to highly strained epitaxial nanostructures, using SiGe islands grown on an Si (001) substrate as a model system. The final aim is to retrieve the 3D atomic displacement field distribution within an island directly from its coherent diffraction patterns measured around several Bragg reflections by means of iterative phase retrieval algorithms. For this purpose, we have systematically studied the convergence of phase retrieval algorithms using calculated data from FEM calculations. In order to speed up the convergence, we have reduced the problem to two dimensions.
We have encountered major difficulties in the convergence of standard phase retrieval algorithms, caused by large phase variations (of the order of 2π and larger) within the object, even for strains of the order of 0.01, and by the absence of a finite support in the substrate. We have found two possible solutions that overcome the problem of a non-finite support: (i) using the knowledge of a finite illumination of the sample, which resembles the experimental case in which a focused beam is used, and (ii) taking the intensity contribution from the island only, which can be inverted to provide a good approximation of the displacement field within the island separated from the substrate. The latter case also overcomes the experimental problem of measuring an intensity distribution very close to a Bragg reflection of the substrate. However, the methods described here still fail for islands with base sizes larger than 140 nm.
On the other hand, a coherent micro-diffraction experiment has been presented on the very same SiGe epitaxial system. We have proven the possibility of recording the 3D coherent intensity distribution around the SiGe (004) Bragg reflection from an individual island of 450 nm base size. This measurement has been possible thanks to the efficient focusing of the available coherent flux of a third-generation x-ray source into a spot matching the size of the island. In order to obtain similar experimental data from 140 nm islands, the beam focus should match this size, and even better stability conditions have to be fulfilled for longer exposure times. With the upgrades of third-generation synchrotron sources and the free electron lasers being installed in the near future, the required brilliance might be reached, enabling CDI experiments on single strained epitaxial nanostructures. | 2019-04-18T13:07:15.613Z | 2010-03-01T00:00:00.000 | {
"year": 2010,
"sha1": "54582e74f00e54264e6ea56342fd26eaedc8d435",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/12/3/035006",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2c0261acb2429ac9c86157e37728400f350ff201",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
132443149 | pes2o/s2orc | v3-fos-license | Evaluation of Flooding Risk in Greater Dhaka District Using Satellite Data and Geomorphological Land Classification Map
Flood is a common feature in the rapidly urbanizing Dhaka city and its surrounding areas. In this research, a flood-risk evaluation of Greater Dhaka in Bangladesh has been developed using an integrated approach of GIS and remote sensing. The objective of the study is to measure the flooding risk based on satellite data and a geomorphological land classification map, in the context of land use/land cover change from 1995 to 2015 related to the urbanization of Dhaka city. By comparing each landform and land cover unit with historical rainfall data, the flood return period has been calculated. Terrace, natural levee and back swamp have each been divided into three sub-categories. In particular, the built-up zone closer to the river channels, the former river courses and the back swamps is most vulnerable to flood inundation. This study revealed that 70% of Greater Dhaka district lies within the moderate to very high hazard zone, especially surrounding-city areas such as Manikganj Sadar Upazila. It is expected that this study could contribute to effective flood forecasting, relief and emergency management for future flood events.
Introduction
Because of its unique geographic location, Bangladesh is one of the most disaster-prone countries in the world [1]. Among all natural disasters in Bangladesh, flood is the preeminent one. Every year a large portion of the country becomes flooded due to heavy rainfall and water spilling from the major rivers. The country lies in the downstream part of three major river basins, the Brahmaputra, Ganges and Meghna, and is thus frequently flooded. Floods of unusually large magnitude and long duration occur in the country, affecting the majority of the population of Dhaka city and severely disrupting socio-economic activities. Destructive floods recur in Bangladesh almost every year, including the very severe floods of 1987, 1988 and 1998 (data were missing for 1957, 1958, 1959, 1979 and 1981) (Figure 1) [2]. The 1988 flood set a new record for flooded area, while the 1998 flood was unprecedented in its long duration. The flood damage potential in Bangladesh is increasing due to climate change, urban concentration in the three river basins, and the encroachment of settlements into flood-prone areas [3].
Dhaka is the largest and most densely populated area in Bangladesh. Dhaka has become one of the fastest-growing cities in the world, primarily driven by explosive population growth. The city's population was 0.41 million in 1951 and 0.71 million in 1961. By 1974, it had risen to 2.06 million, averaging an annual growth rate of 11.15% between 1961 and 1974 [4] [5]. In 1981, the population rose to 3.44 million. The population reached around 6.48 million in 1991 and 9.6 million in 2001 [6] [7]. Recently, the population of Dhaka city has exceeded 14 million [8], with an average annual growth rate of 4.08% during 1991-2001 [9] [10], which outpaced the country's annual growth rate of 1.3% [11] [12]. In addition, the growing population of Dhaka city puts extra pressure on the low-lying agricultural land of the surrounding suburban areas [13].
Due to the specific geographical location of Dhaka city, flood is a common natural hazard [14]; moreover, urbanization has accelerated the degree of vulnerability to flood, particularly in recent years [15]. Dhaka has experienced many disastrous floods in the past, of which the 1988, 1998, 2004 and 2007 floods are said to be the worst on record [16]. The disastrous floods of 1988 and 1998 inundated areas of 164 and 200 km², respectively [17]. The unprecedented flood of 1998 also severely affected Dhaka and its neighboring areas [18], which resulted in unusual damage and countless sufferings to
the people. In total, the 1988 flood affected 4.55 million people [19], and most of Dhaka was under water to various depths for more than 8 weeks [20]. The hardest-hit sector was housing; nearly 262,000 houses of various types were damaged during the 1998 flood, worth USD 46.6 million (Taka 2.3 billion) [21]. In the 1998 flood, 1000 km of concrete roads were damaged [22]. The total loss incurred by the 1998 flood was estimated at about US$3000 million [19]. According to statistical data books on the 1988 and 1998 floods, the death toll was at least 150, and more than 2.2 million people were affected [23]. The number of institutions and houses affected by the 1988 flood was estimated at 14,000 and 400,000, respectively [24]. Although fluvial flooding did not inundate most of the embanked areas of Dhaka, losses due to fluvial flooding were alarming in 2004 [25] [26].
Several studies and maps concerning floods in Bangladesh have been produced before at the local and government levels. Ashraf M. Dewan (2007) described estimating flood hazard in the Greater Dhaka district zone by using remote sensing and GIS techniques, which essentially traces the flood hazard management strategies in greater Dhaka city [27]. M. Oya (1976) prepared the geomorphological map of the Brahmaputra-Jamuna River and Ganges River plain (1:1,000,000); this map was prepared by utilizing a mosaic of photographs from ERTS-I taken in 1972 as a base map. Utilizing the infrared photographs of the ERTS and field observation by helicopter, he classified each geomorphological unit of the Brahmaputra-Jamuna and Ganges flood plain. It was found that the Madhupur Forest Terrace, Barind Terrace and Tippera Surface were formed by upheaval, while the Sylhet Basin, Brahmaputra-Jamuna valley and Ganges plain were formed by ground subsidence. The alluvial plain consists of an alluvial fan formed by the Brahmaputra-Jamuna and old Brahmaputra River, and the natural levees, back swamps and delta formed mainly by the Ganges River; he also described the flood features of the landform units [28].
M. Masood (2012) described the vulnerability and risk of mid-eastern Dhaka by using a DEM and a 1D hydrodynamic model, presenting the flood risk and vulnerability derived from DEM data of the mid-eastern part of Dhaka [29]. R. Rahman (2013) wrote about flood risk and reduction approaches in Bangladesh and evaluated a partial flood control model for the monsoon season [30]. These reports and maps are not concerned with flood inundation mapping using satellite data and a geomorphological land classification map of the study area. Recognizing this situation, the present research has been conducted on Greater Dhaka district, with the major focus on generating flood inundation maps based on satellite data and the geomorphological land classification map, compared with land use change.
From the above discussion it is clear that no research has been done before in the Greater Dhaka district zone on the evaluation of flooding risk on the basis of a geomorphological land classification map. Considering this situation, the objective of the study is to evaluate the flooding risk based on satellite data and the geomorphological land classification map under the land use/land cover change from 1995 to 2015, in relation to the big flooding events (1988, 1998 and 2004) and the urbanization of the Greater Dhaka district zone.
Geographical Settings of the Study Area
The study area chosen for this research is the Greater Dhaka district of Bangladesh (Figure 2), with a total population of 18,305,671 [31]. The study area lies between 23°40'N and 23°55'N latitude and 90°20'E and 90°30'E longitude. It is bounded by the Buriganga River to the south, the Turag River to the west, the Tongi River to the north, the Balu River to the east and the Kaliganga River in the Manikganj district zone.
Greater Dhaka city is located mainly on an alluvial terrace, popularly known as the Madhupur terrace, formed in the Pleistocene period. Topographically, Dhaka city is relatively flat; the surface elevation of the city ranges between 1 and 14 meters [32]. It belongs to the sub-tropical monsoon zone and experiences humid climatic conditions. Dhaka city receives about 2000 mm of annual rainfall, of which more than 80 percent takes place during the monsoon. Historically, Dhaka city was built on a flood plain with numerous water bodies and canals that used to drain water from its upper reaches during the monsoon season. As the population increased, these areas were encroached upon. Moreover, unplanned urbanization, or urban sprawl, has been taking place since 1971 in the Greater Dhaka area, which has resulted in more people living in highly flood-vulnerable places [33] [34].
Data Source and Methodology
The data for generating the flood risk map of the study area have been collected from several sources. The U.S. Army topographic sheet (scale 1:250,000) of 1955 [35] and the geomorphological map of the Brahmaputra-Jamuna River and Ganges River plain (1:1,000,000) by M. Oya (1976) were used for the preparation of our base map and the Dhaka district land elevation map, and ASTER data of 30 m resolution for the preparation of the Digital Elevation Model (DEM) [36] were downloaded from the website (http://www.gdem.aster.erdac.or.jp/search.jsp).
Field observation data has been collected to measure ground control points with the ground verification data by the Geological Survey of Bangladesh (GSB).
Photographic elements and field knowledge were utilized to delineate various land use/land cover categories such as agricultural land use, built-up area, water surface, bare land and vegetation cover. Satellite data were interpreted using photographic and geotechnical elements besides field knowledge about the study area. For GIS and remote sensing data analysis, a time series of Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) images was used to derive land use/cover maps of the study area (Table 1). Satellite images were geometrically corrected with Global Positioning System (GPS) points. The images included the visible (bands 1, 2 and 3), near infrared (NIR), shortwave infrared (SWIR) and middle infrared (MIR) bands with 30 m spatial resolution for the TM and ETM+ images. The dataset was mainly downloaded from the Landsat archive (http://earthexplorer.usgs.gov).
After preprocessing the imagery, we performed supervised classification of both sets of imagery with the maximum likelihood classification algorithm in ERDAS IMAGINE 9.1, using the field data to produce five cover classes.
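For illustration, a minimal NumPy version of the maximum likelihood classifier (standing in for the ERDAS IMAGINE implementation) models each class as a multivariate Gaussian fitted to training pixels and assigns every pixel to the class with the highest log-likelihood; class names and band counts are assumptions.

import numpy as np

def train_gaussians(training):
    # training: dict mapping class name -> (n_pixels, n_bands) array of samples
    params = {}
    for name, X in training.items():
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        params[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify(pixels, params):
    # pixels: (n_pixels, n_bands); returns the most likely class per pixel
    names = list(params)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mean, inv_cov, logdet = params[name]
        d = pixels - mean
        # log-likelihood up to a constant: -(logdet + Mahalanobis distance) / 2
        scores[:, j] = -0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv_cov, d))
    return np.array(names)[scores.argmax(axis=1)]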
The area under each category of land use/land cover was calculated and computed in km² as well as in percentage. A comparative analysis of the land use/land cover maps was attempted to find out the changes during the 1995-2015 period by superimposing the two maps. The maps were then overlaid on the DEM to determine the correlation between elevation and land use/land cover (Figure 3). The cross-section profile (Figure 5) provides a side view of the relief of the terrain along a line drawn between two locations (C0 to C1) on the DEM map (Figure 4).
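The comparative analysis of the two classified maps can be expressed as a change (cross-tabulation) matrix; the sketch below assumes the 1995 and 2015 rasters are integer-coded arrays with classes 0..n-1 and uses the 30 m Landsat pixel size to convert pixel counts to km² and percentages.

import numpy as np

def change_matrix(map_1995, map_2015, n_classes, pixel_m=30.0):
    # Cross-tabulate the two classified rasters: entry (i, j) counts pixels
    # that changed from class i in 1995 to class j in 2015.
    codes = map_1995.ravel().astype(int) * n_classes + map_2015.ravel().astype(int)
    counts = np.bincount(codes, minlength=n_classes ** 2).reshape(n_classes, n_classes)
    km2 = counts * (pixel_m * pixel_m) / 1e6        # pixel counts -> km^2
    percent = 100.0 * counts / counts.sum()
    return km2, percent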
Superimposed Image of Digital Elevation Model and Land Cover Map
Unplanned urban expansion is one of the most important factors intensifying flood hazards in the Greater Dhaka district zone. Over the period from 1995 to 2015 (Figure 6 and Table 2), the major land-use change was caused by the increasing demand for non-agricultural land due to urban and infrastructure development. The decrease in agricultural land use was caused by the development of infrastructure and factories (in the Manikganj Sadar and Dhaka district zones), resulting in a significant decrease of agriculture from 1995 to 2015. A superimposed image of the Digital Elevation Model (DEM) and the land cover map was prepared (Figure 7) to identify the residential zones located in low-lying areas. From Figure 7 the relationship between the land use/land cover units and the various elevation ranges (Table 2) is also very clear.
By comparing the land use/land cover change maps and the geomorphological land classification map with the DEM data, the following results were obtained (points 1 and 2 appear with Figure 7 below). 3) Vegetation cover has increased substantially in the low elevation range between 2 and 4 m, at the periphery of dried-up water bodies and in almost all low-lying floodplain areas.
Return Period of Floods and Yearly Average Rainfall
Flooding due to rainfall is also a severe problem for certain city areas, which may be inundated for several days, mainly due to drainage congestion. The main reason for the 1998 flood was excessive rainfall over the catchment area of the Ganges-Brahmaputra-Meghna (GBM) river basin [37]. Excessive rainfall is reported in Dhaka and Manikganj Sadar from 1953 to 2015 (Figure 8) [38]. Monthly and annual rainfall records show that the amounts of precipitation during the flooding years (1988, 1998 and 2004) were considerably higher than the average rainfall. For example, rainfall was 250 mm higher in 1988 compared to its normal condition. Thus, the runoff generated by rainfall could not flow out to the surrounding rivers, since the water level of the river stage was also at its peak. The accumulated runoff in low-lying areas prolonged inundation and remained stagnant until the water level of the river stage receded. During the 1988, 1998 and 2004 flood events, the rainfall statistics show that rainfall in June was tremendously higher, while for the other three monsoon months it was considerably larger in the 1998 event. For example, 552 mm of rainfall was recorded for the month of August in 1998, which was 176 mm more than the normal rainfall, while in 1988 it was only 169 mm, which was even less than normal.
Flood Return Period Calculation from Yearly Precipitation Data
In this study, to calculate the return period of floods from yearly precipitation data the Hazen method has been applied [39], the probability, and the annual precipitations of concern, for the historical statistical data in recent 62 years.This method consisted in assembling the annual precipitations shown in Table 3 below.This procedure was done using the above equation, for the sample size of 62 years, by assigning ranges in ascending order, the precipitations, and the probabilities of occurrences and return periods for each year.These calculations are shown in Table 3 By using Hazen plotting position, the above graph (Figure 9) consisted in the plotting of the 62 year data values of precipitations (in cm) and return periods of occurrences on log-probability graph paper, where precipitation as a dependent variable (log annual precipitations in centimeters) and return period as a independent variables.Finally, using the least squares method, a regression line that fitted the data was drawn, for the purpose of interpolating or extrapolating any desired calculation.Figure 9 shows the graphical relationship of these two variables according to Table 3 data.
After the liberation war in 1971, the urban area has been expanding rapidly in the study area, and in recent decades the development has been more rapid than before.
Land Form and Flood Features in the Geomorphological Land Classification Map of Greater Dhaka District
In Figure 10, the geomorphic landform units are classified as follows. 1) Terrace: terraces have developed in the upper eastern part of the study area. According to the geological evolution of Bangladesh, the terraces were formed in the Quaternary period (Pleistocene epoch). The northern part of Dhaka city is located in the terrace zone. Based on the elevation range, the terrace has been divided into three types: higher, middle and lower. The higher terrace, with an elevation above 5.5 m, is mostly located in the upper part of the study area and is used for commercial activity. The middle terrace has an elevation between 4 m and 5.5 m. The lower terrace, with an elevation between 2.5 m and 4.0 m, is located in the lower part of Dhaka city; these zones are never influenced by normal flooding conditions, nor by river flood conditions. Within Dhaka city, however, rainfall floods have occurred during big rainfall events due to poor drainage conditions.
2) Natural levee: natural levees have developed along the river courses due to deposition during the monsoon period, especially in the tremendous flooding years (e.g. 1988, 1998 and 2004). In the study area, according to elevation, the natural levee has been divided into three types: higher, middle and lower. The higher natural levee, with an elevation between 5 m and 6 m, is mostly located in the upper part of the study area and is covered with human settlements and used for commercial activity. The middle natural levee, with an elevation between 4 m and 5 m, works as a natural embankment during normal floods but is submerged during extraordinary flood conditions. The lower natural levee, with an elevation between 2.5 m and 4 m, is located in the lower part of the study area and is submerged under normal flooding conditions.
3) Back swamp: back swamps are likewise divided into three types: higher, middle and lower. Back swamps are located between the natural levees in the Manikganj Sadar zone. In the tremendous flooding years (e.g. 1988, 1998 and 2004), the back swamps were inundated for long periods. In the higher natural levee zone, the depth of inundation is greater than in the lower natural levee zone, and severe flooding damage has occurred there. The period of inundation is more than three months. The higher back swamp, with an elevation between 1.5 m and 2 m, is mostly located in the upper part of the study area around the higher natural levee zone and is covered with human settlements and used for commercial activity. The middle back swamp has an elevation between 0.5 m and 1.5 m, and the lower back swamp, with an elevation below 0.5 m, is located in the southern part of the study area and is submerged under normal flooding conditions. In the higher back swamp zone, the flood return period is much longer compared to the middle and lower back swamp zones.
4) Former river course: the former river channel is usually a channel bed without water. The former river courses are located in the floodplain zone, especially in the Buriganga River and Kaliganga River areas, and are flooded almost every year under normal flood and rainfall conditions.
Conclusions
In this paper, to evaluate the risk of flood from 1995 to 2015 in the Greater Dhaka district of Bangladesh, land cover, elevation data, the topographic map and the geomorphic units were overlaid on each other. The study demonstrates an effective way to modify the collected DEM so that it represents the current topography, which is very helpful for identifying the various land cover and landform units. The objective of the geomorphological land classification map is to provide information related to flood inundation risk on the basis of the various landform units.
To find out the relationship between land cover and landform units, we compared them with each other, and the results of this paper are as follows. Urban development of Dhaka city and its surroundings was quite rapid from 1995 to 2015. Urban areas have spread into lowland areas such as floodplains and back swamps from 1995 to 2015. This is clearly reflected in the relationship between the urbanized area and the landforms. The results of this research reveal a relationship between land use/land cover change and geomorphology, indicating that the built-up areas have expanded onto landforms vulnerable to floods. Moreover, annual rainfall is another important factor, closely related to the flood return period of the different geomorphological landform units.
From the topographic map and the land cover map, a large number of settlement and built-up zones are located in the low-lying high hazard zones. Moreover, the number of settlements and commercial activities has increased in recent decades (by 20%) over low-lying agricultural land, putting extra pressure not only on Dhaka city [13] [40] but also on the surrounding suburban areas. From the Digital Elevation Model (DEM), changes in built-up area (20%) have occurred in almost all elevation ranges (1 m - 6 m). Agricultural land use associated with high elevation ranges has been converted mostly into built-up area and bare land, and at the same time low-elevation agricultural land has been converted into built-up zones to meet the demand for housing to accommodate population growth.
From the geomorphological land classification map, the northern part of Dhaka city is located in the terrace zone. Based on the elevation range, the terrace has been divided into three types: higher, middle and lower. The higher terrace, with an elevation above 5.5 m, is mostly located in the upper part of the study area and is used for commercial activity. The natural levee lies between 2 m and 5 m and has likewise been divided into three types. It sometimes works as a natural embankment during normal floods but is submerged during extraordinary flood conditions. The lower natural levee, with an elevation between 2.5 and 4.0 m, is located in the lower part of the study area and is submerged under normal flooding conditions. The back swamp has also been divided into three types: the higher back swamp, with an elevation between 1.5 m and 2 m, is mostly located in the upper part of the study area around the higher natural levee zone and is covered with human settlements and used for commercial activity. The middle back swamp has an elevation between 0.5 m and 1.5 m, and the lower back swamp, with an elevation below 0.5 m, is located in the southern part of the study area and is submerged under normal flooding conditions. In the back swamp zone, the period of inundation is more than three months. In the higher back swamp zone, the flood return period is much longer compared to the middle and lower back swamp zones.
The geomorphological landform units represent the current scenario of the study area. The map provides helpful information about flood risk zones and should be useful in assigning priority for the development of higher flood risk areas. Furthermore, this type of study will provide updated information about geomorphic landforms, which is relevant to flood protection measures such as the construction and development of infrastructure and preparedness for future flood events.
Figure 1 .
Figure 1. Historical flood-affected data of Bangladesh from 1954 to 2014.
Figure 2 .
Figure 2. Geographical location of the study area.
Figure 3 .
Figure 3. Methodology flowchart of the study.
Figure 4
Figure 4 shows a cross-section of the flood plain with different landform units. Elevation values between 0 and 6 m were used to separate the various landform units. From the cross-section profile, it can be seen that the areas from 0-4 m are highly vulnerable to submergence when floods occur. It is observed that land cover also has a close relation to flood hazard. The agricultural lands also belong to the areas submerged during flood time, corresponding to the 2 m elevation boundary line. Agricultural land is characterized by low-lying and well-irrigated areas. Figure 5 also illustrates the geomorphological landform units clearly.
Figure 5 .
Figure 5. Cross section profile of Digital Elevation Model (DEM).
Figure 6 .
Figure 6. Land cover maps of Greater Dhaka district from 1995 to 2015.
Agricultural land use decreased (−9%) in the study area because most of the built-up zone has developed around the capital city Dhaka, while the transportation network has developed at the same time. The built-up zone is composed of residential, commercial and industrial land uses. In 1995, the built-up zone covered 27% and expanded to 47% in 2015; this significant change occurred because of the conversion of agricultural infrastructure into urban infrastructure in the urban fringe zone. Commercial and industrial land use changes have also been observed with the growth of the area. Accelerated industrialization and urbanization following economic reforms, together with population increases, have greatly affected land cover change through the increase of built-up areas. The net bare land area decreased by 2% because, with the increase in population, the demand for food increased too; as a result, bare land has been converted to both agricultural and urban land use. The vegetation cover was 12% in 1995 and dropped to 8% in 2015. The decrease of vegetation cover was a result of the construction of residential, commercial and industrial zones to promote urban development. The remarkable change has occurred in the urban areas (increased by 20%), where residences developed through the expansion of the urban area around Dhaka city are extremely vulnerable to flooding.
1) From 1995 to 2015, major land use/land cover changes in agricultural land occurred in low-lying areas where the elevation is below 3 m. The increase in built-up areas is due to the shrinkage of agricultural land use and its transformation mostly into residential and commercial activity; moreover, this area is located in the low-lying floodplain zone. 2) Changes in built-up area (20%) have occurred across almost the whole elevation range between 1 m and 6 m. Agricultural land use associated with high elevation ranges has been converted into bare land, and at the same time low-elevation agricultural land has been converted into built-up zones to meet the demand for housing to accommodate population growth.
Figure 7 .
Figure 7. Superimposed land cover map and DEM of Greater Dhaka district.
Figure 8 .
Figure 8. Yearly average rainfall of Dhaka and Manikganj district from 1953 to 2015.
Figure 9 .
Figure 9. Yearly average precipitation of Dhaka and Manikganj district from 1953 to 2015.
In recent years, rapid urbanization has mainly taken place in low-lying areas around and within the city, which serve as back swamp and floodplain zones and are submerged during the flooding season. Every year, because of monsoon rainfall, the Greater Dhaka district zone faces serious drainage congestion, which is one of the important factors in the flood problems of Dhaka city. Due to the unplanned development of Dhaka city and the filling of natural channels, it has become very difficult for the artificial system to carry the vast amount of flood water to the surrounding rivers.
Figure 10 .
Figure 10. Geomorphological land classification map of Greater Dhaka district.
Table 1 .
Materials and methods.
Table 2 .
Land cover change from 1995 to 2015 of the study area with elevation range.
Table 3 .
Table showing ranges, annual precipitations, probabilities of occurrence and return periods for Dhaka and Manikganj Sadar in the period of 1953-2015. | 2019-04-26T14:23:37.006Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "fe575081e0331a2bb1dca2b11d4eb08b70fd68ba",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=70921",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "fe575081e0331a2bb1dca2b11d4eb08b70fd68ba",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geography"
]
} |
252392104 | pes2o/s2orc | v3-fos-license | Insomnia Prevalence and Associated Factors Among University Students in Saudi Arabia During the COVID-19 Pandemic and Lockdown: A Large-Scale Survey
Purpose The coronavirus disease 2019 (COVID-19) pandemic has many psychological and physical effects to which university students are vulnerable. We aimed in this study to assess the prevalence of insomnia among university students in Saudi Arabia during the COVID-19 pandemic and lockdown, and its associated factors. Patients and Methods We conducted a cross-sectional study using a questionnaire to collect the responses of 5140 students from Saudi universities between April 24 and 30, 2020. Responders completed demographic questions and psychological scales, including the Insomnia Severity Index (ISI), during the national lockdown period in Saudi Arabia. Results Approximately 41% of the sample suffered from moderate to severe insomnia. The mean ISI score was 12.9 (SD 6.62). Insomnia was associated with female sex, younger age, studying at a new university, being a junior student, having a relative who suffered from COVID-19, having a chronic medical illness, and having a psychiatric disorder. Insomnia was also associated with suicidal ideation. Conclusion Insomnia prevalence was very high among Saudi university students during the COVID-19 pandemic lockdown. There were sociodemographic and medical factors associated with the high insomnia prevalence. Universities need to plan and implement protective and intervention strategies to deal with this important issue.
The COVID-19 outbreak was declared a pandemic by the World Health Organization (WHO) soon thereafter. 1 The new coronavirus was distinguished by its fast spread and lethal outcomes. 2 Globally, COVID-19 has caused more than 6 million deaths, as reported by the WHO. 3 With the new pandemic and the lack of knowledge about treatment, management, or even vaccination, 2 many countries, especially during the early stage of the pandemic, exercised extraordinary measures to try to control the spread of the infection and protect their populations and health-care systems from collapse. They enforced quarantine and a general lockdown of non-essential activities, such as gyms, leisure venues, and shopping malls, and transferred schools and universities to online teaching. 1,4 The Kingdom of Saudi Arabia was one of the first countries to apply strict rules during the COVID-19 crisis. 5 Initially, all people were instructed to stay home, and schools and higher education were shifted to digital education. Many people were forced to work from home and, therefore, to sit for long hours at a desk. Going out was only permitted for essential shopping or medical emergencies. 6 COVID-19 has not only affected physical health but has also created fear of the infection and curfews, and such complications as lack of socialization and disturbances to general routines. 7 This has placed a new burden on mental health. 2 University students were also affected psychologically by the change to electronic teaching and the lack of clinical practice in some specialties, such as medical colleges. 8 This added another stressor for students, on top of the existing lifestyle stressors of high academic load, independence, and financial difficulties. 1,9 This was evident in a study of the COVID-19 pandemic in April 2020 in Switzerland, where student stress, anxiety, loneliness, and depressive symptoms worsened when compared to pre-crisis assessments. 9 A systematic review and meta-analysis of 89 studies regarding depressive symptoms, anxiety symptoms, and sleep disturbance in higher education students found the prevalence of these to be 34%, 32%, and 33%, respectively. 2 Another study assessing sleep among students and employees of a Swiss university during the pandemic found the prevalence of low sleep quality to be 44%. 4 Similarly, an online-based cross-sectional survey was used to examine 1521 students from Vietnamese universities, and fear and anxiety of COVID-19 were found to be substantially linked to psychological distress, life satisfaction, and sleep disturbance. 10 A large-scale study in Poland that included 1111 university students found that 58% had sleep difficulties, although this decreased to 21% when only moderate and severe insomnia on the Insomnia Severity Index (ISI) scale was taken into account, while the corresponding figure was 27% in Argentina. 11,12 In Saudi Arabia, a cross-sectional study was performed with 790 participants during the COVID-19 quarantine, and results showed that 55.5% had poor sleep quality and 54.4% had insomnia. 13 Additionally, being female and married were risk factors. Another cross-sectional online questionnaire evaluated depression, anxiety, stress, resilience, and insomnia among 582 undergraduate university students in Saudi Arabia using the ISI. The results showed that more than half of the students said they had trouble sleeping, and 1.4% said they were taking melatonin pills to help them sleep. Only 4.3% had a very mild sleep problem, whereas 16% had a mild sleep problem; 21.8%, a moderate sleep problem; 9.3%, a severe sleep problem; and 1.2%, a very severe sleep problem.
14 However, data regarding insomnia and sleep quality among students of different Saudi universities during the COVID-19 pandemic are lacking. To our knowledge, no study in Saudi Arabia has assessed insomnia prevalence among students of different universities during the national lockdown. We therefore aimed to assess sleep disturbances and related factors among university students in Saudi Arabia during the pandemic, and especially during the lockdown, to assist in the planning and implementation of protective and intervention strategies for students during this time.
Materials and Methods
Study Population and Sample
Inclusion criteria: 1. Being a university student in Saudi Arabia, 2. Able to communicate in the Arabic language, 3. Having access to the online survey. Exclusion criteria: 1. Not being a university student in Saudi Arabia, 2. Not reading the Arabic language, 3. Having no access to the internet and the survey. We used a convenience sampling strategy to send the link to students at Saudi universities, and data were collected from April 24-30, 2020. The sample size was calculated based on the assumption of a prevalence of insomnia symptoms of 33%, according to Deng et al. 2 We assumed a precision of 5%, a confidence interval of 95%, and an estimated accuracy of 5%. The minimum target sample size was 408 individuals, assuming a 20% non-response rate.
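The stated minimum sample size can be reproduced with the usual formula for estimating a prevalence, n = z²p(1 − p)/d², inflated by 20% for non-response; treating the inflation as a simple 1.2 multiplier is our assumption, chosen because it reproduces the reported 408.

import math

z, p, d = 1.96, 0.33, 0.05                     # 95% CI, assumed prevalence, precision
n = math.ceil(z ** 2 * p * (1 - p) / d ** 2)   # = 340
n_target = math.ceil(n * 1.20)                 # +20% non-response -> 408
print(n, n_target)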
Data Collection
Due to social distancing, no face-to-face interaction was allowed. The survey was a self-reported online survey delivered by data collectors who distributed the link to participants through emails, SMS, and WhatsApp messages. The first part of the survey covered demographic data, such as age, sex, grade point average (GPA), medical history, and mental health. In the Saudi higher education system (in universities), the year consists of two main semesters, each called an academic level. We used this variable instead of the academic year to be more accurate. To assess medical and mental diseases, we asked participants if they had been diagnosed (in the past or currently) with a chronic medical disease (eg, diabetes mellitus, hypertension, bronchial asthma, thyroid diseases or others). Also, we asked them if they had been diagnosed with any mental disorder, and if yes, to mention it. The second part was a valid and reliable Arabic version of the ISI, 15 which included seven questions. Permission was obtained from the author for its use. The questions were answered on a scale of 0 (none) to 4 (very severe). All seven answers were totaled, and the score categorized as: 0-7, no clinically significant insomnia; 8-14, subthreshold insomnia; 15-21, clinical insomnia (moderate severity); or 22-28, clinical insomnia (severe). We also used scales validated in Arabic, including the Patient Health Questionnaire 9 (PHQ9) and the Generalized Anxiety Disorder 7 (GAD7) scale, to assess depression and anxiety symptoms. More details can be found in our previous publication. 6 A pilot study was conducted with ten participants (not included in the sample) to estimate the time needed to complete the survey and to test logistics and readability.
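As a small illustration of the scoring rule just described, the helper below sums the seven item ratings (each 0-4) and maps the total onto the published ISI severity categories; the function name is ours.

def isi_category(item_scores):
    # item_scores: seven integers, each rated 0 (none) to 4 (very severe)
    total = sum(item_scores)
    if total <= 7:
        return total, "no clinically significant insomnia"
    if total <= 14:
        return total, "subthreshold insomnia"
    if total <= 21:
        return total, "clinical insomnia (moderate severity)"
    return total, "clinical insomnia (severe)"

print(isi_category([2, 3, 2, 1, 2, 2, 2]))  # (14, 'subthreshold insomnia')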
Data Analysis
To test for significant differences in ISI scores between groups, we used a Student's t-test or chi-square test, according to the type of variable. Spearman's coefficient was used to determine correlations between scale scores. The level of statistical significance was set at 0.05. As a final step, we used regression modeling to estimate how sleep disturbances can be predicted. First, we performed univariate logistic regression to obtain odds ratios, and then we performed a multivariable logistic regression including variables of known importance from the literature together with the statistically significant variables from the univariate logistic regression. The Statistical Package for the Social Sciences (SPSS), Version 21.0 (IBM Corp., Armonk, NY) was utilized to analyze the data. 16
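The two-step regression workflow can be sketched as follows using Python's statsmodels (the analysis itself was run in SPSS); data-frame and column names are illustrative.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_ors(df, outcome, predictors):
    # One logistic model per candidate predictor; collect OR and p-value.
    rows = []
    for var in predictors:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        rows.append((var, float(np.exp(fit.params[var])), float(fit.pvalues[var])))
    return pd.DataFrame(rows, columns=["variable", "odds_ratio", "p_value"])

def multivariable_ors(df, outcome, retained):
    # One model with all retained variables (significant or important a priori).
    fit = sm.Logit(df[outcome], sm.add_constant(df[retained])).fit(disp=0)
    return pd.DataFrame({"odds_ratio": np.exp(fit.params), "p_value": fit.pvalues})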
Ethical Considerations
The Institutional Review Board of King Saud University granted ethical permission (number E-20-4846). The study complied with the Declaration of Helsinki. Participant identities were kept anonymous because no identifying data were collected. We obtained informed consent from all participants, who were told about the study's goal and their right to withdraw at any moment without facing any obligations to the research team. The participants were not offered any incentives or rewards.
Results
Table 1 shows the sociodemographic characteristics of the study population. Out of 6338 participants, 5140 (81%) completed the questionnaire; approximately four-fifths of these were female (4146 participants, 80.66%). Our participants were on average 21.85 years old (SD 4.75). Only 477 (9.28%) of our participants were married, while 4600 (89.49%) were single; the remainder were widowed or divorced. Participants were students from 38 different Saudi universities. We grouped them into five groups according to their geographical location.
Insomnia and Associated Factors
In our sample, 2096 participants (40.8%) had moderate to severe insomnia (Table 2). Some sociodemographic factors showed statistically significant differences in mean ISI scores. Female students had more severe insomnia compared to males (mean ISI score 13.23 vs 11.54, respectively). Age and ISI scores were found to be negatively correlated. Students who lived in the eastern region had more severe insomnia compared to students in other regions. Moreover, we found that students with a very good or good GPA had worse insomnia compared to students with an excellent or acceptable GPA. Students who suffered from a chronic medical illness had higher mean ISI scores than those who did not. Lastly, students with mental illness had more severe insomnia compared to students who did not. Similarly, suicidal ideation was correlated with insomnia: the ISI total score and the last item of the Patient Health Questionnaire 9 (PHQ9) were found to be positively correlated (r [5138] = 0.28, p < 0.0001). Moreover, in the univariate logistic regression for insomnia, eleven of fourteen variables were statistically significant (Table 3), while in the multivariable logistic regression for insomnia, only seven of the ten variables included in the model were statistically significant (Table 4). These were female sex, being married, living in the eastern region, studying in scientific colleges, having a relative or acquaintance who contracted COVID-19, and having been diagnosed with a chronic medical or mental disorder.
Discussion
We aimed to determine the insomnia prevalence among university students in Saudi Arabia during the COVID-19 pandemic (especially during the national lockdown), as well as associated factors. Approximately 41% of the sample suffered from moderate to severe insomnia. Insomnia was associated with female sex, younger age, being single or divorced, living in the eastern region, studying at a new university, being a junior student, having a relative who suffered from COVID-19, having a chronic medical illness, and having a psychiatric disorder. Insomnia was also associated with suicidal ideation.
Approximately 41% of participants in our sample experienced moderate to severe insomnia. This percentage is higher than that in the pre-COVID-19 era, when a cross-sectional study of Saudi medical students from 2011 to 2012 found an insomnia prevalence of 33%. 17 The reason for this change can be explained by multiple factors. First, our study was done during the peak of COVID-19, while the other study covered a pre-COVID-19 period. Second, we used the ISI, while the previous study used the Pittsburgh Sleep Quality Index. Third, our study had a larger sample size (5140 compared to 320 in the previous study). Additionally, the previous study only included students in the final years of medical school (fourth, fifth, or sixth year), compared to our study, which included students from all years and colleges. This may affect the comparison, as it is well known that studying medicine can be stressful and a risk factor for insomnia. 17 On the other hand, a recent study was performed during the COVID-19 pandemic on a similar target population: 463 third- to fifth-year medical students and medical interns in a single university in Saudi Arabia, also using the ISI. A total of 162 (34.9%) participants had insomnia. 18 This difference could be explained by the different population (medical students vs university students). Also, the cutoff score was lower in that study, as they included subthreshold insomnia, whereas we included only moderate to severe insomnia. If we had included subthreshold insomnia, our prevalence would increase to 77%. Moreover, our study was done during the first month of lockdown, whereas that study was done months later. Our findings are similar to those of a local study where the moderate to severe insomnia prevalence rate was 32%, 14 but which did not specify when the data collection was done. We believe that the difference lies in whether the period fell during lockdown or not. Also, the sample size in that study was smaller (582 participants compared to ours), and 95% of the sample participants were from two universities only, while our sample included 38 universities. When compared to Saudi Arabian university students, our sample had a different sex ratio. Females make up 49.4% of Saudi university students, compared to 80% in our sample. 19 However, a local study found a similar percentage (73% female students among participants). 14 Our sample is similar to Saudi Arabian university students in terms of other demographic features. 19 We found that females were more susceptible to sleep disturbances than male participants. Female students had a 1.7 times higher risk of insomnia than male students, according to multivariable logistic regression analysis. Our findings were consistent with those of previous studies, eg, four studies found that female students had higher insomnia rates compared to males. 14,18,20,21 Female students appear to participate in research studies more than males. 14,22 The prevalence rate of insomnia in our study appeared to be double that of insomnia worldwide during COVID-19 among the public (4-22%). 23 This may be explained by the fact that university students have more stressors, especially during COVID-19, and that we collected our data during the exam season. Another reason may be that 80% of our sample participants were female, which may have led to a higher prevalence of insomnia compared to other studies. Conversely, a local study during the same period found an insomnia prevalence of 54% among the public.
13 The difference in findings may be explained by different assessment tools (ie, the ISI in our study compared to the Athens Sleep Questionnaire), and different populations (university students vs the public). An Italian study found that insomnia severity increased during the lockdown compared to the pre-pandemic era, which may explain the increased prevalence in our study. 24 Residing in the eastern region was a risk factor in our study, which can be explained by the fact that the first Saudi Arabian COVID-19 cases were detected in the eastern region. 25 Other risk factors, such as medical or psychiatric illnesses or a relative with COVID-19, align with the findings of other studies. 14,18,26 These three variables were statistically significant in the multivariable logistic regression, with odds ratios of 1.97, 3.21, and 1.61, respectively. Having a psychiatric disorder increased the insomnia risk threefold. This can be explained by the fact that insomnia is a criterion for many psychiatric disorders, and there is a high comorbidity rate between psychiatric disorders and insomnia disorder. 11 Having a higher GPA was a protective factor in the univariate logistic regression, but this was not the case in the multivariable logistic regression. We believe this could be due to less stress on these participants compared to those with lower GPAs. Also, senior-level students showed lower insomnia rates, which is similar to the findings of a large-scale study in China among university students. 27 This can be attributed to younger students being in a new environment and requiring more time to adapt. However, Alyoubi et al did find a difference in insomnia rates between older and younger students. 14 This difference was not statistically significant in the multivariable logistic regression. We found a link between insomnia and suicidal ideation among our sample, similar to other studies. 26,28 One study found that insomnia is a mediator between COVID-19 anxiety and suicide. 29 Another study found that insomnia is related to suicide directly and indirectly (as a risk factor for depression). 30 Another study on depression, stress and insomnia among medical students during COVID-19 found that 44% of the total variance in depression can be accounted for by the indirect effect of insomnia. 31 To the best of our knowledge, this is the first insomnia study in Saudi Arabia that has revealed such a high prevalence in this population. As our study shows high levels of insomnia among university students, psychoeducation and cognitive behavioral therapy for insomnia (CBTi) could be helpful, especially for high-risk groups. 32 There are some limitations in our study. First, we had a high female to male ratio (4:1) as a sample bias. Second, we acquired our sample through convenience sampling without randomization, which could lead to selection bias. Third, we did not have any past data from the same sample to compare to pre-COVID-19. Fourth, we used self-report questionnaires; however, structured clinical interviews are preferable for accurate diagnosis. We suggest that future studies collect longitudinal data to track the COVID-19 pandemic impact over time. We also strongly advise authorities (especially universities) to provide easy access to mental health guidance and counseling services for students. Telemedicine is one example of this, and telepsychiatry and teletherapy are both efficient and acceptable. 32
Conclusion
We found a high prevalence of insomnia among Saudi Arabian university students during COVID-19, with 41% of participants reporting moderate to severe insomnia. Female sex, living in the eastern region, having a medical or mental illness, having a family member who tested positive for COVID-19, and being a junior student were all risk factors. Universities and other stakeholders should pay closer attention to this critical issue and establish prevention and management strategies.
Data Sharing Statement
Data are available from the corresponding author upon reasonable request.
"year": 2022,
"sha1": "6ad915d9ee5cb986694ce850c423b2ff11a1a9c3",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a23e6252664573d5d9a56a8361c2b20c50d84841",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
Interleukin-6 and Asymmetric Dimethylarginine Are Associated with Platelet Activation after Percutaneous Angioplasty with Stent Implantation
Data linking in vivo platelet activation with inflammation and cardiovascular risk factors are scarce. Moreover, the interrelation between endothelial dysfunction as an early marker of atherosclerosis and platelet activation has not been studied so far. We therefore sought to investigate the associations of inflammation, endothelial dysfunction and cardiovascular risk factors with platelet activation and monocyte-platelet aggregate (MPA) formation in 330 patients undergoing angioplasty with stent implantation for atherosclerotic cardiovascular disease. P-selectin expression, activation of glycoprotein IIb/IIIa and MPA formation were determined by flow cytometry. Interleukin (IL)-6, high sensitivity C-reactive protein and asymmetric dimethylarginine (ADMA) were measured by commercially available assays. IL-6 was the only parameter that was independently associated with platelet P-selectin expression and activated GPIIb/IIIa as well as with leukocyte-platelet interaction in multivariate regression analysis (all p<0.05). ADMA was independently associated with GPIIb/IIIa activation (p<0.05). Patients with high IL-6 exhibited a significantly higher expression of P-selectin than patients with low IL-6 (p=0.001), whereas patients with high ADMA levels showed a more pronounced activation of GPIIb/IIIa than patients with low ADMA (p=0.003). In conclusion, IL-6 and ADMA are associated with platelet activation after percutaneous angioplasty with stent implantation. It remains to be established whether they are themselves prothrombotic and atherogenic or are merely surrogate markers for atherosclerosis with concomitant platelet activation.
Introduction
Detrimental platelet activation plays a pivotal role in the development of acute ischemic events [1]. Following atherosclerotic plaque rupture, platelets adhere to exposed subendothelial structures of the injured vessel wall and initiate clot formation, thereby leading to further platelet recruitment and activation with subsequent vessel occlusion. However, it has been shown that even patients with stable atherosclerosis exhibit higher levels of platelet activation than healthy individuals [2], and that the extent of platelet activation in these patients is a strong predictor of future ischemic events [3]. Since atherosclerosis is increasingly recognized as a chronic inflammatory disease, markers of inflammation as well as factors promoting plaque formation may be linked to the extent of platelet activation [4]. Indeed, previous studies reported an association of inflammation and cardiovascular risk factors with on-treatment platelet reactivity. It has been shown that patients with high levels of Interleukin (IL)-6 and C-reactive protein (CRP) exhibit a worse response to antiplatelet therapy with aspirin and clopidogrel [5][6][7][8].
Other studies revealed an inadequate response to antiplatelet therapy in patients with advanced age [9], obesity [10,11], diabetes [12,13] and chronic kidney disease [14,15]. However, most of these studies focused on agonist-inducible platelet reactivity. Consequently, data linking in vivo platelet activation with inflammation and cardiovascular risk factors are scarce. Moreover, the interrelation between endothelial dysfunction as an early marker of atherosclerosis and platelet activation has not been studied so far. We therefore sought to investigate the associations of inflammation, endothelial dysfunction and cardiovascular risk factors with platelet activation and monocyte-platelet aggregate (MPA) formation in patients undergoing angioplasty with stent implantation for cardiovascular disease.
Study Population
The study population comprised 330 patients undergoing angioplasty and stenting for atherosclerotic cardiovascular disease. Clinical and laboratory characteristics of the overall study population are given in Table 1.
Exclusion criteria were a known aspirin or thienopyridine intolerance (allergic reactions, gastrointestinal bleeding), therapy with vitamin K antagonists (warfarin, phenprocoumon, acenocoumarol), treatment with ticlopidine, dipyridamole or nonsteroidal anti-inflammatory drugs, a family or personal history of bleeding disorders, malignant paraproteinemias, myeloproliferative disorders or heparin-induced thrombocytopenia, severe hepatic failure, known qualitative defects in thrombocyte function, a major surgical procedure within one week before enrollment, a platelet count <100,000 or >450,000/μL and a hematocrit <30%.
The study protocol was approved by the Ethics Committee of the Medical University of Vienna in accordance with the Declaration of Helsinki and written informed consent was obtained from all study participants.
Blood sampling
Blood was drawn one day after the percutaneous intervention into 3.8% sodium citrate Vacuette tubes (Greiner Bio-One; 9 parts of whole blood, 1 part of sodium citrate 0.129 M/L) for whole blood flow cytometry and determination of asymmetric dimethylarginine (ADMA), and into serum tubes (Greiner Bio-One) for measurements of interleukin (IL)-6 and high sensitivity CRP (hsCRP), as previously described [16].
Measurement of interleukin (IL)-6 and high sensitivity C-reactive protein (hsCRP)
The IL-6 antigen levels were measured using the Elecsys IL-6 kit (Roche Diagnostics) on the electrochemiluminescence (ECL)-based COBAS e411 analyzer (Roche Diagnostics). The lower detection limit of this system is 1.5 pg/mL. The reported intra- and inter-assay coefficients of variation are typically lower than 6%. The hsCRP level was measured using fully automated particle-enhanced immunonephelometry (N High-Sensitivity CRP, Dade Behring, Marburg, Germany) on a Behring Nephelometer II (BN Systems, Orchard Park, NY).
Measurement of asymmetric dimethylarginine (ADMA)
ADMA levels were determined with a commercially available enzyme-linked immunosorbent assay (DLD Diagnostika, Hamburg, Germany) according to the manufacturer's instructions.
Determination of P-selectin expression and glycoprotein (GP) IIb/IIIa activation
The expression of P-selectin and the binding of the monoclonal antibody (mAb) PAC-1 to activated GPIIb/IIIa were determined in citrate-anticoagulated blood, as previously published [3]. In brief, whole blood was diluted in phosphate-buffered saline (PBS) to obtain 20 × 10³ platelets and incubated for 10 min. The platelet population was identified by staining with anti-CD42b (clone HIP1, allophycocyanin labeled; Becton Dickinson (BD), San Jose, CA, USA), and the expression of activated GPIIb/IIIa and P-selectin was determined by the binding of the mAbs PAC-1-fluorescein (BD) and anti-CD62p-phycoerythrin (PE; clone CLBThromb6; Immunotech, Beckman Coulter, Fullerton, CA, USA), respectively. After 15 min of incubation in the dark, the reaction was stopped by adding 500 μL PBS, and samples were acquired immediately on a FACSCalibur flow cytometer (BD) with excitation by an argon laser at 488 nm and a red diode laser at 635 nm at a rate of 200-600 events per second. Platelets were gated in a side scatter versus FL3 dot plot, and a total of 10,000 events were acquired within this gate. The gated events were further analyzed in FL-1 and FL-2 histograms for PAC-1 and P-selectin, respectively. Standard BD CaliBRITE beads were used for daily calibration of the cytometer.
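As an illustration of the quantification step implied by this gating scheme, the sketch below computes the percentage of positive events from per-event fluorescence intensities. The arrays and the threshold are hypothetical stand-ins, not the authors' data or software.

```python
# Minimal sketch (not the authors' analysis code) of how percent-positive
# activation markers are typically quantified after the gating described
# above. Simulated per-event FL-1 (PAC-1) and FL-2 (P-selectin) intensities
# stand in for exported flow cytometry data.
import numpy as np

rng = np.random.default_rng(0)
fl1_pac1 = rng.lognormal(mean=2.0, sigma=0.8, size=10_000)   # gated platelet events
fl2_cd62p = rng.lognormal(mean=1.5, sigma=0.9, size=10_000)

def percent_positive(intensities: np.ndarray, threshold: float) -> float:
    """Percentage of gated events whose fluorescence exceeds the threshold."""
    return 100.0 * np.mean(intensities > threshold)

print(percent_positive(fl1_pac1, threshold=30.0))   # % PAC-1 positive
print(percent_positive(fl2_cd62p, threshold=30.0))  # % P-selectin positive
```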
Determination of monocyte-platelet aggregate (MPA) formation

MPA formation was determined as previously described [17]. In brief, 100 μL of citrate-anticoagulated whole blood was stained with saturating concentrations of the following fluorochrome-conjugated mAbs: an allophycocyanin (APC)-labeled mAb for the constitutive platelet marker CD42b (glycoprotein Ib of the von Willebrand factor receptor complex), a PE-Cy5-labeled mAb for monocyte CD14 (endotoxin receptor) and corresponding isotype controls (all from BD). After 10 min of pre-incubation with antibodies in the dark at room temperature, samples were fixed and erythrolyzed with Optilyse B (Instrumentation Laboratories). Flow cytometry was performed on a FACSCalibur (BD) flow cytometer. Acquisition was stopped when 3000 CD14+ events had been acquired. Monocytes were identified by gating CD14+ events, and all additional analyses were performed on this population. The negative and positive delineators were determined by gating 2% background staining on the isotype control fluorescence. The percentage of MPAs, defined as the relative number of monocytes co-expressing the constitutive platelet marker CD42b (CD14+/CD42b+), was determined.
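A minimal sketch of the delineator logic described above: the positivity cutoff is set so that 2% of isotype-control events exceed it, and the MPA percentage is the fraction of CD14+ events above that cutoff on the CD42b channel. All values are hypothetical.

```python
# Illustrative sketch (assumed workflow, not the authors' code): set the
# CD42b positivity cutoff so that 2% of isotype-control events exceed it,
# then report the percentage of CD14+ monocytes that are CD42b+ (MPAs).
import numpy as np

rng = np.random.default_rng(1)
isotype_fl = rng.lognormal(0.5, 0.6, size=3_000)   # isotype control, CD14+ gate
cd42b_fl = rng.lognormal(1.2, 1.0, size=3_000)     # anti-CD42b, CD14+ gate

cutoff = np.quantile(isotype_fl, 0.98)             # 2% background above cutoff
mpa_percent = 100.0 * np.mean(cd42b_fl > cutoff)
print(f"MPA (CD14+/CD42b+): {mpa_percent:.1f}%")
```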
Statistical analysis
Statistical analysis was performed using the Statistical Package for the Social Sciences (IBM SPSS version 21, Armonk, New York, USA). The Kolmogorov-Smirnov test was used to test for normal distribution. Variables with a skewed distribution were log-transformed for regression analyses; after log transformation, these variables were normally distributed. Continuous variables are shown as median and interquartile range, and categorical variables are given as number (%). We performed Mann-Whitney U tests to detect differences in continuous variables. Univariate and multivariate linear regression analyses were used to assess the associations of inflammatory markers, ADMA and cardiovascular risk factors with platelet activation and MPA formation. Two-sided p-values <0.05 were considered statistically significant.
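The following sketch mirrors this pipeline in Python on hypothetical data (the study itself used SPSS): log-transformation of a skewed marker, a Mann-Whitney U comparison between groups, and a multivariable linear regression.

```python
# Hedged sketch of the analysis pipeline described above (hypothetical data,
# not the authors' SPSS code): log-transform a skewed predictor, compare two
# groups with a Mann-Whitney U test, and fit a multivariable linear model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "il6": rng.lognormal(2.7, 0.8, 200),          # skewed marker, pg/mL
    "age": rng.normal(65, 10, 200),
    "p_selectin": rng.normal(4.0, 1.2, 200),      # % positive (illustrative)
})
df["log_il6"] = np.log(df["il6"])                 # skewed -> log-transformed

# Group comparison: P-selectin in patients above vs below the IL-6 median
high = df["il6"] > df["il6"].median()
u_stat, p_val = stats.mannwhitneyu(df.loc[high, "p_selectin"],
                                   df.loc[~high, "p_selectin"])

# Multivariable linear regression (OLS) with log IL-6 and age as predictors
X = sm.add_constant(df[["log_il6", "age"]])
model = sm.OLS(df["p_selectin"], X).fit()
print(p_val, model.pvalues["log_il6"])
```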
Results
In univariate analyses, IL-6 and hsCRP were significantly associated with P-selectin expression; IL-6, ADMA, age, platelet count, white blood cell count (WBC), and serum creatinine were significantly associated with activated GPIIb/IIIa; IL-6, platelet count and female sex were significantly associated with MPA formation (all p<0.05).
The associations of age, sex, body mass index, hypertension, hyperlipidemia, diabetes, active smoking, platelet count, WBC, IL-6, hsCRP, serum creatinine and ADMA with P-selectin expression, GPIIb/IIIa activation and MPA formation were estimated in a multivariate linear regression model. Thereby, IL-6 was the only parameter which was independently associated with both parameters of platelet activation (P-selectin expression and activated GPIIb/IIIa) as well as with leukocyte-platelet interaction (all p<0.05; Table 2). ADMA was independently associated with activation of the fibrinogen receptor GPIIb/IIIa (p = 0.02), whereas the platelet count (p<0.001) and active smoking (p = 0.04) were independently linked to MPA formation (Table 2). P-selectin expression, activated GPIIb/IIIa and MPA formation did not differ significantly between clopidogrel- and prasugrel-treated patients (all p>0.1). An additional multivariate regression analysis including only clopidogrel-treated patients did not change the results.

All patients treated with peripheral angioplasty had intermittent claudication. Among the patients treated with coronary angioplasty (n = 121; 36.7%), 44 (36.4%), 41 (33.9%) and 36 (29.7%) had stable angina, unstable angina/non ST-segment elevation myocardial infarction (UA/NSTEMI) and ST-segment elevation myocardial infarction (STEMI), respectively. As expected, patients with an acute coronary syndrome (ACS; UA/NSTEMI or STEMI) had significantly higher levels of hsCRP than patients without ACS (median [interquartile range]: 1.56 mg/dl [0.45-4.23 mg/dl] vs. 0.74 mg/dl [0.31-1.52 mg/dl]; p<0.001). Levels of IL-6 and ADMA, platelet activation parameters (P-selectin, activated GPIIb/IIIa) and MPA formation did not differ significantly between patients with and without ACS (all p>0.05). Adjustment for ACS in the multivariate linear regression model did not change the results.
In a second step, IL-6 levels above the median (>15.74 pg/mL) were defined as high IL-6 and IL-6 levels at or below the median (≤15.74 pg/mL) were defined as low IL-6. The platelet count did not differ significantly between patients with high and low IL-6 (p = 0.8). Patients with high IL-6 exhibited a significantly higher platelet surface expression of P-selectin than patients with low IL-6 (p = 0.001), whereas patients with high ADMA showed a more pronounced activation of GPIIb/IIIa than patients with low ADMA (p = 0.003). Age, female sex, active smoking, WBC, and serum creatinine were independently associated with high IL-6, while none of the tested patient characteristics was independently associated with high ADMA (Table 3).
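Table 3 reports a multivariate model for the binary outcomes high IL-6 and high ADMA; a logistic regression is one plausible reading of that analysis. The sketch below illustrates it on hypothetical data; the variable names and simulated values are assumptions, not the study dataset.

```python
# Hedged sketch (hypothetical data; the binary-outcome model is an assumed
# reading of Table 3): regress "high IL-6" (above the 15.74 pg/mL median)
# on patient characteristics, mirroring the association analysis above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 330
df = pd.DataFrame({
    "il6": rng.lognormal(2.7, 0.8, n),
    "age": rng.normal(65, 10, n),
    "female": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "wbc": rng.normal(7.5, 2.0, n),
})
df["high_il6"] = (df["il6"] > df["il6"].median()).astype(int)

X = sm.add_constant(df[["age", "female", "smoking", "wbc"]])
fit = sm.Logit(df["high_il6"], X).fit(disp=0)
print(np.exp(fit.params))          # odds ratios per predictor
print(fit.conf_int())              # confidence intervals (log-odds scale)
```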
Discussion
We found significant associations of IL-6 with in vivo P-selectin expression and activation of the fibrinogen receptor GPIIb/IIIa. Moreover, the extent of MPA formation was independently linked to IL-6, suggesting that inflammation increases not only platelet activation but also leukocyte-platelet interaction following angioplasty with stent implantation. ADMA, as a marker of endothelial dysfunction, was significantly associated with activated GPIIb/IIIa. Patients with high IL-6 showed a significantly higher expression of platelet P-selectin, whereas patients with high ADMA exhibited a more pronounced expression of activated GPIIb/IIIa.

Upon platelet activation, P-selectin is released from alpha granules and expressed on the platelet surface. Likewise, the fibrinogen binding site on GPIIb/IIIa becomes exposed [18]. While both P-selectin and activated GPIIb/IIIa are sensitive markers of platelet activation, they represent different properties of activated platelets. Platelet P-selectin is the major ligand for the P-selectin glycoprotein ligand-1 receptor on leukocytes, and mediates the binding of activated platelets to leukocytes [19]. The resulting leukocyte-platelet aggregates can be considered a surrogate marker for platelet activation, and were shown to be elevated in several pathophysiological circumstances, including myocardial infarction [20]. On the other hand, activated GPIIb/IIIa binds plasma fibrinogen and thereby facilitates platelet-platelet interactions.
In our study, we assessed P-selectin expression, activated GPIIb/IIIa and MPA formation without the addition of platelet agonists (i.e., in vivo). Since clopidogrel and prasugrel affect mainly adenosine diphosphate (ADP)-inducible platelet activation, these parameters should be independent of the type of ADP receptor antagonist. Therefore, we decided to include patients on clopidogrel as well as patients on prasugrel therapy. Indeed, P-selectin expression, activated GPIIb/IIIa and MPA formation did not differ significantly between clopidogrel- and prasugrel-treated patients. Nevertheless, we performed an additional analysis including only clopidogrel-treated patients; this did not change the results.
Previous studies reported a worse response to antiplatelet therapy with aspirin and clopidogrel in patients with increased markers of inflammation [5][6][7][8]. In detail, IL-6 was found to be an independent predictor of on-treatment residual platelet reactivity in response to arachidonic acid (AA) by light transmission aggregometry (LTA) and of urinary 11-dehydro-thromboxane B2 (D-TXB2) levels [5]. Moreover, hsCRP levels were independent predictors of platelet reactivity as determined by LTA, D-TXB2, the Impact-R and serum thromboxane B2 [5]. Other studies identified IL-6, CRP, WBC and RANTES as independent predictors of on-treatment platelet reactivity to AA and adenosine diphosphate by multiple electrode platelet aggregometry [6][7][8]. However, all of these studies assessed only agonist-inducible platelet reactivity. Consequently, data on the association between inflammation and in vivo platelet activation have been lacking so far. Our findings suggest that the poor response to antiplatelet therapy in patients with increased inflammatory markers may at least in part derive from increased platelet activation in vivo.
In a previous publication, supramedian IL-6 levels were independently associated with significantly higher levels of arachidonic acid-inducible platelet reactivity in patients undergoing angioplasty and stenting [5]. Therefore, we decided to use the median as cut-off value for high IL-6 levels.
In our study, only IL-6 was independently associated with both parameters of platelet activation and MPA formation. Other markers of inflammation, i.e. hsCRP and WBC, were not linked to P-selectin expression, activated GPIIb/IIIa and MPA formation. This finding suggests that IL-6 itself may contribute to platelet activation and leukocyte-platelet interaction in atherosclerotic cardiovascular disease. However, it remains to be established whether IL-6 fosters platelet activation, MPA formation and the development of atherosclerotic plaques or is just a surrogate marker for already existing atherosclerosis with ongoing platelet activation.
Upon activation, platelets release more than 300 different bioactive proteins [21]. To the best of our knowledge, IL-6 has not been reported to be among these platelet releasates, but there is indirect evidence that platelets have the complete machinery to produce IL-6 [22]. Further, a recent study in mice with dextran sodium sulfate (DSS)-induced colonic inflammation found that the treatment of wild-type mice with DSS significantly increased GPIIb/IIIa activation and leukocyte-platelet aggregate formation [23]. In contrast, these platelet responses to DSS were not observed in IL-6-deficient mice. Moreover, chronic IL-6 infusion in wild-type mice reproduced the platelet abnormalities observed in DSS-colitic mice, and IL-6-infused mice also exhibited accelerated thrombus formation in their arterioles. In another study, the infusion of IL-6 in normal dogs resulted in an enhanced sensitivity of their platelets to activation with thrombin and platelet-activating factor [24].

Table 3. Regression coefficients (B), confidence intervals (CI), and p-values (p) of multivariate regression analysis of age, sex, body mass index (BMI), hypertension, hyperlipidemia, diabetes, active smoking, platelet count, white blood cell count (WBC), and log-transformed serum creatinine (log creatinine) for high interleukin-6 levels (high IL-6) and high asymmetric dimethylarginine (high ADMA).

It has been reported that in vitro IL-6 itself does not induce platelet P-selectin expression or aggregation [25]. In contrast, Oleksowicz et al. reported that the incubation of human platelets with IL-6 increased the expression of P-selectin as detected by flow cytometry, as well as spheroid and dendritic platelet forms in electron microscopy [26]. Further, they observed an increase in platelet ATP levels after both 1 min and 1 hour of IL-6 incubation. Finally, they demonstrated a significant reduction in dense granules in high-dose IL-6 incubations by transmission electron microscopy. In a different study, the same group reported that platelet-rich plasma incubated with IL-6 showed a dose-dependent enhancement of agonist-inducible maximal aggregation and secretion of thromboxane B2 [27]. The discrepancy between these observations may in part be explained by the findings that activated platelets release the soluble IL-6 receptor (sIL-6R), which, in the presence of IL-6, may induce IL-6 trans-signalling, leading to an autocrine activation loop, as evidenced by an increase in gp80 and gp130 content [25]. Recently, high levels of IL-6 were associated with early and late stent thrombosis following percutaneous coronary intervention [28]. Altogether, these findings support the role of IL-6 as a mediator or even initiator of platelet activation and MPA formation.
ADMA is an endogenous competitive inhibitor of nitric oxide (NO) synthase. It decreases plasma NO levels and is considered a surrogate marker for endothelial dysfunction. Previously, ADMA was shown to predict cardiovascular and all-cause mortality in patients with angiographic coronary artery disease [29]. In the current study, ADMA was independently associated with activation of the fibrinogen receptor GPIIb/IIIa, and high ADMA levels were linked to a more pronounced expression of activated GPIIb/IIIa. These findings suggest that the interplay of the impaired endothelium with platelets, which are supposed to seal any damage, induces particularly the activation of GPIIb/IIIa, possibly to recruit further platelets from the blood stream.
A higher platelet count was independently associated with a more pronounced formation of MPA. This may be due to the higher number of platelets expressing P-selectin and other cellular adhesion molecules required for the interaction with leukocytes.
A limitation of our study is the lack of clinical outcome data. Moreover, blood sampling was performed one day after the percutaneous procedure, which may affect the extent of platelet activation as well as levels of inflammatory markers.
In conclusion, IL-6 and ADMA are independently associated with platelet activation after percutaneous angioplasty with stent implantation. It remains to be established whether they are themselves prothrombotic and atherogenic or are merely surrogate markers for atherosclerosis with concomitant platelet activation.
"year": 2015,
"sha1": "d56670c74aeec14e0f131ade4f2ce7496195125e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0122586&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d56670c74aeec14e0f131ade4f2ce7496195125e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The rs216009 single-nucleotide polymorphism of the CACNA1C gene is associated with phantom tooth pain
Phantom tooth pain (PTP) is a rare and specific neuropathic pain that occurs after pulpectomy and tooth extraction, but its cause is not understood. We hypothesized that there is a genetic contribution to PTP. The present study focused on the CACNA1C gene, which encodes the α1C subunit of the Cav1.2 L-type Ca2+ channel (LTCC) that has been reported to be associated with neuropathic pain in previous studies. We investigated genetic polymorphisms that contribute to PTP. We statistically examined the association between genetic polymorphisms and PTP vulnerability in 33 patients with PTP and 118 patients without PTP but with pain or dysesthesia in the orofacial region. From within and around the CACNA1C gene, 155 polymorphisms were selected and analyzed for associations with clinical data. We found that the rs216009 single-nucleotide polymorphism (SNP) of the CACNA1C gene in the recessive model was significantly associated with the vulnerability to PTP. Homozygote carriers of the minor C allele of rs216009 had a higher rate of PTP. Nociceptive transmission in neuropathic pain has been reported to involve Ca2+ influx from LTCCs, and the rs216009 polymorphism may be involved in CACNA1C expression, which regulates intracellular Ca2+ levels, leading to the vulnerability to PTP. Furthermore, psychological factors may lead to the development of PTP by modulating the descending pain inhibitory system. Altogether, homozygous C-allele carriers of the rs216009 SNP were more likely to be vulnerable to PTP, possibly through the regulation of intracellular Ca2+ levels and affective pain systems, such as those that mediate fear memory recall.
Introduction
Advanced dental caries causes infection of the dental pulp, which requires removal of the pulp (i.e., pulpectomy). There is usually no residual pain after pulpectomy, but pain can occasionally occur. Pain may also occur in the same area after tooth extraction. This rare pain that occurs after pulpectomy or tooth extraction is known as phantom tooth pain (PTP), a specific type of neuropathic pain. 1 Although Melzack's neuromatrix theory may be relevant because PTP shares essentially similar characteristics with phantom limb pain after limb amputation, 1 the cause of PTP remains unclear. For phantom limb pain, genetic factors have been reported in animal studies. 1 For PTP in humans, Soeda et al. showed that the rs735055 single-nucleotide polymorphism (SNP) of the SLC17A9 gene and the rs3732759 SNP of the P2RY12 gene are associated with the development of PTP, 2 but little is known about other genetic factors.
Nociceptive stimuli and neuropathy generate pain through various receptors and ion channels. One such channel is the calcium (Ca2+) channel. Pain stimulation increases intracellular Ca2+, resulting in intracellular signaling. The generation and transmission of pain involve the action of voltage-dependent Ca2+ channels (VDCCs). VDCCs are classified into two main types: high voltage-activated and low voltage-activated. High voltage-activated VDCCs consist of a heterotetramer composed of α1, α2δ, β, and γ subunits. The α1 subunit protein is encoded by 10 different genes that are classified into L-type (Cav1), P/Q-type (Cav2.1), N-type (Cav2.2), R-type (Cav2.3), and T-type (Cav3) channels according to their specific characteristics. L-type Ca2+ channels (LTCCs) are known to regulate the activity of transcription factors by working in concert with enzymes involved in phosphorylation (which is important for gene expression), the contraction of skeletal, cardiac, and smooth muscles, and the release of hormones and neurotransmitters. [3][4][5] Central sensitization may be maintained for several days after the pain stimulus has ceased, which is associated with symptoms of allodynia. 6 Two LTCCs, Cav1.2 and Cav1.3, are present in the dorsal horn of the spinal cord. Cav1.2 channels play a minor role in central sensitization, but they are known to be associated with neuropathic pain through the modulation of gene expression driven by intracellular Ca2+ influx.
Most patients with PTP meet the criteria for somatoform pain disorders in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), and are often referred to as having nociplastic pain. 1 Human genetic variants of the CACNA1C gene, which encodes the Cav1.2 α1C subunit protein (CACNA1C), are widely associated with a higher risk of neuropsychiatric disorders, including depression, bipolar disorder, and schizophrenia. 7,8 Cav1.2 LTCCs have also been reported to be involved in affective pain systems, such as social fear learning and fear memory recall, in the anterior cingulate cortex (ACC). 9 Based on these reports, we hypothesized that the CACNA1C gene in the ACC region is involved in the development of PTP through psychogenic factors.
We postulated that the genetic cause of neuropathic PTP may involve SNPs of the CACNA1C gene. We statistically analyzed differences in gene polymorphism frequencies between patients with PTP (i.e., neuropathic pain in the oral and maxillofacial regions) and other patients without PTP but with pain or dysesthesia in the orofacial region (i.e., orofacial pain [OFP]). The results showed a significant association between the rs216009 SNP of CACNA1C and the vulnerability to PTP.
Patients
The present study was approved by the Ethics Committees of Tokyo Dental College and Tokyo Metropolitan Institute of Medical Science (approval no. 810 and 20-45, respectively). The study was performed in accordance with provisions of the Declaration of Helsinki. All subjects provided written informed consent for the genetics studies.
The study enrolled 33 PTP patients (26-74 years old) and 118 patients without PTP but with pain or dysesthesia in the orofacial region (OFP; 23-89 years old) who visited Tokyo Dental College Suidobashi Hospital from May 2007 to November 2019. The patients were classified as having traumatic trigeminal neuropathy, trigeminal neuralgia, postherpetic neuralgia, neuralgia-inducing cavitational osteonecrosis, or nociplastic pain based on the International Classification of Orofacial Pain, 1st edition (ICOP), 10 and the International Statistical Classification of Diseases and Related Health Problems, 11th revision (ICD-11). 11 Sixty patients had traumatic trigeminal neuropathy (39 without pain, 21 with pain), 11 patients had trigeminal neuralgia, 17 patients had postherpetic neuralgia, 12 patients had neuralgia-inducing cavitational osteonecrosis (NICO), and 18 patients had nociplastic pain. The distinction between nociplastic pain and neuropathic pain remains unclear, and the classification is therefore ambiguous even in the ICD-11 and ICOP. 10,11 The following definitions were added in this study to clarify these differences. The following items were applied as diagnostic criteria for PTP: (1) allodynia in the surrounding gingiva after pulpectomy, with pain that is unresponsive to local infiltration anesthesia, and (2) residual pain after tooth extraction despite good healing of the mucosa covering the extraction site, with allodynia and pain that is unresponsive to local infiltration anesthesia. 2
Genotyping and linkage disequilibrium analysis
We examined SNPs of the CACNA1C gene. The genotype data from the whole-genome genotyping of 151 patients with PTP or OFP were used to analyze 303 SNPs within and around the CACNA1C gene region (including 10 kilobase pairs [kbp] upstream and downstream). Genomic DNA was extracted from whole blood samples using standard procedures and dissolved in TE buffer (10 mM Tris-HCl and 1 mM ethylenediaminetetraacetic acid, pH 8.0). Whole-genome genotyping was performed after measuring DNA concentrations using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Tokyo, Japan), with the concentration adjusted to 100 ng/μl. Whole-genome genotyping was performed using the Infinium Assay II and iScan system (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. Infinium Asian Screening Array-24 v1.0 BeadChips (total markers: 659,184) were used for genotyping in the genetic analysis. The BeadChips also contain several probes specific to copy number variation markers, but most target SNP markers on human autosomal or sex chromosomes.

Data from whole-genome-genotyped samples were extracted and analyzed using GenomeStudio 2.0 with Genotyping module v3.3.7 to assess the quality of the results for SNPs within and around the CACNA1C gene region. To include flanking regions of the gene, SNPs were selected within a range of 10 kbp upstream and downstream of the CACNA1C gene region. During data cleaning, samples with genotyping rates less than 0.95 were to be excluded from further analysis; as a result, no samples were excluded. Markers with genotype call frequencies less than 0.95 or "cluster segregation" (a measure of genotype cluster segregation) less than 0.1 were excluded from the subsequent association studies. A total of 303 SNP markers survived the filtering process for the region investigated.

Linkage disequilibrium (LD) analysis was performed on the 303 SNPs in the CACNA1C gene region on the SNP array. The 148 SNPs with minor allele frequencies less than 0.05 were excluded from the LD analysis, and the remaining 155 SNPs were employed for further analysis. To estimate LD intensity between SNPs, the commonly used D' and r² values were calculated pairwise using the genotype dataset for each SNP. LD blocks were defined among SNPs that showed "strong LD" based on the default algorithm of Gabriel et al., with an upper limit of 0.98 and a lower limit of 0.7 for the 95% confidence interval of D' indicating strong LD. TagSNPs in the LD blocks were determined using the Tagger software package incorporated in Haploview, as detailed in a previous report. 12
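As a rough illustration of pairwise LD estimation (the study itself used Haploview), the squared Pearson correlation of unphased allele dosages is a common quick approximation of r². The genotypes below are hypothetical; Haploview's D'/r² values are computed from EM-estimated haplotype frequencies, which this sketch does not attempt.

```python
# Hedged sketch (not the Haploview pipeline used in the study): approximate
# pairwise LD r^2 as the squared Pearson correlation of minor-allele
# dosages (0/1/2) across individuals. The simulated genotypes stand in for
# the 151 patients' data.
import numpy as np

rng = np.random.default_rng(4)
snp_a = rng.integers(0, 3, size=151)                            # dosage at SNP A
snp_b = np.clip(snp_a + rng.integers(-1, 2, size=151), 0, 2)    # correlated SNP B

def dosage_r2(a: np.ndarray, b: np.ndarray) -> float:
    """Squared Pearson correlation of allele dosages (composite LD r^2)."""
    r = np.corrcoef(a, b)[0, 1]
    return float(r ** 2)

print(dosage_r2(snp_a, snp_b))
```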
Statistical analysis
For all genotype frequency data, deviations from the theoretical Hardy-Weinberg equilibrium distribution were examined, and χ² tests were performed to analyze associations with clinical data for PTP. The χ² tests were performed using SPSS 28 software (IBM Japan, Tokyo, Japan). For all statistical tests, the criterion for significance was p < .05. Bonferroni correction for multiple comparisons was performed for the 155 SNPs in the genotypic, dominant, and recessive models for each minor allele.
CACNA1C gene rs216009 SNP was associated with PTP
We focused on the CACNA1C gene, which is involved in neuropathic pain. The SNPs within and around the CACNA1C gene extracted from the whole-genome genotyping data of PTP and OFP patients and the subsequent LD analysis resulted in the selection of 73 TagSNPs and a total of 29 LD blocks. D' and r² values are presented in Supplemental Table S1. A schematic diagram of the CACNA1C gene and r² values is presented in Figure 1. The genotype data consisted of three genotypes. In the genotypic and dominant models, the results were not significant for any SNPs (corrected p > .05) after Bonferroni correction for multiple comparisons (Supplemental Tables S2, S3). In the recessive model, the results were significant only for the rs216009 SNP (corrected p = 4.1 × 10⁻²) after Bonferroni correction for multiple comparisons (Table 1). The rs216009 SNP did not deviate from theoretical Hardy-Weinberg equilibrium (Supplemental Table S4). Based on these results, the association between the rs216009 SNP of the CACNA1C gene and PTP was significant in the recessive model. An enlarged view of the area around the rs216009 polymorphism in Figure 1 is shown in Supplemental Figure S1. There was a higher rate of C-allele homozygote carriers in the PTP group than in the OFP group (PTP: CC/total = 39% [13/33]; OFP: CC/total = 12% [14/118]). C-allele homozygote carriers had a higher incidence of PTP, suggesting that the C allele of the rs216009 SNP of the CACNA1C gene is associated with a higher risk of PTP in an autosomal recessive manner.
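The reported recessive-model result can be illustrated with the genotype counts above (CC vs. TT+TC: 13/33 in PTP, 14/118 in OFP). The sketch below is not the authors' SPSS output; it assumes a Pearson χ² test without continuity correction, which reproduces the reported corrected p of about 4.1 × 10⁻² after multiplying by the 155 SNPs tested.

```python
# Recessive-model chi-square test for rs216009 (CC vs. TT+TC), using the
# counts reported above, with Bonferroni correction for the 155 SNPs tested.
# Assumes Pearson chi-square without Yates continuity correction; this
# choice reproduces the reported corrected p of ~4.1e-2.
import numpy as np
from scipy.stats import chi2_contingency

#                   CC   TT+TC
table = np.array([[13,  20],     # PTP patients (n = 33)
                  [14, 104]])    # OFP patients (n = 118)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
p_bonferroni = min(1.0, p * 155)  # Bonferroni factor = number of SNPs tested
print(f"chi2 = {chi2:.2f}, p = {p:.2e}, corrected p = {p_bonferroni:.3f}")
```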
Discussion
The present findings suggest that the rs216009 SNP of the CACNA1C gene is significantly associated with the vulnerability to PTP. Homozygote carriers of the minor C allele of the rs216009 SNP of the CACNA1C gene were significantly more likely to be affected by PTP.
Significant differences were found between PTP and OFP. The OFP group comprised painless trigeminal neuropathy, nociplastic pain, and neuropathic pain other than PTP (including painful trigeminal neuropathy, trigeminal neuralgia, postherpetic neuralgia, and NICO). To clarify differences between these OFP subgroups and PTP, we statistically analyzed differences between each subgroup and PTP, although the number of patients in each subgroup was small. The results (Supplemental Table S5) showed that PTP was significantly different from other neuropathic pain, nociplastic pain, and painless trigeminal neuropathy. These results suggest that, with regard to the rs216009 SNP of CACNA1C, PTP is a distinct entity that does not fit the diagnostic criteria defined by Marbach, although Marbach classified it as the same neuropathic pain. 1 This may be partially attributable to impairments in brain transmission in ACC regions and other areas.

In the Genotype-Tissue Expression (GTEx) database, the rs216009 SNP is located in the peak region of H3K27ac enrichment in Brodmann area 9 (BA9) of the human prefrontal cortex (PFC), heart, and muscle (Supplemental Figure S2). 13 H3K27ac is known to be involved in enhancer activity, suggesting that the rs216009 SNP is located in an enhancer region of the CACNA1C gene in BA9 of the PFC, heart, and muscle. Additionally, the ACC is closely associated with the cerebral cortex, including the PFC (BA9), which is interconnected with areas that are important for pain processing. 14-16 Cav1.2 LTCCs in the ACC are involved in observational fear learning (an affective pain system). 9 The ACC is an important brain region for the convergence of sensory and emotional information and has been reported to potentially mediate emotional responses to nociceptive stimuli. It has also been reported to exhibit anatomical and neurochemical changes in chronic pain patients. 16

In chronic pain and nociplastic pain, pain is enhanced by cerebral cortex activity via the periaqueductal gray (PAG)-rostroventral medulla system (the descending pain inhibitory system). 16 Therefore, one possibility is that tissue along the ACC-PFC-PAG pathway in the brain may also have enhancer activity at the rs216009 SNP site. The CACNA1C gene has been associated with various psychiatric disorders, 7,8 and many patients with PTP have been reported to meet DSM-5 diagnostic criteria for a somatoform pain disorder, often referred to as nociplastic pain. 1 The rs216009 SNP of the CACNA1C gene was shown to be significantly associated with PTP in the present study. Thus, the affective pain system may be involved in the development of CACNA1C-mediated PTP. Psychological factors may lead to the development of PTP by modulating the descending pain inhibitory system in the ACC-PFC-PAG pathway in patients who carry the homozygous C allele of the rs216009 SNP of the CACNA1C gene. However, further research is needed to elucidate the pathways involved in the development of PTP.
Genetic mutations of CACNA1C are also known to be a risk factor for posttraumatic stress disorder (PTSD). 17 Dopamine D1 receptors have been reported to be involved in the prolongation of remote fear memories and vulnerability to PTSD. Dopamine is involved in contextual fear memory, and Cav1.2 LTCCs are a downstream target of D1 receptor signaling. Bavley et al. used Cacna1c knockout mice to examine remote contextual fear after the onset of PTSD-like symptoms. 17 Their results suggested that Cacna1c expression inhibits fear memory recall and that Cav1.2 LTCCs may be responsible for neurogenesis in the hippocampus. Fear memories are involved in the emotional pain system, suggesting that the Cacna1c gene may be associated with the affective pain system through neurogenesis. The present study found that the rs216009 SNP of the CACNA1C gene is associated with PTP, suggesting that CACNA1C may be related to fear memory recall in PTP.
In neuropathic pain, nociceptive transmission has been reported to involve Ca2+ influx through Cav1.2 LTCCs in dorsal horn neurons of the spinal cord. 18 The rs216009 SNP of the CACNA1C gene, which encodes the α1C subunit of the Cav1.2 LTCC, 3 is located in an intron region and lies outside the LD blocks (Figure 1). CACNA1C transcription may be regulated by enhancer activity around the rs216009 SNP in the spinal trigeminal nucleus, although further studies are needed to confirm this possibility. The region around the rs216009 SNP may be involved in changes in CACNA1C expression as an enhancer of the cyclic adenosine monophosphate response element binding protein (CREB)-dependent promoter in the upstream region of the CACNA1C gene. 19 Ca2+ influx into cells through Cav1.2 channels has been shown to induce CREB activation, which depends on nociceptive activity. 18,20 The rs216009 SNP may be involved in enhancing CACNA1C expression, thereby increasing Ca2+ influx and CREB activation. CREB activation, in turn, would lead to further CACNA1C expression 19 and the activation of pain-related genes. 21 The activation of pain-related genes may have contributed to the vulnerability to PTP observed in the present study through both an increase in intracellular Ca2+ levels caused by upregulated CACNA1C expression and CREB activation. Although the findings of previous reports and the present study appear logically consistent, further research is needed to confirm the mechanisms underlying the vulnerability to PTP.

In the present study, C-allele homozygote carriers of the rs216009 SNP had a higher rate of PTP, suggesting that the C allele of the CACNA1C rs216009 SNP is associated with the vulnerability to PTP in an autosomal recessive manner. Allele frequencies of the rs216009 SNP of the CACNA1C gene in different regional populations and in the present study are as follows. According to the 1000 Genomes data in the dbSNP database, the rs216009 SNP has a T-allele frequency of 62% and a C-allele frequency of 38% in East Asian populations. 22 The subjects in the present study were Japanese, with allele frequencies similar to those of the general East Asian population (total patients: 58% T allele, 42% C allele). OFP patients also had allele frequencies similar to East Asian populations (63% T allele, 37% C allele). In contrast, PTP patients had a T-allele frequency of 41% and a C-allele frequency of 59%, a higher percentage of C alleles than in other regional populations (e.g., American populations: 78% T allele, 22% C allele; African populations: 84% T allele, 16% C allele; European populations: 90% T allele, 10% C allele; South Asian populations: 79% T allele, 21% C allele). Homozygous C-allele carriers of the rs216009 SNP had a higher incidence of PTP, suggesting that the C allele is associated with susceptibility to PTP. These results suggest that Japanese and other East Asian populations, because of their higher C-allele frequency, may have a higher risk of PTP than populations in other regions.
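The Hardy-Weinberg equilibrium check mentioned in the Results can be sketched as a one-degree-of-freedom goodness-of-fit test. The genotype counts below are hypothetical (only the CC counts are given in the text); the method is standard.

```python
# Hedged sketch of a Hardy-Weinberg equilibrium check: compare observed
# genotype counts against HWE-expected counts with a 1-df chi-square
# goodness-of-fit test. Counts are hypothetical (n = 151 to match the study).
from scipy.stats import chi2

def hwe_chi2(n_tt: int, n_tc: int, n_cc: int) -> float:
    """P-value for deviation from Hardy-Weinberg equilibrium (1 df)."""
    n = n_tt + n_tc + n_cc
    p_t = (2 * n_tt + n_tc) / (2 * n)          # T-allele frequency
    p_c = 1.0 - p_t
    expected = [n * p_t**2, 2 * n * p_t * p_c, n * p_c**2]
    stat = sum((o - e) ** 2 / e for o, e in zip([n_tt, n_tc, n_cc], expected))
    return float(chi2.sf(stat, df=1))

print(hwe_chi2(n_tt=50, n_tc=74, n_cc=27))     # hypothetical counts
```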
In conclusion, homozygous carriers of the C allele of the rs216009 SNP of the CACNA1C gene exhibited greater vulnerability to PTP, possibly through the regulation of intracellular Ca2+ levels and affective pain systems, such as those that mediate fear memory recall. Further research is needed to elucidate the precise mechanisms of PTP development.
"year": 2023,
"sha1": "200323a70e7023017448f2f4c6960018d2cd74f2",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "aa4dd478c28fac40f31d80e9ee46967bb636ad3c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |